Confederate patriots living in Little Rock were alarmed when the Union Army shattered the Confederate forces that attacked Helena on July 4, 1863, and a few weeks later began moving west. As the Federals slowly advanced toward Arkansas’ capital, some of the city’s wealthier families began leaving, many taking their slaves to safe havens farther south.
Ann McHenry Reider
Reider opened a mercantile business to sell groceries, dry goods, shoes, liquor, and whatever else consumers might want. Beginning in 1830, he conducted his business at a one-story building on the corner of Main and Market Streets, where he also lived. He prospered, and in the late 1830s, bought his first slaves. The 1840 census showed that he owned six slaves; by 1850 he possessed sixteen and in 1860 he had twelve. In the 1860 census, Reider was the richest German-speaking immigrant living in Little Rock. The self-assessed value of his real and personal property was over $1.2 million in current dollars. An “unlettered man” not active in local civil affairs, he was a devoted Catholic. In 1830, he attended the first Catholic mass conducted in Little Rock.
Jacob and Ann McHenry had married on April 30, 1833. Born in Tennessee in 1805, she came with her parents to Arkansas in 1818 “in a canvas covered wagon.” After the marriage, the couple built Little Rock’s first two-story building, a house near the corner of 2nd and Louisiana Streets. The widow and her slaves were still living there in 1863.
Advertisement for the Return of Charlotte
One day in the middle of August, as General Sterling Price was strengthening Little Rock’s fortifications in preparation for a Union Army attack, Charlotte bumped into Henry Jacobi, a 50-year-old German immigrant who had moved to Little Rock in about 1848. A couple of years later, he had opened a book bindery. In the decade that followed, he had expanded his Markham Street store to sell books and other assorted goods. Charlotte was thoroughly acquainted with Jacobi because, she later explained, “As my mistresses house in town was near his store, I often ran in there [Jacobi’s store] to buy little things before the war and got to know him well.”
Jacobi was an educated man interested in public affairs. A U.S. citizen since 1844, he was active in the “Sag Nicht” movement that in the middle 1850s sought to counteract the anti-immigrant and anti-Catholic Know Nothing party. Jacobi may have been Jewish, but likely was not. In 1845, while living in Philadelphia, Pennsylvania, he had married Sarah Ann Jewel (1826 – 1904), who was not Jewish. After moving to Arkansas, he was not active in Little Rock’s nascent Jewish community in the 1850s or the B’nai B’rith congregation that officially formed after the Civil War. None of his children were raised in the Jewish faith. Perhaps a freethinker, he apparently did not attend any church in Little Rock.
Portrait of Henry Jacobi
Although Jacobi worked hard to build his business from its modest beginning, he had mixed success and sometimes struggled to support his growing family – during the 1850s, he and his wife added five children to their household, including a set of twin girls. When he made extra profits from his business, he invested in real estate, buying large tracts of undeveloped land near the city. At the end of the 1850s, he encountered severe financial difficulties and ended up deeply in debt. To help financially, his wife opened a shop in 1859 next to his bookstore that first sold “hoop shirts” and, later, shoes.
Jacobi closed his store just as the Civil War was arriving. With a partner, he opened a beer garden and grocery store in May 1861 on about twelve acres of unincorporated fenced land he owned at the western edge of the city. He lived in a house on this land, a few blocks south of the state penitentiary (now the site of the state capitol), at what was 10th and High Streets before High Street was destroyed by Interstate 630. Jacobi initially called his establishment “Jacobi’s Garden,” but it became known as “Jacobi’s Grove.”
During the Civil War, Jacobi was quietly pro-Union, like many ethnic German immigrants living in Pulaski County. He said little publicly about his views but confided in a few close friends and some of the slaves he knew. For example, Shederick Parrish, who was in bondage until the Union Army occupied Little Rock, testified before the U.S. Southern Claims Commission in 1874 that Jacobi “always talked in favor of the Federal government and said the Yankees would lick the rebels at last. He would read the papers to colored men and tell us how things were going on.” Another former slave, Asa Richmond, who served on the Little Rock city council from 1869 to 1872, told the commission, “I have often spoken to him about the war, but he would not have much to say about it, for it was dangerous for a white man like him who was suspicioned and threatened to talk to a negro – he told me he was a union man. I know he dared not to do anything to show he was a loyal man….” A third former slave, Sol Winfrey, testified, “I believe from what I know of old man Jacobi that he is a union man and that he had to keep what he did a secret or he would have been taken out and hung.”
At the chance encounter of Charlotte and Henry Jacobi in August 1863, the German immigrant warned her, as she later related in her own words, that Mrs. Reider “was getting wagons and fixin to send us to Texas” the next day. Jacobi suggested, she said, that “I had better run off if I could, that the Federals would be in town soon….” Jacobi offered to help her.
Knowing that if she were taken to Texas, she would be beyond the reach of the Union army and the freedom it would bring to slaves in Little Rock, Charlotte ran away that night from Mrs. Reider. She was joined in her escape by six other slaves, including her two children, two other females, and two other children. The seven escapees hid in wooded land lying near the borders of Jacobi’s Grove. She later recalled, “[F]or three weeks we laid out in the woods, night and day, wet and dry, and along in the evening every day, Mr. Jacobi sent out a little girl to us with a bucket full of victuals. She would go up the hill like she was going for water and slip round to us in the bushes.”
By helping the escaped slaves, Jacobi put himself and his family in danger. If his actions had been discovered, he would have been arrested, or more likely would have been beaten or worse, and his property destroyed. According to Charlotte, Jacobi “was suspicioned of having us there for one night some rebel soldiers came out to his house. I was only 200 yards in the timber and saw it all as it was bright moon light, the men were on horses and surrounded the house, some them went in and made the old man get up, then they looked through the stable and everywhere – and when they could not find us they got mad and went down in the cellar and brought up all the barrels of wine and liquor, and after they drank all they wanted – they throwed the rest out.”
One night two weeks after that incident, Charlotte was at Jacobi’s house when a “Federal spy” arrived. He told her that the Union Army “would open the ball(?) at Bayou Meto next morning,” and advised her “not to stay in the woods because the rebels would catch us if we were there as they would scatter them all over.” Immediately, the seven escaped slaves moved to conceal themselves “under the colored Methodist Church.” Charlotte described what came next: “Sure enough next morning the cannons begun to fire, and about 10 o’clock the rebels began to leave there and kept it up till three, and about four o’clock I heard the clank of the cavalry sabers, and looked out and seen the men with blue coats, and I knew it must be the yankees.”
After the Union Army arrived on September 10th, Jacobi boarded Charlotte and her six companions for two weeks at his house as they began their lives as free people. They had avoided being taken to Texas, where most slaves were not freed until many weeks after the war ended in April 1865.
After her emancipation, Charlotte took Edwards as her family name or married a man whose last name was Edwards. Little is known about her life after she was freed. Her voice speaks through time only in her testimony before the Southern Claims Commission, where she told the story of her escape. She likely lived in Little Rock for the rest of her life (she was still living there in 1874 when she gave her testimony). Although it is not certain, she may be buried in Little Rock’s Fraternal Cemetery where more than 2,000 African Americans have graves. Among them are at least fourteen with the last name of Edwards who were buried before 1915. Their burials were recorded in the cemetery record book, but their graves are not marked, either because they have no tombstones or, if they do, any writing on them is illegible. One person listed in the cemetery record book is Lotte Edwards, who was buried on June 29, 1909. Perhaps she was the Charlotte who escaped from Mrs. Reider. If so, she lived the last half of her life as a free woman, reaching her eighties before her death.
Reider Burial Grounds
Unlike the post-war life of her former slave, that of Ann McHenry Reider is easy to trace. She resumed her life in Little Rock after the war with some of her wealth remaining. She continued to live at 2nd and Louisiana Streets in her house that was “all enclosed with green shutters” and had “an old-fashioned garden in which flowers bloomed in profusion” until April 1887 when she moved to a large home at 1406 Lincoln Street, which is now Cantrell Road. She occupied the house, later known as the “Packet House,” with the families of her daughters Cassie (1839-1931) and Amanda (1845-1920) who were married, respectively, to brothers Robert C. Newton (1840-1887) and Thomas W. Newton (1843-1908). Mrs. Reider overcame the trauma of losing her slaves to live a long life, dying in 1897 at the age of 93. According to one obituary, she was at the time of her death “the oldest resident of Little Rock.”
Like her husband, Mrs. Reider was a devout Catholic, and both are buried at Little Rock’s Calvary Cemetery. Their burial places are in a family plot marked by a marble monument more than a dozen feet tall that features the sculpture of a near life-size woman whose arm is draped over a cross. The sculpture stands on a massive base with Jacob Reider’s name and birth/death dates prominently inscribed on the front.
Jacobi stayed in Pulaski County for the rest of his life, sometimes living in the city but mostly residing on a farm about eight miles from Little Rock. After the war, he did not return to his bookbinding business but continued operating Jacobi’s Grove until about 1871. In addition to the hospitality business, Jacobi found government work. When the Union Army occupied Pulaski County, he signed on with its Provost General Office as a detective and a “secret service” member. For a few months after the end of the war, Jacobi served as the city’s appointed police chief. In 1866, he was elected the city’s constable and collector.
In 1868, Jacobi was elected county coroner at the same election at which voters approved a new state constitution. He was re-elected to that office in 1870 as part of the Brindletail ticket. Two years later, he ran for circuit and criminal court clerk, an elective county government office, but lost. After Reconstruction ended, he was defeated in his 1874 campaign to be elected a Justice of the Peace (JP) from Big Rock Township. However, he was appointed to fill a vacant JP seat a couple of months later, on Dec. 31st. During most of the decade that followed, he was known as ‘Squire Jacobi, and he presided over a JP court, later called a magistrate court, where people accused of breaking county laws were tried. He resigned from the court in December 1883.
The paltry salaries of his elected positions and the meager profits he earned from his beer garden and farm provided too little income to pay off his pre-war debts. In 1872, the Pulaski County Chancery Court forced him to settle the $7,000 debt owed to creditors in New York, Philadelphia, and Cincinnati by selling large amounts of land he had bought in the 1850s, including 320 acres located fifteen miles from Little Rock, 120 acres nine miles from the city, and three city blocks.
In the early 1870s, Jacobi filed a claim with the U.S. Southern Claims Commission for compensation for property (mainly lumber and animals) taken from him by the Union Army soon after it occupied Little Rock. (It was as part of the investigation of this claim that Charlotte Edwards was called as a witness in 1874.) His initial claim was rejected, but when he refiled it in 1876 with letters from Gen. Frederick Steele, who led the successful Union Army attack on Little Rock, and Sen. Powell Clayton, it was approved. He was awarded $821.50 of the $3,582 he requested. The commission had no doubts about Jacobi’s loyalty but questioned the value of the property taken from him.
‘Squire Jacobi, a respected citizen, died on January 23, 1887, a couple of weeks before his 74th birthday. His wife, Sarah Ann, lived to the age of 78, passing away on December 31, 1904 (the year on her tombstone is wrong). They share a marble headstone at Little Rock’s Mount Holly Cemetery. Jacobi was remembered in his obituary as “charitable, kind, and affectionate to everybody…. a true and warm friend always ready to help and assist.” Those characteristics, along with compassion, were evident in his good deed nearly twenty-five years earlier when – at some risk to himself and his family – he assisted Charlotte Edwards and six other slaves to gain freedom that would have been delayed at least twenty months without his help.
1. Mark K. Christ. 2010. Civil War Arkansas 1863. University of Oklahoma Press. See chapter 4 “The Battle of Helena” and Chapter 5 “The Campaign to Capture Little Rock.”
2. Reider’s obituary stated that he came to Arkansas “about 40 years ago.” “Obituary.” 1861. Little Rock True Democrat, Aug. 1, p. 2. His presence in Batesville is mentioned in “Early Times in Arkansas by N.” 1858. Weekly Ark. Gazette, Jan 9, p. 2.
Reider’s year of birth is uncertain. The date on his tombstone is 1776, which would have made him 85 years old when he died in 1861. His obituary stated he was 85. However, in the 1860 census, his age is given as 76. In the 1850 census, his age was listed as 53, and the 1840 census indicates that his age was between 40 and 49. According to the 1850 census, he and his wife had a three-year-old child, which means that if he were 85 years old in 1861, he would have been 71 when the child was born.
3. The exact day he arrived is mentioned by Fay Hempstead (p. 773) in Pictorial History of Arkansas from Earliest Times to the Year 1890, published in 1890. Accessed via Google Books.
4. His first advertisement in the Arkansas Gazette, which at that time was published at Arkansas Post, appeared on May 21, 1828. Because of the time needed to set up a store, Hempstead's arrival date (footnote 3) was likely not accurate. “New Goods.” (Adv). Ark. Gazette, May 21, 1828, p. 4.
5. According to the 1860 census, Reider owned real property worth $25,000 and personal property valued at $15,000. In 2020 dollars, these amounts were about $758,000 (real property) and $455,000 (personal property). I used the inflation calculator at http://www.in2013dollars.com/ to determine the 2020 values. The site estimates that $1 in 1860 had the purchasing power of $30.31 in 2020.
6. “St. Andrews Cathedral, Little Rock.” 1924. The Guardian (Official Organ of the Diocese of Little Rock), December 20, p. 8. Accessed at http://arc.stparchive.com/Archive/ARC/ARC12201924p08.php
In his obituary, Reider was described as follows: “An unlettered man, he was endowed by nature with remarkable mind and memory, and sound judgment.” “Obituary.” 1861. Little Rock True Democrat, Aug. 1, p. 2.
“Glimpses of Yesterday.” 1934. Ark. Gazette, Mar. 11, p. 30.
Calvin L. Collier. 1961. First In – Last Out: The Capitol Guards, Arkansas Brigade in the Civil War. Pioneer Press (Little Rock), p. 115.
“Jacobi’s Garden.” 1861. Weekly Ark. Gazette, July 6, p. 3. The advertisement stated:
Jacobi gave similar advice to Nelson Douglas, the slave of Confederate Army officer Col. Brooks. According to Douglas, a few days before the occupation, “[Jacobi] told me to remain in Little Rock and not to go south with Col. Brooks and the Confederate Army.” Douglas took the advice. On the day that the Union Army arrived, Douglas went to work for Jacobi, living at his place until June 1865. Testimony of Nelson Douglas in
searches of Ancestry.com, familysearch.org, newspapers.com, newspaperarchives.com, and genealogybank.com.
“Oakland and Fraternal Historic Cemetery Records,” accessed on familysearch.org.
“Glimpses of Yesterday.” 1934. Ark. Gazette, Mar. 11, p. 30 and Renton Tunnah. 1929. http://www.arkansaspreservation.com/National-Register-Listings/PDF/PU3243.nr.pdf For more on the Packet House, see
Mrs. Reider’s tombstone has the date of her death as November 16, 1898. However, her obituaries are dated 1897: “Mrs. Anna Reider’s Death.” 1897. Ark. Gazette, Nov. 16, p. 5 and “The Oldest Resident of Little Rock.” 1897. Forrest City Times, Nov. 19, p. 6. (The likely date of her death was Nov. 14, 1897; the Arkansas Gazette obituary published on Tuesday, Nov. 16, stated that her death was on the preceding Sunday.)
28. Jacobi’s Grove hosted many events, including the city’s first Maifest, held by ethnic Germans in 1867. Also, it was a popular venue for events held by the city’s former slaves. Jacobi sold this property in the early 1870s, but the name and venue remained in use into the 1880s. See
“The Late Henry Jacobi.” 1887. Ark. Gazette, July 5, p. 5 and “Mrs. S. A. Jacobi Dead.” 1905. Ark. Gazette, Jan. 1, p. 7.
How much noise do turbines generate?
The main source of sound from wind turbines is aerodynamic sound, which is created as air passes around the blades. This sound is heard as a swishing or whooshing near the turbines. Turbines can also produce mechanical sound from the generator and gearbox (if present), and the electrical transformer can be heard adjacent to the turbine.
At the typical distance of the nearest houses to a wind farm (500 m to 1 km) the overall sound is generally a bland, indistinct, low-level sound, sometimes compared to the sound of waves on a beach.
When standing underneath or in the vicinity of the wind turbines (within approximately 100 metres) the sound levels are typically in the range of 55 to 60 dB, which is similar to sound levels experienced during normal conversation between people.
At the nearest houses (eg 500m to 1km) sound levels from wind farms are usually in the range of 35 to 40 dB outside houses. These sound levels outside houses are similar to the sound levels normally experienced inside a quiet library, or from people talking in hushed voices.
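As a rough illustration of how sound levels fall with distance, the spherical-spreading rule (about 6 dB per doubling of distance) can be applied to the figures above. This is a simplified sketch, not part of the assessment: the function and reference values are illustrative, and real measured levels at 500 m to 1 km are typically a little lower than spreading alone predicts, because of air and ground absorption along the path.

```python
import math

def attenuated_level(level_ref_db, dist_ref_m, dist_m):
    """Sound level at dist_m from a point source, given a reference
    level at dist_ref_m, assuming spherical spreading only
    (a 6 dB drop per doubling of distance)."""
    return level_ref_db - 20 * math.log10(dist_m / dist_ref_m)

# Using the figures above: roughly 60 dB at 100 m from a turbine.
print(round(attenuated_level(60, 100, 500), 1))   # 46.0 dB
print(round(attenuated_level(60, 100, 1000), 1))  # 40.0 dB
```

With spreading alone, the predicted level at 1 km (40 dB) lands at the top of the 35 to 40 dB range quoted above; absorption accounts for much of the remaining difference.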
Does the geography of the area proposed for the turbines add to the noise they emit?
Topographic contours are integral to the acoustics computer model for the wind farm, and the interaction of sound waves with the ground between each wind turbine and each house is included in the calculations. At some frequencies the sound is partially absorbed by the ground, but at other frequencies it is amplified. These effects are included in sound level predictions.
Reflections from surrounding terrain in the wider area do not noticeably increase sound levels because of the increased propagation distance of sound travelling out to the valley side and back compared to the direct sound path, and losses due to absorption and scattering at the reflection from the valley side.
Also, the inclination of valley sides is such that sound is reflected predominantly upwards rather than down towards houses. Reflections from hillsides of impulsive or short duration sound are often clearly audible as echoes, but this does not relate to a significant increase in level for continuous wind farm sound.
In rural valleys people often refer to experiencing ‘amphitheatre’ effects. In many cases this relates to the fact that the area is quiet at times so sounds from surrounding activities are still audible at a significant distance. Another reason is that sheltered valleys can support the development of temperature inversions, under which condition sound propagation is enhanced. However, with respect to wind farms, strong temperature inversions only develop in stable air conditions which can only exist with low wind speeds when wind farms do not operate.
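To see why a valley-side reflection adds little to the level of continuous wind farm sound, the direct path and one reflected path can be energy-summed. Everything below is an assumed illustration (path lengths, the 3 dB reflection loss), not a figure from the acoustics model.

```python
import math

def combined_level(direct_db, extra_path_ratio, reflection_loss_db=3.0):
    """Energy-sum of the direct sound and one valley-side reflection.
    extra_path_ratio = reflected path length / direct path length.
    reflection_loss_db is an assumed loss from absorption and
    scattering at the valley side."""
    spread_loss = 20 * math.log10(extra_path_ratio)
    reflected_db = direct_db - spread_loss - reflection_loss_db
    return 10 * math.log10(10 ** (direct_db / 10) + 10 ** (reflected_db / 10))

# Direct path 600 m, reflected path 1400 m, assumed 3 dB reflection loss:
print(round(combined_level(40.0, 1400 / 600) - 40.0, 2))  # 0.38
```

Even with these generous assumptions, the reflection raises the received level by well under 1 dB, which is imperceptible for a continuous sound.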
Is there a difference in decibel level and tone between 90m and 207m wind turbine blades?
Yes - There are differences between different size wind turbines. When comparing modern designs, a single large wind turbine produces more sound than a single small wind turbine. However, a single large wind turbine produces less sound than multiple small wind turbines that would be required to generate the same electrical power.
Wind turbine rotation speeds are limited by the speed of the blade tip. A larger wind turbine therefore rotates at a slower speed than a small turbine, altering the timing of the blade swish heard when standing close to turbines.
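The tip speed limit implies a simple relationship: RPM = tip speed × 60 / (π × diameter). The sketch below treats the 90 m and 207 m figures from the question as rotor diameters and assumes a tip speed of 80 m/s purely for illustration; actual turbine dimensions and tip speeds vary by model.

```python
import math

TIP_SPEED = 80.0  # m/s, an assumed tip speed limit for illustration

def rotor_rpm(diameter_m, tip_speed=TIP_SPEED):
    """Rotational speed (RPM) that keeps the blade tip at tip_speed."""
    return tip_speed * 60 / (math.pi * diameter_m)

print(round(rotor_rpm(90), 1))   # 17.0 RPM
print(round(rotor_rpm(207), 1))  # 7.4 RPM
```

The larger rotor turns at well under half the speed of the smaller one, which is why the timing of the blade swish differs between turbine sizes.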
Noise effects vary between specific sites and are not universally better or worse with larger or smaller wind turbines.
What noise monitoring has been carried out?
In March 2017, monitoring was conducted at a property in Thorp Road and another in Rawhiti Road to gain an appreciation of the existing sound environment, used in conjunction with site observations. Further baseline measurements for establishing noise limits will be required prior to construction, and compliance measurements will be required when the turbines start operating. These additional measurements will be undertaken at three representative locations, proposed at properties in Thorp, Rawhiti and Rotokohu Roads.
Were different wind conditions and weather factored into the noise readings?
Yes – sound level measurements are analysed relative to the measured wind speed and direction in each 10-minute period during the survey. This is required by NZS 6808.
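A simplified sketch of this kind of analysis: each 10-minute sound level sample is paired with the wind speed measured over the same period, then grouped by wind speed. NZS 6808 actually specifies a regression of 10-minute levels against wind speed; the integer binning and all the data below are illustrative stand-ins, not survey results.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey data: (10-minute sound level in dB, wind speed in m/s).
samples = [(32.0, 4.2), (33.0, 4.8), (35.4, 6.1),
           (36.2, 6.4), (38.9, 8.0), (39.5, 8.3)]

# Group each 10-minute sample into integer wind-speed bins and average,
# a simplified stand-in for the wind-speed correlation NZS 6808 requires.
bins = defaultdict(list)
for level, wind in samples:
    bins[int(wind)].append(level)

for speed in sorted(bins):
    print(f"{speed} m/s: {mean(bins[speed]):.1f} dB")
```

Analysing levels per wind-speed bin matters because both the turbines and the background (wind in trees, for example) get louder as the wind picks up; comparing them at matched wind speeds is the whole point of the standard's method.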
Will an acoustics plan be prepared?
Prior to construction a prediction report in accordance with NZS 6808 will be prepared to confirm the sound from the final turbine type and layout (unless it is identical to the current assessment). A compliance assessment report will also be prepared once the wind farm is operating and will be submitted to the Council.
What impact will the turbines have on the ecology, fauna and birdlife in the area?
Environmental specialist, Kessels Ecology, was commissioned to undertake an ecological effects assessment of the proposed Kaimai wind farm and surrounding locality to determine existing ecological features and their relative sensitivity to the construction and operation of the proposed wind farm.
The field work for the investigation was undertaken from 2009 to 2017 enabling data to be collected across multiple years on the distribution and habitat utilisation of the locality by birds and bats. Further desktop analysis was undertaken to determine the effects of the proposal on aquatic freshwater biota, indigenous vegetation, lizards and terrestrial invertebrates. Below is a summary of the investigation –
Effects on Vegetation
The wind farm area can generally be described as a mosaic of rolling pasture land with a number of exotic plantations and indigenous forest remnants scattered throughout. Some 72% of the site is covered in pasture. Smaller stands of secondary broadleaved forest are mainly present within the gully systems in the northern half of the site, while larger areas of logged tawa forest remain along the eastern margin of the site (i.e. the Kaimai Ranges), as well as in the southern extent of the site and near the quarry at the north-western margin of the site.
While indigenous forest and scrubland is situated within 100 m of some of the turbine locations, all of the turbine centres are located on pastoral land, so no indigenous vegetation will be removed within the turbine footprint. No ecologically significant indigenous vegetation or nationally threatened plant species would be affected by the proposal.
The introduction of new weeds, diseases and the spread of existing weed species will need to be managed to protect the ecological health of the existing indigenous vegetation remnants in the locality.
All machinery and aggregate brought onto site will need to be cleaned, or otherwise guaranteed free of attached seed or plant matter before being brought on to site.
Provided due care and initial weed control is carried out as and when required, it is expected that the pasture or indigenous scrubland species will quickly gain a foot-hold and dominate vegetative cover along access road batters and cuts.
Effects on Freshwater Aquatic Habitats
No fish or aquatic macroinvertebrate habitats would be adversely affected provided appropriate sediment control measures are adopted. No upgrades to existing access stream crossings are proposed with the current roading design. Although water abstraction requirements have not been defined at this point in time, abstraction points should result in no more than minor adverse effects on in-stream biota provided suitable storage and/or non-fully allocated water sources can be devised and found.
Sediment control measures include, but are not restricted to, controlling run-off, preventing slumping of batters, cuts and side casting, maintaining slope stability, and contingency measures for heavy rainfall events.
Effects on Lizards, Frogs and Terrestrial Invertebrates
As no ecologically significant indigenous vegetation will be disturbed during the construction phase, adverse ecological effects on lizards and indigenous terrestrial invertebrates are likely to be minimal. However, it is possible that areas of non-ecologically significant vegetation (both exotic and indigenous) cleared or trimmed for infrastructure development or tower placement will include lizard and invertebrate habitat.
The consequential relatively minor adverse effects on these fauna groups can be managed through appropriate mitigation and monitoring measures. Details of these measures can be dealt with as part of the consent conditions.
Effects on Birds
According to international best practice guidelines a summary of the main bird habitat areas which should be avoided when locating a wind farm are: (1) Areas with a high density of wintering or migratory waterfowl and waders where important habitat might be affected by disturbance or where there is potential for significant collision mortality; (2) Areas with a high level of raptor activity, especially core areas of individuals breeding ranges and in cases where local topography focuses flight activity which would cause a large number of flights to pass through the wind farm; and (3) Breeding, wintering or migrating populations of less abundant species, particularly those of conservation concern, which may be sensitive to increased mortality as a result of collision.
The main bird groups impacted by wind farm developments internationally have been swans, geese, ducks, waders, gulls, terns, large soaring raptors, owls and nocturnally migrating passerines. Most resident bird species within the study site are common and widespread with the potential exceptions of New Zealand pipit, North Island kaka and New Zealand falcon, which are all found in the local area. There is a risk of collision with the turbine blades, especially along the forest edge. It is possible that New Zealand falcon and kaka will suffer occasional strike, particularly by the turbines along the forest edge of the Kaimai-Mamaku Conservation Park. Australasian bittern may also be at risk from strike while moving between the Bay of Plenty and Kopuatai Peat Dome. However, of these species, only pipit was detected during the bird surveys or by the acoustic surveys, so while non-detection does not necessarily mean these birds are absent from the locality, it does suggest that they may be present in low densities. While the ability of these key forest and wetland bird species to adapt to the turbines and become accustomed to associated noise and movement is likely, and the birds should be able to fly around the turbines to gain access to other remnant bush areas within the locality, there is a likelihood that strike will occur from time to time.
There is insufficient data for this site to determine the strike level, but modelling and carcass searches at other similarly situated New Zealand wind farms suggest strike rates will be low. Nonetheless, the local effects of this mortality may be more than minor on threatened species, so some form of offset mitigation, such as a contribution to local animal pest control to increase bird productivity, is recommended.
The impact of the wind farm on migratory birds is dependent on any flight path these species may take between key habitats in the Bay of Plenty and Firth of Thames. Wader and shorebird species, such as bar-tailed godwit, wrybill and South Island pied oystercatcher, may move between the Firth of Thames and Tauranga Harbour on a regular basis and in doing so traverse the proposed windfarm footprint. The sound recorders detected two flocks of South Island pied oystercatchers crossing the proposed wind farm site on one occasion in January 2013, from a total recording effort of some 4,000 hours. These detected South Island pied oystercatchers were crossing the southern section of the windfarm over the Kaimai range. This indicates that the site is likely part of a seasonal commuting route for waders between the Hauraki Gulf and Tauranga Harbour.
Initial strike risk analysis at similar New Zealand sites indicates that turbine strike is possible for wader species, likely in the range of two to five birds per annum or fewer for the proposed Kaimai wind farm. This level of strike risk is considered to have a minor adverse effect on the target shorebird species. However, given that several of these species, such as wrybill, are threatened, offset mitigation may be required to compensate for any residual adverse effects on wader bird species. Quantification of this offset can be addressed at the consenting stage, but could involve a contribution to conservation activities by community groups at Miranda, which is a key site for international and national wader birds.
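The arithmetic behind expected-strike estimates like this can be sketched with a highly simplified collision model, in the spirit of approaches such as the Band model. Every input below is an assumed placeholder for illustration, not a figure from the Kaimai assessment.

```python
# Simplified expected-strikes calculation. All inputs are assumptions:
passages_per_year = 2000         # assumed bird transits through the rotor-swept area
p_collision_no_avoidance = 0.05  # assumed chance a non-avoiding transit is struck
avoidance_rate = 0.95            # assumed fraction of transits where the bird avoids

expected_strikes = (passages_per_year
                    * p_collision_no_avoidance
                    * (1 - avoidance_rate))
print(round(expected_strikes, 2))  # 5.0 with these assumed inputs
```

The result is dominated by the avoidance rate, which is also the hardest input to measure; this is one reason carcass searches under operating turbines (as recommended below) are used to check model-based estimates.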
Effects on Bats
The nationally threatened North Island long-tailed bat is known to be present within the Kaimai Ranges and was detected during the surveys for this proposal. The survey results showed long-tailed bat activity during 4-17 January 2013, and from 22 September to 27 October 2015 at the study site. In the 2015 survey, eight (63%) of the surveyed sites contained long-tailed bats, while in the 2013 survey 11 sites (55%) contained bats. In total, bats were detected at 19 (59%) of the surveyed sites. No publicly accessible studies have investigated the impacts of wind farms on the spatial use of either of New Zealand’s native bat species, so it is not clear whether avoidance behaviour occurs in either species.
Based on a review of international studies it is considered possible that long-tailed bats will suffer mortality as a result of interactions with the turbines. Thus, bats are considered to be at moderate risk of being killed or injured by turbine strike at the proposed wind farm site. A combination of habitat restoration and pest control would enhance the local North Island long-tailed bat population, producing a healthy source population which could mitigate against any declines at the proposed wind farm site.
Avoidance, Remediation and Mitigation Recommendations
The proposed Kaimai wind farm is situated within a largely pastoral environment, heavily modified by human activities and animal pests. No ecologically significant or legally protected natural features will be directly affected by the proposed wind farm. However, there are several threatened birds and one bat species which could be adversely affected by the turbines in the form of turbine blade strike. The biodiversity consequences of this risk are low to moderate at a local level, and the effects are likely to be minor at a regional, national and international scale.
It is recommended that measures are taken to avoid, remedy or mitigate the adverse effects of turbine strike on these key animals and their habitats, as well as address the localised potential adverse effects associated with construction. A range of measures that will avoid, remedy or mitigate the adverse effects of the project (inclusive of the wind turbines, access roads and the transmission lines) are required. They should include:
- Ensuring all aspects of the construction and operation of the wind farm minimise any potential adverse effects associated with indigenous flora and fauna habitat disturbance, sediment runoff, water abstraction and stream crossings (if any);
- Preparation and implementation of a mitigation package to compensate for potential turbine strike on key indigenous fauna which incorporates enhancing productivity of the target species through ongoing animal pest control and ecological enhancement of targeted natural features; and
- Monitoring of key fauna species, as well as carcass searches under the operational turbines, for a specified period, in order to ensure that the risks associated with the operation of the wind farm are low and to allow for adaptive management risk minimisation contingencies if required.
What impact have windfarms had on property values where they have been established in New Zealand?
Research carried out by Colliers International indicates that, on the New Zealand evidence to date, there will be no or negligible long-term negative impact on the values of rural properties surrounding the proposed Kaimai Wind Farm caused by the wind farm being visible from those properties or parts of them.
In summary, studies have shown that the impact of nearby wind turbines on property values may differ by property type. Rural properties have been shown to be the least affected of all, and in some studies were affected positively. Lifestyle blocks near cities, often occupied by city office workers, may potentially be affected if turbines are within hearing distance of, or very close to, dwellings. At other lifestyle locations away from cities where wind farms have been established nearby, such as Te Apiti near Palmerston North, no fears over value erosion have arisen or been expressed in the resource consent process. It appears, and this is borne out by anecdotal experience, that residents largely support the environmental benefits derived from sustainable electricity generation.
In conclusion, Colliers’ introductory study has confirmed earlier findings that there are no discernible negative value impacts on rural property values caused by wind farms being visible to parts of properties.
What is the predicted traffic use on local roads?
Kaimai Wind Ltd is proposing that extra-heavy transportation be limited to one route – Rawhiti Road – to contain effects and need for bridge and roadside upgrades.
Eight to 16 tonne truck units may use Rotokohu Road and Rawhiti Road.
Lighter traffic – utes, cars and light trucks (less than 8 tonne) – may use Rotokohu Road which is convenient for staff accommodation and supply of equipment from outlets in Paeroa.
How will the local community benefit from the establishment of the windfarm?
We expect a good level of commerce will be generated in Paeroa during pre-construction, construction and commissioning of the wind farm.
At least two staff will reside in Paeroa and we will also establish a warehouse in the town to store key parts and consumables. There will therefore be advantage to the local community from personnel living in town and from local people being employed and trained for the wind farm.
The rating base for the Hauraki District Council will also increase with potential benefit via council services.
Aren’t there more remote locations where the windfarm could be established?
There are many remote locations in New Zealand; however, the main constraint when it comes to developing a wind farm is remoteness from a grid connection and transport routes. To justify a remote wind farm (which has a high cost of grid connection and roads), projects have to be larger – often much larger – e.g. the now-cancelled HMR project on the west coast of the Waikato. A wind farm in New Zealand needs to be of moderate scale (to fit into a demand gap in the market) and needs to be close to roads and a grid connection. It also needs to have an excellent wind resource and be consentable.
How will you keep the local and wider community informed?
Communication is a two-way path – the first part is ours, providing you with regular updates on what is happening so you feel informed. The second part is yours – if you have questions or concerns, let us know so we can answer them.
One tactic won’t achieve the level of engagement we want with the local community so we will be using a variety – from regular update letters to neighbours, to regular updates on our website, public meetings and via local media. Our aim is to be as transparent as possible so you understand what is proposed for your district.
Will the turbines be lit at night?
The wind farm is likely to have lighting that complies with the requirements of CAANZ Rule Part 77.21(d) and Appendix B, and to be marked on aeronautical charts. This would be a CAANZ decision.
Do you plan to extend the windfarm beyond the current proposal?
There are currently no plans to extend the wind farm beyond the current proposal.
Will public meetings be held to provide local people with an opportunity to have their questions answered?
Public meetings have a place in public consultation and engagement – not simply as a means for us to tell you about the project, but to provide you with an opportunity to meet the people behind the project, and have your questions and concerns answered. We have conducted a number over the years and have also met – and will continue to meet – with residents in their own homes so we can experience and understand their concerns.
Are you talking with local Iwi?
Yes, we are working with local iwi to support the development of Cultural Values Assessments (CVA).
A CVA is a way to recognise and provide for the relationship of iwi and their culture and traditions with their ancestral lands, water, sites, wahi tapu and other taonga and to assess how any adverse effects could be avoided, remedied or mitigated.
Ngāti Tara Tokanui submitted its draft CVA at the end of October 2020 and CVAs are currently (May 2021) being endorsed by Ngāti Tamaterā, Ngāti Rahiri Tumutumu and Ngāti Hako. Once received, a hui will be held with iwi to discuss how any adverse effects can be avoided, remedied or mitigated.
The Kaimai Range is popular with paragliders – what steps are you taking to talk to, and answer, their concerns?
We have had a number of conversations with the local Soaring Club and with commercial and recreational flyers and, as a result, reduced the number of turbines from 26 to 24 to accommodate flight paths. We would also consider shutting down specific turbines during gliding competitions.
Peet Aviation also conducted a comprehensive aviation report which concluded that the proposed wind farm will not represent a physical obstacle to glider operations over the proposed site. Likewise, turbulence and wind shear will not be an issue when wind speeds in the area are approximately 16 knots, which is the norm. Glider operations over the proposed site may, however, be affected when wind speeds are more than 20 knots – although this would account for potentially 15% of the time, and needs to be considered against the fact that glider activity would remain viable, subject to pilots conducting flights in a safe and secure manner at an appropriate altitude.
What considerations are you able to give for people who have an emotional or special affiliation with the area proposed for the turbines?
We understand that people may have emotional connections to the land that we are proposing for the wind farm. If you, or someone you know, has particular concerns about any area of the proposed windfarm (see attached map), then we want to know. Please contact us via the website.
What is the proposed timeline for the proposal?
The consent application has been lodged with the Hauraki District and Waikato Regional Councils and was notified in December 2018. This enabled the sharing of a range of detailed reports and analyses on the project empowering the public to make submissions on the proposal.
Submissions closed on 31 January 2019. A range of pre-hearing meetings were held in 2019 to give submitters an opportunity to discuss and narrow down the issues to be heard at a later hearing. The hearing itself is expected to take place in the third quarter of 2021 and will give submitters the opportunity to present their views verbally to independent commissioners who will be appointed by both councils to make a decision on the proposal.
You can check out the application here: https://www.hauraki-dc.govt.nz/services/resource-consents/kaimai-wind-farm-project/
Got any questions?
If you have any questions about any aspect of the proposal to construct and operate a wind farm on the lower Kaimai Ranges, please let us know – simply fill out the form on the website www.kaimaiwind.nz and we will respond to you directly and include your question and our answer in this Q&A. | <urn:uuid:8a4e4f48-d8bd-488a-9348-e5ea0dabfa24> | CC-MAIN-2021-21 | https://877643000965973312.weebly.com/qna.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00095.warc.gz | en | 0.948737 | 4,721 | 3.828125 | 4 |
2 Technical feasibility of low carbon heating
The suitability of domestic low-carbon heating depends on the available heating technologies and on their compatibility with various characteristics of the homes in question. The following sections illustrate the low-carbon heating technologies considered in this study and factors influencing their suitability.
2.1 Low-carbon heating technologies
The range of low carbon heating systems considered in this study is summarised in Table 1, followed by a brief explanation of the technologies and the assumptions considered in this study.
Table 1: Low-carbon heating technologies considered in this study
Heat pumps
1 Air Source Heat Pump (ASHP)
2 Ground Source Heat Pump (GSHP)
3 High-temperature ASHP
4 High-temperature GSHP
5 Communal ASHP
Electric resistive heating
6 Electric storage heating
7 Direct electric heating
8 Electric boiler
Bioenergy boilers
9 Solid biomass boiler
10 BioLPG boiler
11 Bioliquid boiler (B100)
Low carbon gas
12 Hydrogen boiler
13 Biomethane grid injection
Hybrid heat pumps
14 Hybrid ASHP + gas boiler (no hot water cylinder)
15 Hybrid ASHP + gas boiler (with hot water cylinder)
16 Hybrid ASHP + bio-liquid boiler (no hot water cylinder)
17 Hybrid ASHP + bio-liquid boiler (with hot water cylinder)
18 Hybrid ASHP + hydrogen boiler (no hot water cylinder)
19 Hybrid ASHP + hydrogen boiler (with hot water cylinder)
20 Hybrid ASHP + direct electric heating (no hot water cylinder)
21 Hybrid ASHP + direct electric heating (with hot water cylinder)
22 District heating
Combinations with solar thermal
23 ASHP + solar thermal
24 Electric storage heating + solar thermal
25 Direct electric heating + solar thermal
26 Electric boiler + solar thermal
The installation of PV technologies, thermal storage or electrical storage was not modelled in our study; these options are expected to increase the capital cost but generally reduce the operating costs of the heating system.
Domestic heat pumps are central heating systems that absorb heat from outside a building and transfer it inside by means of a refrigerant fluid. The main components of a heat pump are an evaporator, a compressor, a condenser and an expansion valve. The refrigerant is circulated through the evaporator, where it extracts heat at low temperature from the outside source, to the compressor, where its temperature and pressure are increased. The refrigerant then flows through the condenser, where it releases heat at high temperature to the inside of the building. After circulating through the expansion valve its pressure is finally reduced and the cycle can restart. The energy requirement of a heat pump corresponds roughly to the electrical power needed to compress and circulate the refrigerant. The performance of a heat pump is highly dependent on the temperature of the outside source and on the temperature at which heat is delivered through the wet system.
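The strong dependence of heat pump performance on source and delivery temperatures can be sketched with a simple Carnot-fraction model. This is an illustrative calculation only; the 45% Carnot fraction is an assumed figure, not taken from this study:

```python
def cop_estimate(t_source_c: float, t_sink_c: float,
                 carnot_fraction: float = 0.45) -> float:
    """Rough heat pump COP as a fraction of the ideal Carnot COP.

    t_source_c -- outside source temperature (air or ground), degC
    t_sink_c   -- flow temperature delivered to the wet system, degC
    carnot_fraction -- assumed real-world fraction of the Carnot limit
    """
    t_sink_k = t_sink_c + 273.15
    # The temperature difference is the same in kelvin and Celsius.
    carnot_cop = t_sink_k / (t_sink_c - t_source_c)
    return carnot_fraction * carnot_cop

# COP falls as the gap between source and delivery temperature widens:
cop_mild = cop_estimate(8, 35)     # mild day, low-temperature emitters
cop_cold = cop_estimate(-10, 65)   # cold day, high-temperature delivery
print(round(cop_mild, 2), round(cop_cold, 2))
```

Plugging in the temperatures quoted in this section shows why a conventional heat pump paired with 35-40°C emitters performs far better than one forced to deliver 65°C on a cold winter day.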
Technologies considered in this study are air source heat pumps (ASHPs), absorbing heat from the outside air, and ground source heat pumps (GSHPs), extracting heat from the ground through either a horizontal or a vertical closed ground loop. Both air source and ground source heat pumps operate at optimal performance when producing heating at low temperatures. For comparison, conventional heat pumps deliver space heating with optimal efficiency at temperatures around 35-40°C, while gas boilers are designed to efficiently deliver space heating at temperatures from 60°C up to 80-90°C. The adoption of heat pump heating therefore also requires the installation of emitters that are larger than those used in a gas central heating system, to ensure sufficient heat is transferred from the low-temperature circulating water to the heated spaces. Additionally, in homes where the pre-existing wet system is composed of very narrow pipes, the installation of a conventional heat pump may also require the installation of new, wider pipes capable of delivering a high flow rate.
Additionally, a high-temperature heat pump is assumed to be capable of producing output temperatures of 65°C. While its upfront cost is generally higher than for a conventional heat pump, a high-temperature heat pump can deliver space heating at a temperature closer to that of a gas boiler. Unlike conventional heat pumps, high-temperature heat pumps are therefore unlikely to require the installation of radiators larger than those used in gas boiler heating systems.
Finally, a communal ASHP system refers to a single ASHP unit delivering heat to multiple flats or terraced houses, assuming a network shared between 6 dwellings.
This study assumes that, with the installation of heat pump technologies, space heating demand is met by the heat pump itself, while hot water demand is met by electric on-demand devices, such as electric taps.
Electric resistive heating
Direct electric heating involves the production of heat from electricity through a resistive element and its delivery via radiators, panel heaters or infrared heaters.
Panel heaters and electric radiators are convector heaters, as they heat the air directly and generate passive convection currents that transfer heat across a room. Infrared heaters, or radiant heaters, transfer heat predominantly via infrared radiation to the surfaces in a room, while the surrounding air is heated indirectly by the room's warm surfaces.
For this study, direct electric heating is one of the investigated low-carbon heating options, but also one of the counterfactual heating technologies already present in Scottish homes. Only convector heaters are therefore considered for direct electric heating and radiant heaters are not included, as their use is not very common in Scottish homes. Electricity use of direct electric heating is assumed to occur during the day and is therefore subject to the higher tariffs of peak-time electricity.
Electric storage heating also produces heat from electricity through a resistive element, but typically does so overnight, taking advantage of the lower electricity tariffs during off-peak times. The heat is absorbed and stored by high thermal mass bricks and later released during the day by a fan blowing air over the heated bricks. An independent heating unit is installed in each room.
Electric boilers produce heat from electricity and transfer it to water, delivering space heating through a wet heating system, either through radiators or through underfloor heaters. Additionally, the boiler may also produce hot water when combined with a hot water cylinder.
This study assumes that with the installation of electric resistive heating technologies hot water demand is met by electric on-demand devices, such as electric taps.
Bioenergy boilers
Bioenergy boilers operate in the same way as a conventional natural gas or LPG boiler, burning fuels to heat water in a wet heating system.
A solid biomass boiler can burn wood pellets, wood chips or logs to heat up water and deliver space heating via a wet heating system or produce hot water in combination with a hot water cylinder, similar to a conventional gas or electric boiler. Solid biomass requires a large availability of storage space, determined by the relatively low energy density of the fuel and by fuel delivery logistics.
A bioLPG boiler is no different from a conventional LPG boiler. Evidence suggests that biopropane can be used as a drop-in fuel in LPG boilers without the need for adaptation. The use of this technology requires the installation of a gas cylinder for the storage of bioLPG.
The use of a bioliquid boiler (B100), burning 100% biodiesel, was also investigated. While the overall configuration of a bioliquid boiler is similar to that of a standard oil boiler, bioliquid cannot be used as a drop-in fuel in existing oil boilers unless it is part of a fuel blend (e.g. B30K, composed of 30% biodiesel and 70% kerosene). An oil boiler utilising 100% biodiesel requires a few dedicated adaptations, such as an optimised burner design. Additionally, the installation of a preheated fuel tank may be required, as biodiesel must generally be stored at a temperature between 5°C and 15°C to ensure it maintains a low viscosity. Particular attention must also be paid to the compatibility of the materials used in the boiler, pipes and storage tank that come into contact with the biodiesel, as some have been reported to degrade more easily than when exposed to conventional diesel.
The use of domestic resources for bioenergy in Scotland has the potential to more than double from the current value of 6.7 TWh per year to 14 TWh per year by 2030. However, there is strong market competition and practical constraints which limit the availability and suitability of certain feedstock types. This report has not taken into consideration the availability of bioenergy feedstocks.
Low-carbon gas boilers
Low-carbon gas boilers are heating devices burning low-carbon fuel delivered by the gas grid. The main options that are commonly considered for the decarbonisation of the gas grid are the use of hydrogen or biomethane, either to be used pure or to be blended with natural gas.
Hydrogen boilers investigated in this study are assumed to burn 100% hydrogen. Compared with the combustion of natural gas, the technical challenges of burning hydrogen relate to a higher flame speed and the associated risk of light-back, higher NOx formation, and a higher risk of explosion of unburned gas. The layout of the burner and other components of a hydrogen boiler are therefore adapted to accommodate these technical requirements.
Biomethane grid injection consists of blending a portion of biomethane into the gas grid. The type of heating technology required in the case of biomethane grid injection will depend on the future decarbonisation of the gas grid. Partial or total decarbonisation could be achieved in future through the supply of a gas blend composed of hydrogen and natural gas in varying proportions; in the case of biomethane grid injection, a portion of biomethane would also be added to the blend. While blends with a hydrogen concentration below 20 mol% are expected to be compatible with combustion in conventional gas boilers, blends with a higher hydrogen content would require the installation of a hydrogen boiler.
Hybrid heat pumps
Hybrid heat pumps are low-carbon heating systems that combine a heat pump with a different heating technology, thus integrating the low-carbon performance of a heat pump with the reliability of an additional heating unit as backup for the colder winter months. As heat pump efficiency depends on both the outside temperature and the temperature at which it delivers heat, the two technologies of a hybrid system are operated alternatively, choosing the technology that offers the highest efficiency and level of thermal comfort at a given time.
The hybrid heat pump systems considered in this study combine an ASHP with either a gas boiler, bioliquid boiler, hydrogen boiler or direct electric heating. Their suitability was analysed both in combination with a hot water cylinder or standalone with no production of hot water. It is assumed that 80% of the annual space heating demand is met by the heat pump and the remaining 20% by the additional heating unit. Hot water demand is entirely met by the additional heating unit, except for hybrid heat pumps with direct electric heating, for which hot water demand is assumed to be met by electric on-demand devices, such as electric taps.
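The splitting assumptions above reduce to a small calculation. The demand figures used below are illustrative only, not taken from this study:

```python
def hybrid_energy_split(annual_space_heat_kwh: float,
                        annual_hot_water_kwh: float,
                        hp_share: float = 0.80) -> tuple[float, float]:
    """Split annual demand between the heat pump and the backup unit.

    Report assumption: the ASHP meets hp_share (80%) of space heating;
    the additional heating unit meets the remainder plus all hot water.
    """
    hp_kwh = hp_share * annual_space_heat_kwh
    backup_kwh = (1 - hp_share) * annual_space_heat_kwh + annual_hot_water_kwh
    return hp_kwh, backup_kwh

# e.g. a home with 10,000 kWh/yr space heating and 2,500 kWh/yr hot water:
hp_kwh, backup_kwh = hybrid_energy_split(10_000, 2_500)
```

For the hybrid-with-direct-electric variant, the hot water term would instead be served by on-demand electric devices, per the assumption stated above.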
District heating
District heating networks deliver heat from a common energy source to a large number of homes through a pipe network. A low-carbon heat network can be operated using a range of technologies such as a heat pump, biomass boiler, or solar thermal unit, or by recovering waste heat from industrial processes. Centrally generated hot water or steam is distributed through an underground pipe network and is delivered to a heat exchanger in each home to produce space heating and hot water on demand.
Combinations with solar thermal
Solar thermal collectors can be installed alongside various heating technologies to support the production of hot water. Considered technologies in this study are the combinations of solar thermal with ASHP, electric storage heating, direct electric heating and electric boilers, all requiring the connection to a hot water cylinder to supply hot water. For these combinations, it is assumed that 60% of hot water demand is delivered by the solar thermal system and the remaining 40% is met by the heating system.
2.2 Factors influencing suitability of low-carbon heating
2.2.1 Technical factors
The suitability of homes for the low-carbon heating technologies considered in this study is determined by a range of potential barriers.
Lack of internal space for the installation of large units or large hot water cylinders can affect the suitability of homes for heat pumps and other heating technologies associated with a hot water cylinder.
Scarce availability of external space can impact the suitability for installation of external components of the heating system, such as a horizontal ground loop for a ground-source heat pump, a gas cylinder for the storage of bioLPG, or a biofuel tank required by a bioliquid boiler. Additionally, it can constitute an obstacle to the implementation of biomass heating, which requires external space for the storage of the fuel. Finally, the lack of wall space for an external unit or a suitably orientated roof can influence the suitability of an air-source heat pump or solar thermal collectors.
Communal heat pump systems are most cost-effective when installed in homes located close to each other, such as terraces and flats, due to the lower cost of piping and associated groundwork.
Peak heat demand and peak specific heat demand are two important parameters influencing the suitability of various low-carbon heating technologies. Peak heat demand is here defined as the maximum heat demand of a home at a given time, typically occurring on the coldest winter day and measured in W. This measures the amount of heat that must be supplied to a home to maintain thermal comfort. Peak specific heat demand is calculated as peak heat demand divided by the total floor area of the habitable rooms and is measured in W/m2. A large specific heat demand is generally associated with homes that are poorly insulated and/or located in cold climates. A large heat demand, on the other hand, can result from a large specific heat demand, a large dwelling size, or both.
Heat pumps in dwellings with a large peak specific heat demand (typically above 150 W/m2) are at risk of not meeting thermal comfort, as meeting demand would require the installation of very large radiators and/or the heat pump to produce space heating at a higher temperature, with reduced energy performance. The average peak specific heat demand across Scottish homes is 87 W/m2, and only ~1% of Scottish homes are estimated to currently have a peak specific heat demand above 150 W/m2.
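The screening described above can be written as a simple check. The 9 kW / 100 m2 example home is hypothetical:

```python
def peak_specific_heat_demand(peak_heat_demand_w: float,
                              floor_area_m2: float) -> float:
    """Peak heat demand divided by habitable floor area, in W/m2."""
    return peak_heat_demand_w / floor_area_m2

def heat_pump_comfort_risk(specific_demand_w_m2: float,
                           threshold_w_m2: float = 150.0) -> bool:
    """Flag homes above the ~150 W/m2 threshold at which heat pumps
    risk failing to maintain thermal comfort."""
    return specific_demand_w_m2 > threshold_w_m2

# A hypothetical 100 m2 home with a 9 kW peak demand sits at 90 W/m2,
# close to the Scottish average of 87 W/m2 and below the threshold:
specific = peak_specific_heat_demand(9_000, 100)
at_risk = heat_pump_comfort_risk(specific)
```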
Additionally, a large peak heat demand may make a home unsuitable for any technology that generates heat from electricity, such as direct and storage electric heating, heat pumps and hybrids. The large peak heat demand of cold winter days may result in an electricity demand that triggers the fuse limit of the building, rendering it unsuitable for electric heating technologies. While the efficiency of direct heating and storage heating is assumed to be 100%, the performance factor of heat pumps is expected to decrease with the external temperature. Therefore, while on cold winter days electricity demand will generally increase due to a larger space heating demand, in the case of heat pumps the increase will be exacerbated by the reduced performance of the heating technology. In this study it was assumed that heat pump technologies operate at an average external temperature of ~8°C and a minimum external temperature of -10°C.
While there is not sufficient information available on the fuse limit of individual Scottish homes, it is assumed that typical values lie in the range of 30A to 100A, the latter being the maximum fuse rating available for a single-phase domestic connection. Load increases up to 100A generally involve the replacement of the fuse alone and are associated with little to no cost, depending on the state of the connection cables and on the network operator (not exceeding a few hundred pounds). For load increases above 100A, an upgrade to a three-phase connection is also possible for individual dwellings. This is, however, associated with significant costs, expected to be of the order of a few thousand pounds, and may require several weeks to complete, especially if a permit for digging the power cables is required from the local authority. Where an upgrade of the fuse limit would prove too costly or undesirable, an alternative option would be the installation of an electric battery, to support the supply of power to the heating device, or of a heat battery, to support the delivery of heat to the home alongside the heating system.
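A back-of-envelope check of whether an electric heating system would approach a single-phase fuse limit might look like the following sketch. A 230 V supply is assumed, and power factor and load diversity are ignored; the 12 kW example demand and the COP of 2 are illustrative values:

```python
def heating_current_a(peak_heat_w: float, cop: float = 1.0,
                      voltage_v: float = 230.0) -> float:
    """Current drawn by an electric heating system at peak heat output.

    cop = 1.0 for direct or storage electric heating; for heat pumps use
    the cold-weather COP, which is reduced at low external temperatures.
    """
    return (peak_heat_w / cop) / voltage_v

def exceeds_fuse(heating_current: float, other_loads_a: float,
                 fuse_rating_a: float = 100.0) -> bool:
    """True if heating plus other household loads exceed the fuse rating."""
    return heating_current + other_loads_a > fuse_rating_a

# For a 12 kW peak heat demand, direct electric heating draws roughly
# twice the current of a heat pump achieving a COP of 2 on the coldest day:
direct_a = heating_current_a(12_000)
heat_pump_a = heating_current_a(12_000, cop=2.0)
```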
The implementation of heating technologies that rely on electricity may not only face suitability obstacles in certain homes but may also represent a burden for the distribution network. Additional costs for network reinforcement must be considered.
In this study, peak heat demand was estimated from the yearly heat demand, assuming a peak heating load factor of 16%. In other words, peak heat demand was assumed to be the power that the heating system would provide if it were operating for about 3 hours and 50 minutes per day and delivering the yearly heat demand over the course of one year. Note that this assumption has a large impact on the assessment of the number of homes that may be affected by peak heat demand constraints: a larger peak heating load factor would result in a smaller peak heat demand, and therefore also a smaller number of homes in which the implementation of electric resistive heating or heat pumps may trigger the fuse limit or contribute to the risk of not meeting thermal comfort.
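The peak demand estimate described above reduces to one line of arithmetic. The 14,000 kWh annual demand used below is an illustrative figure:

```python
HOURS_PER_YEAR = 8760

def peak_heat_demand_kw(annual_heat_kwh: float,
                        load_factor: float = 0.16) -> float:
    """Peak heat demand implied by an annual heat demand and a peak
    heating load factor. With a 16% load factor the system effectively
    runs at peak output for 0.16 * 24 h = 3.84 h (about 3 h 50 min)
    per day over the year.
    """
    return annual_heat_kwh / (HOURS_PER_YEAR * load_factor)

# e.g. 14,000 kWh/yr of heat implies a peak demand of roughly 10 kW;
# assuming a larger load factor yields a smaller estimated peak:
peak_16 = peak_heat_demand_kw(14_000)
peak_25 = peak_heat_demand_kw(14_000, load_factor=0.25)
```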
Air-source heat pumps located close to the sea are subject to a reduced lifetime, due to the accelerated corrosion of the heat exchanger caused by the salinity of air. Malfunction of the heat pump can be prevented by applying a coating on the heat exchanger; however, this adds to the capex costs of the appliance and may increase operational costs due to maintenance.
Local geological characteristics may influence the implementation of GSHP, impacting the suitability for the installation of a vertical ground loop.
Gas grid and district heating network proximity
Dwellings located in areas away from the gas grid are not suitable for hydrogen boilers and biomethane grid injection. Similarly, the connection to a district heating network may not be available for homes in areas of low heat density.
Air quality restrictions
Restrictions on air quality may affect the suitability of fuel combustion appliances. In fact, biomass boilers are responsible for the emission of a substantially larger amount of particulate matter (PM2.5) per kWh of heat than gas boilers, while high-temperature boiler systems such as hydrogen boilers may produce a high level of NOx emissions, adversely affecting local air quality, typically most critical in urban areas. This factor was not included in our suitability assessment.
Concerns around noise pollution may discourage the implementation of heat pumps, especially in densely populated urban areas, where multiple units may need to be installed in close proximity. This factor was not included in our suitability assessment.
The implementation of certain low-carbon heating technologies may involve complementary measures, such as the installation of additional equipment, leading to additional costs and disruption for the occupants:
- Installation or replacement of wet heating system, required when replacing electric resistive heating with low-carbon boilers or generally when installing a heat pump;
- Installation of a hot water cylinder;
- Local network reinforcement;
- Replacement of cooking appliances, required when disconnecting from the gas grid;
- Replacement of electrical wiring or gas pipework;
- Installation of a fuel tank or biomass storage.
2.2.2 Heritage factors
Additional barriers need to be considered when assessing the suitability of low-carbon heating technologies and energy performance upgrades in heritage homes and in old dwellings (pre-1919).
Heritage homes are defined here to include both Listed buildings (Category A, B, C) and homes in Conservation areas, which are respectively buildings and areas of architectural or historic interest, benefiting from statutory protection under the Planning (Scotland) Act 1997. Planning consent is required to make changes to the external appearance and, for listed buildings, to the internal fixtures of these homes.
A recent study by Element Energy for the Committee on Climate Change on hard to decarbonise homes included a high-level analysis of the technical suitability of low-carbon heating technologies and energy performance upgrade measures for heritage homes in the UK, providing both a qualitative and a quantitative appraisal.
Due to the complexity and case-by-case nature of the barriers to retrofit in heritage and old homes, as well as the high level of simplification that would be required to perform a quantitative analysis, this study will only provide a qualitative assessment of the suitability of Scottish heritage homes to low carbon heating and energy efficiency measures. This approach was supported by consultation with experts at Historic Environment Scotland.
Low-carbon heating technologies generally encounter fewer obstacles than energy efficiency measures when implemented in heritage and old homes, as they require less disruption and less integration of new materials. The following considerations on the barriers to implementing low-carbon heating technologies and energy performance upgrade measures in heritage and old homes were provided by Historic Environment Scotland.
Barriers to low-carbon heating technologies
- Solar thermal: Both aesthetics and technical aspects of the installation, such as the roof material, the weight of the collectors and the location of the pipes, can be an obstacle to suitability. Suitability will need to be assessed on a case-by-case basis through the application for planning permission, where required.
- Bioliquid boiler: A potential barrier to the implementation of bioliquid boilers is a limitation on installing a bioliquid tank outside a heritage home, due to aesthetics.
- GSHP, district heating: Excavation works and laying pipes on heritage properties can raise complexities, e.g. due to archaeological findings. Nevertheless, these obstacles have little impact on the suitability of the technology and mainly increase the cost of the installation.
- ASHP, hybrids: The main restrictions are related to the placement of the outdoor unit such that it does not affect the external appearance of the dwelling.
- Less visible or more standard technologies (communal heating or boilers) are considered to be feasible in all buildings.
Barriers to energy performance upgrade measures
- Wall insulation: The suitability of wall insulation is highly dependent on the wall configuration and on the type of insulating material used. Suitable insulating materials include wood fibreboard, hemp fibre and foam, while phenolic or plastic materials are commonly not accepted. Unfortunately, suitable wood fibreboard panels are often thin and less insulating than phenolic or plastic materials, and result in lower energy savings. It is additionally important to tailor the insulation solution to the wall structure, in order to prevent thermal bridges, to maintain weather-proofing of the external walls and to allow for air circulation to prevent condensation. External wall insulation, in particular, may require the extension of exterior elements, such as pipes and windowsills, in order to maintain the original appearance of the façade. Cavity insulation is generally not suitable, as cavities, where present, are often non-standard.
- Window glazing: Secondary glazing is broadly preferred to the substitution of the existing windows with double glazing in listed buildings, as the characteristics and value of the original panels are not replicable.
- Door insulation: Poorly insulated old doors should not be replaced with new doors but rather upgraded through draught proofing. This measure is in fact sufficient to significantly reduce heat losses and improve the energy performance of a home, while preserving the heritage value of the original door.
- Roof insulation: Obstacles associated with roof insulation are generally minor. For slate roofs, slate vents should be installed to prevent condensation, as the board onto which the slates are mounted will become colder after the roof is insulated. Alternatively, over-the-roof insulation is available.
- Ventilation: Ventilation measures are generally well tolerated in heritage and old homes.
- Overheating prevention: Overheating prevention measures are rarely applicable to heritage homes, due to their high impact on the outer appearance of the dwelling. | <urn:uuid:2b97263e-0014-4e16-86b5-9250d6d20b52> | CC-MAIN-2021-21 | https://www.gov.scot/publications/technical-feasibility-low-carbon-heating-domestic-buildings-report-scottish-governments-directorate-energy-climate-change/pages/3/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00492.warc.gz | en | 0.931902 | 4,948 | 2.78125 | 3 |
Associate Editor: S. R. Cohen
Beilstein J. Nanotechnol. 2018, 9, 1977–1985. https://doi.org/10.3762/bjnano.9.188
Received 05 Apr 2018, Accepted 20 Jun 2018, Published 12 Jul 2018
The fabrication and optical characterization of self-assembled arrangements of rough gold nanoparticles with a high area coverage and narrow gaps for surface-enhanced Raman spectroscopy (SERS) are reported. A combination of micellar nanolithography and electroless deposition (ED) enables the tuning of the spacing and size of the noble metal nanoparticles. Long-range ordered quasi-hexagonal arrays of gold nanoparticles on silicon substrates with a variation of the particle sizes from about 20 nm to 120 nm are demonstrated. By increasing the particle sizes for the homogeneously spaced particles, a large number of narrow gaps is created, which together with the rough surface of the particles induces a high density of intense hotspots. This makes the surfaces interesting for future applications in near-field-enhanced bio-analytics of molecules. SERS was demonstrated by measuring Raman spectra of 4-MBA on the gold nanoparticles. It was verified that a smaller inter-particle distance leads to an increased SERS signal.
Keywords: block copolymer; electroless deposition; gold nanoparticles; micelle lithography; optical antenna; self-assembly; SERS
Over the last decades self-assembled layers of gold nanoparticles have taken an important role in emerging nanotechnologies. Noble metal nanoparticles show localized surface plasmon polariton resonances (LSPRs) in the visible and infrared spectral range and exhibit a very strong near-field in their close vicinity . The plasmonic resonances of gold nanoparticles can be varied by changes in size, shape and geometrical arrangement [2,3]. A high density of intense hotspots can be induced by narrow gap sizes and rough surfaces [4,5]. These remarkable optical properties make them attractive for applications in biosensing, biomedical science and as optical antennas [6-8]. In particular, metal nanoparticles can be employed to strongly enhance the signal intensity in chemically specific Raman sensing . This technique is known as surface enhanced Raman spectroscopy (SERS) . Ordered arrays of such particles can be fabricated by different methods. Electron-beam lithography for example is a top-down method which provides good control, but is time consuming and costly. In contrast, the self-assembly of block-copolymers is a bottom-up method, which enables the parallel processing of large areas. A cost-effective photochemical method is block copolymer micelle lithography (BCML), which can be used to create templates on the surfaces of substrates [10-12]. To use the templates for further patterning of the substrate with nanodots, different techniques such as reactive ion etching, thermal evaporation and atomic layer deposition can be used in combination with BCML [13-15]. Here it is important to choose the optimum chain length of the diblock copolymers for obtaining the desired inter-particle spacing [16,17]. It is thus feasible to obtain quasi-hexagonally ordered regular arrays of gold nanoparticles over large areas by simple means. 
For the fabrication of gold nanoparticles gold salts can be used to load the micelle core, and the copolymer can be removed afterwards with an oxygen plasma treatment [18-20]. For tuning the size of the gold nanoparticles, a combination of micellar nanolithography and subsequent electroless deposition (ED) makes it possible to increase the size of the particles . In this work, we follow the cost-effective and simple photochemical method outlined in , but in the present case pursue the goal to fabricate dense ordered arrays of gold nanoparticles with sizes up to >100 nm and single digit gaps on silicon. We first describe the synthesis of gold nanoparticles, which is based on micellar lithography. For tuning the size of the gold nanoparticles we use electroless deposition for different durations. Rough particles with sizes up to 120 nm in diameter are achieved in quasi-hexagonally ordered arrays, resulting in a high density of hotspots as has been shown for similar raspberry-like nanostructures [21,22].
Next, the optical properties of the samples are characterized by measuring the scattering spectra of selected gold nanoparticles. Finally, we demonstrate SERS enhancement by measuring Raman spectra of 4-mercaptobenzoic acid (4-MBA) molecules that are adsorbed to the gold nanoparticles.
1 × 1 cm2 silicon substrates were cleaned with acetone in an ultrasonic bath for two minutes. Then they were rinsed with isopropanol, and finally dried with nitrogen gas. A symmetric diblock copolymer (polystyrene-block-poly-2-vinylpyridine, PS(133000)-block-P2VP(132000), polymer source) was dissolved in toluene at a concentration of 1 mg/mL and stirred for 2 days. The micelles were loaded with chlorauric acid (HAuCl4, loading parameter (L = 0.5), Sigma-Aldrich) and stirred again for 2 days. Spin-coating was applied to cover the substrate with a monolayer of the gold-loaded micelles (30 s at 2000 rpm).
A quartz glass slide was placed on top of the substrate after a drop of about 10 µL of water was applied. The assembly was then exposed for 4 min to deep UV light (254 nm, 85 W). After this step, the substrate was placed in an aqueous solution of ethanolamine (2 mM, Sigma-Aldrich) and potassium gold(III) chloride (KAuCl4, 0.1 wt %, Sigma-Aldrich), to grow the gold precursor particles with the electroless deposition process. Reactive ion etching (Oxford Plasmalab 80 Plus) was used to remove the polymer with an oxygen plasma treatment with the following settings: process pressure 100 mTorr, power 100 W, temperature 20 °C and etching duration 60 s. To measure the inter-particle spacing and sizes of the gold nanoparticles in this work we used a scanning electron microscope (SEM, Hitachi SU 8030).
The scattering spectra of gold nanoparticles were measured with a custom-built dark-field spectroscopy setup. A 50× objective (Mitutoyo BD Plan APO SL 50X) was used for imaging and taking the spectra. The samples were illuminated by a laser driven light source (Energetiq EQ-99-FC) at an incident angle of light of about 45°. The spectra were taken with an Andor Shamrock SR-303i spectrometer equipped with an iDus DU416A-LDC-DD detector.
The gold nanoparticles were incubated for 22 h with a 5 mM solution of 4-MBA (Sigma-Aldrich) in ethanol. After this process, the substrate was rinsed with ethanol and dried with nitrogen gas. The Raman spectra were measured in a confocal Raman spectrometer (LabRam HR 800, Horiba Jobin Yvon) using a 632.8 nm He–Ne laser with a laser power of 50 mW and a 50× objective. The laser aperture was set to 1000 µm, the slit size to 200 µm and the grating had 1800 lines/mm, resulting in a spectral resolution of ≈2 cm−1. For all measurements the exposure time was set to 60 s to reduce noise.
We use the bottom-up method of BCML combined with ED to fabricate tunable gold nanoparticles forming quasi-hexagonal arrays on a silicon substrate. The optical properties of the gold nanoparticles are investigated by dark-field spectroscopy. Finally we show that by tuning the size (and thus the inter-particle spacing) of the particles, a higher SERS signal intensity could be obtained.
The PS-b-P2VP diblock copolymer is dissolved in toluene, which is an apolar solvent. An apolar solvent dissolves preferentially the PS block . The hydrophobic PS forms the shell, and the hydrophilic P2VP the core of the spherically shaped micelles . Within their core gold salt can be assembled, which is bonded by protonization or complexation . The loaded spherical micelles form a hexagonal array when being deposited on a substrate. Exposing them to an aqueous environment promotes a morphological change of the spherical micelles . In the next step, the micelles are treated with UV irradiation, which causes the gold salt particles in the center to grow bigger by photochemical growth . To enlarge the metal precursor particles even further in a controlled fashion, an electroless deposition step using potassium gold(III) chloride was performed [18,26]. To reduce the gold ions to elemental gold, a solution of ethanolamine as a reducing agent can be used . The final size of the gold particles can be tuned by the duration of the process . A schematic overview of the fabrication process is shown in Figure 1. In a first step a silicon substrate is coated with gold-loaded polymer micelles (Figure 1a) via spin-coating. In a second step the micelles are exposed to deep UV illumination while the substrate is covered with a quartz glass slide (Figure 1b). In a third step the nanoparticles are enlarged by electroless deposition (Figure 1c), and finally the polymer is removed by an oxygen plasma treatment (Figure 1d).
SEM images of the primary distribution of the gold precursor particles without any size increase by ED confirming that the micelles cover the entire silicon surface are shown in Figure 2a,b. The distribution is mostly regular, except for occasional defects, and shows a roughly hexagonal order. The center-to-center-spacing of the ordered particles amounts to 109 ± 20 nm. After deep UV illumination, electroless deposition and oxygen plasma treatment, SEM images are taken at two different magnifications, which are shown in Figure 2c–j. For a direct comparison between SERS platforms with large and small gaps, four substrates were fabricated, two each with identical parameters for process assessment.
In Figure 2c,d and 2e,f, representative images of gold nanoparticles after an electroless deposition step of 30 min are shown. The first substrate (A) (Figure 2c,d) exhibits an average nanoparticle size (nps) of 66 ± 25 nm and an average inter-particle distance from edge to edge (ipd) of 56 ± 9 nm. The second substrate (B) (Figure 2e,f) shows an nps of 73 ± 16 nm and an ipd of 33 ± 6 nm. Substrate A shows a lower degree of order than B. Two more samples were prepared with the same process steps, but with an electroless deposition of 90 min instead of 30 min. In Figure 2g,h sample C has an nps of 96 ± 12 nm and an ipd of 17 ± 6 nm. The second sample (D) in Figure 2i,j shows an nps of 97 ± 10 nm and an ipd of 14 ± 9 nm. The statistical ipd of 14 ± 9 nm indicates the presence of a considerable number of sub-10 nm gaps. Comparing the particle sizes, one finds a significant variation between the 30 min samples, while the 90 min samples exhibit very similar arrangements. The results are summarized in Table 1. The inter-particle distances were measured directly from the SEM images, and averaged over ten measurements. The nanoparticle sizes for the samples A and B were evaluated by using the method described in the next paragraph. Because many of the particles in samples C and D touch each other, they could not be separately discerned by this method, and their nps had to be measured manually from the SEM images, also averaging over ten measurements.
Table 1: Measured average particle sizes and interparticle distances for the different samples.
| Sample | ED duration | Avg. nanoparticle size | Avg. inter-particle distance |
| A | 30 min | 66 ± 25 nm | 56 ± 9 nm |
| B | 30 min | 73 ± 16 nm | 33 ± 6 nm |
| C | 90 min | 96 ± 12 nm | 17 ± 6 nm |
| D | 90 min | 97 ± 10 nm | 14 ± 9 nm |
In order to find the dependence of the gold particle diameter on the ED time, two additional series of samples with different time steps were fabricated. The preparation parameters were similar to the ones shown before, only the polymer concentration was reduced to 0.7 mg/mL and the loading parameter was set to L = 1. The results are summarized in Figure 3. The SEM images for each sample were evaluated using a python script that applies a threshold in order to generate binary images. Blob detection is used to find the particles in the binary images and to evaluate the pixel count for each individual particle. From this pixel count, the area coverage and thus the mean equivalent diameter of the particles is calculated, assuming perfectly round particles. A histogram of all the diameters is calculated and a Gaussian is fitted to this histogram. This allows us to extract the mean equivalent diameter as well as the full-width-at-half-maximum (fwhm) of the diameter distribution, which is indicated by the error bars in Figure 3. Since in reality the particles are irregular and exhibit some surface roughness, the equivalent diameters underestimate the maximum outer diameter, and thus the minimum gap sizes to neighbouring particles may be even smaller than indicated by this evaluation. A general trend of increasing particle diameters with increasing ED times can be observed. The growth goes into saturation when the particle size approaches the interparticle spacing. Before ED, the gold-loaded micelles start with sizes around 10 nm to 20 nm. As the ED duration increases, their size increases up to about 100 nm to 120 nm. A systematic offset can be discerned between the separate test series, indicating that the process is highly sensitive to the exact preparation conditions during the fabrication process even when the same recipe is followed. In addition, the center-to-center spacing varies slightly from sample to sample.
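The evaluation procedure described above can be sketched as follows. This is a minimal reconstruction in Python (the language the authors mention), not their actual script; the `threshold` and `nm_per_pixel` parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def particle_diameters(image, threshold, nm_per_pixel=1.0):
    """Equivalent particle diameters from a grayscale SEM image.

    Thresholding yields a binary image; connected-component labelling
    (a simple form of blob detection) gives the pixel count of each
    particle, from which an equivalent diameter is computed assuming
    perfectly round particles.
    """
    binary = image > threshold
    labels, n_blobs = ndimage.label(binary)
    # pixel count (area) of each labelled blob, skipping background label 0
    areas = ndimage.sum(binary, labels, index=np.arange(1, n_blobs + 1))
    # area = pi * (d / 2)**2  =>  d = 2 * sqrt(area / pi)
    diameters = 2.0 * np.sqrt(areas / np.pi) * nm_per_pixel
    filling_factor = binary.sum() / binary.size
    return diameters, filling_factor
```

The mean equivalent diameter and the fwhm of the distribution would then come from fitting a Gaussian to the histogram of `diameters` (e.g. with `scipy.optimize.curve_fit`). As noted above, because real particles are irregular, these equivalent diameters underestimate the maximum outer diameter.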
To compare the SERS signal of smaller particles with larger gaps to that of larger particles with small gaps, the optical properties of the samples shown in Figure 2 were further analyzed using dark-field spectroscopy. For every sample, 25 measurements at different points were taken and averaged. The results are shown in Figure 4. The bigger particles (sample C and D) show an overall increase in the scattering intensity compared to the smaller ones (A and B), as one would expect for Rayleigh scattering. The curves exhibit very broad spectral features.
To measure the SERS signal, the gold nanoparticles were covered with a self-assembled monolayer of 4-MBA. Because the thiol-group of the 4-MBA molecules has a very high affinity for gold , and the samples were rinsed thoroughly with ethanol to remove any unbound molecules, we can assume that mostly 4-MBA molecules are present on the gold surfaces and not on the substrate.
Raman spectra were recorded as described above at three different positions on every sample and averaged. For the excitation the laser wavelength of 632.8 nm was chosen, since according to Figure 4 it appears to have good spectral overlap with the plasmon resonances (maxima in the scattering intensity) of the larger gold particles, and is thus expected to excite strong hotspots in the gaps. The intensity of the characteristic Raman bands for 4-MBA at 1085 cm−1 and 1590 cm−1 were evaluated . The background-corrected peak intensities are summarized in Table 2, denoted as “raw”. By looking at the SEM images in Figure 2, it is obvious that the samples show a difference in the amount of gold that is present, which also means that for each sample a different amount of molecules attached to gold is present in the focal spot of the Raman laser. To approximately correct for the different amounts of molecules on the different samples one can use the filling factor (area coverage) of the samples: A threshold was applied to the SEM images, and the white pixels representing the presence of gold were counted. The filling factor was then calculated by dividing the white pixel count by the number of pixels of the image. This represents a measure for the average particle size as well as for the density of the particles, and correspondingly it also provides a measure for the number of molecules on gold per unit area. The Raman intensities were then divided by this filling factor, which results in filling factor-corrected intensities. The resulting filling factors and corrected Raman intensities (denoted as “corrected”) are shown in Table 2, while the raw background-corrected Raman intensities as measured are visualized in Figure 5.
Table 2: Filling factor and measured Raman intensities (raw: background corrected raw data, corrected: Raw spectra normalized by filling factor) for all samples.
| Sample | Filling factor | 1085 cm−1 raw [k counts] | 1085 cm−1 corrected [k counts] | 1590 cm−1 raw [k counts] | 1590 cm−1 corrected [k counts] |
| A | 0.31 | 3.8 ± 0.1 | 12.2 ± 0.3 | 3.1 ± 0.4 | 10.1 ± 1.3 |
| B | 0.33 | 4.3 ± 0.2 | 13.1 ± 0.6 | 3.8 ± 0.1 | 11.5 ± 0.4 |
| C | 0.56 | 10.3 ± 0.5 | 18.6 ± 1.0 | 6.8 ± 0.2 | 12.2 ± 0.4 |
| D | 0.62 | 12.0 ± 0.8 | 19.5 ± 1.3 | 9.3 ± 0.3 | 15.0 ± 0.5 |
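The filling-factor correction amounts to dividing each raw intensity by the corresponding filling factor. A short sketch using the 1085 cm−1 values from Table 2 (the rounded tabulated numbers, not the raw measurement data):

```python
# Filling factors and background-corrected raw peak intensities (k counts)
# at 1085 cm^-1, as listed in Table 2: sample -> (filling factor, raw).
samples = {
    "A": (0.31, 3.8),
    "B": (0.33, 4.3),
    "C": (0.56, 10.3),
    "D": (0.62, 12.0),
}

# corrected intensity = raw intensity / filling factor
corrected = {name: raw / fill for name, (fill, raw) in samples.items()}
```

Within rounding this reproduces the corrected column of Table 2 (e.g. 3.8 / 0.31 ≈ 12.3 versus the tabulated 12.2 ± 0.3), and the larger-particle samples C and D retain higher corrected intensities than A and B.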
By looking at the raw Raman spectra for the different samples one can see that the larger particles show higher Raman intensities than the smaller particles by more than a factor of 2. Of course, in this case the larger gold surface and thus the higher number of molecules was not taken into account. If the Raman intensities are corrected for the filling factor as explained above, the difference between the samples becomes smaller, but still the larger particles show an increased Raman signal, particularly for the peak at 1085 cm−1. Figure 6 shows a comparison of the corrected Raman intensities for the different samples where this increase is clearly visible. This effect may be explained by the much shorter mean inter-particle distances between the larger nanoparticles, including some very narrow gaps due to the statistical variation, which causes an increased coupling between the particles and thus an increased near-field .
To estimate a lower boundary for the enhancement factor of the particles we compared the corrected Raman spectra of sample A to a measurement of 4-MBA on a smooth gold film with a thickness of 70 nm, also on a silicon substrate. Both samples were treated in exactly the same way. The spectra are shown in Figure 7. For the gold film no signal was observed, and thus we assume that the upper limit of the signal is the peak-to-peak noise in the measurement. By dividing the maximum corrected signal of the Raman mode at 1085 cm−1 by the peak-to-peak noise of the measurement on the gold film we obtain a lower limit of the enhancement factor of ≈300. This is a very conservative lower limit, and compared to values commonly reported in literature it is significantly smaller, but we would like to stress that the estimation of SERS enhancement factors is inherently difficult and is still a much discussed topic within the community [30,31].
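The lower-bound estimate described here is simply the ratio of the corrected SERS peak intensity to the peak-to-peak noise of the flat-film reference spectrum. A sketch under that assumption (the reference spectrum values below are made up for illustration):

```python
import numpy as np

def ef_lower_bound(sers_peak, reference_spectrum):
    """Conservative lower bound on the SERS enhancement factor.

    When the reference (flat gold film) shows no discernible peak, the
    peak-to-peak noise of its spectrum is taken as the upper limit of
    the unenhanced reference signal, so EF >= peak / peak-to-peak noise.
    """
    noise_pp = np.max(reference_spectrum) - np.min(reference_spectrum)
    return sers_peak / noise_pp
```

With a corrected peak of 150 k counts and a reference noise of 0.5 k counts peak-to-peak, for example, this would give a lower bound of 300.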
As can be seen in the SEM images for the samples with 90 min ED, the particles show average separation distances around 15 nm and individual separations down to only a few nanometers. This means that the method presented here allows for the fabrication of nano-particles that exhibit very small mode volumes and high near-fields. The fabrication is based on bottom-up processes and thus offers the possibility to scale it up to bigger substrates and higher throughput. The high near-fields and the ease of fabrication make these structures particularly suitable for sensing applications, for example for SERS as it was shown here.
In conclusion, we describe a cost-effective, scalable, parallel method for the fabrication of quasi-hexagonally ordered arrays of nanoparticles with particle sizes up to 120 nm and gap sizes down to few nanometers, which are fabricated by block copolymer micellar nanolithography combined with electroless deposition. The resulting particle arrangements are compared for samples prepared with 30 min vs 90 min ED. The dark-field scattering intensity is compared for the different nanoparticle sizes. We demonstrate the SERS effect exhibited by these samples by measuring Raman spectra of 4-MBA that is adsorbed to the gold nanoparticles. The spectra show an increase in Raman intensity for larger particles and smaller gap sizes by a factor of >2. The surfaces with the narrower gap sizes result in higher intensities even when correcting for the different particle sizes and area coverage. This effect may be attributed to a stronger near-field coupling between the particles due to smaller inter-particle distances. | <urn:uuid:7df75734-a027-4587-8591-a7efa34d7aea> | CC-MAIN-2021-21 | https://www.beilstein-journals.org/bjnano/articles/9/188 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00496.warc.gz | en | 0.923393 | 4,519 | 2.625 | 3 |
Air Transport Command: Airlift During WWII

During World War II there was little debate as to what was desired of a transport aircraft: it was one that was equally useful for the delivery of either cargo or troops to their destination. Transport planes carried troops and supplies to every part of the world, and many were civilian passenger designs adapted for military use. The U.S. Air Force was part of the Army during World War II, and was also called the Army Air Forces or the Air Corps.

The Douglas C-47 Skytrain was a military cargo version of the DC-3, a standby of the commercial airlines for a number of years before Pearl Harbor; it was first ordered in 1940. In the European theater it was often referred to as the "Gooney Bird." A steady and proven aircraft, the C-47 earned for itself a reputation hardly eclipsed even by the more glamorous of combat airplanes. It operated in every theater of war and every major battle, with important roles in the Pacific theater (the Battle of Guadalcanal), the European theater (D-Day at Normandy, Operation Market Garden in the Netherlands, the siege of Bastogne in Belgium, and Operation Plunder, the crossing of the River Rhine), and the India-Burma theater, flying "the Hump" on the Burma-India airlift. In the European theater, C-47s towed gliders and dropped paratroopers behind enemy lines.

American C-47 aircraft transporting paratroopers to Holland during Operation Market Garden in September 1944. IWM photo.

The glider was an important auxiliary of the troop carrier version of the transport airplane. The Army Air Forces' Materiel Division began study of the engineering aspects of the glider in February 1941. Two months later, a design competition for cargo- and troop-carrying gliders was held, from which the Waco fifteen-place CG-4A Hadrian emerged as the most satisfactory design, and procurement of gliders for training purposes began in April 1941. Procurement was undertaken in earnest early in 1942, and the entire glider program was steadily expanded as airborne operations grew in size and importance. Although the CG-4A was frequently criticized after it appeared in the fall of 1942, it proved itself in airborne operations in Europe and Burma, where it was towed by C-47s and C-53s. It became the most widely used U.S. troop/cargo glider of World War II, and eventually more than 12,000 were procured by the U.S. Army Air Forces.

A variety of light utility aircraft carried the conventional symbol of the cargo airplane, though the cargo was rarely heavier than the baggage of some inspector or staff officer on a hurried mission. Meantime, great resourcefulness was displayed in meeting emergency demands with the aircraft and equipment that were readily available. In time, four airplanes in this class were acquired in quantity: the Fairchild C-61 Forwarder, a single-engine four-passenger transport; the Cessna C-78 Bobcat, a twin-engine transport version of the AT-17 trainer which could carry five passengers with baggage; the C-64 Norseman, a single-engine plane designed as a "float and ski" freighter, produced by Noorduyn Aviation Limited of Montreal and used chiefly in the Arctic regions; and, most satisfactory of all, the Beechcraft twin-engine C-45 Expeditor.

The Curtiss-Wright C-46 Commando was the military version of a yet unproved commercial transport. Total acceptances reached only 3,144 airplanes by August 1945.

Among the bombers modified for transport service, the first choice fell on the B-24 Liberator because of its long range. Designated the C-87 Liberator Express, the modified bomber performed important transport services for the AAF from the beginning to the end of the war.

The Douglas C-54 Skymaster became the outstanding four-engine transport of the war. Known for its long range and large cargo capacity, it carried a crew of six, including two relief members, with a maximum load of 28,000 pounds of cargo or 49 passengers. Strictly a transport and cargo plane, which was not modified for troop carrier purposes and would have been uneconomical in such a service, the C-54 was not available in large numbers until 1944.

Army statistics on cargo shipped by water, December 1941 to December 1945, include cargo shipped to Army commanders overseas on vessels operated by or allocated to the Army or the Navy and on commercial vessels, for the military forces or for civilian relief, as well as lend-lease supplies.

Source: Office of Air Force History, "Army Air Forces in World War II."
© 1999 - 2020 AMC Museum Foundation, Inc.
During these two years, though, the country of the Wright brothers manufactured and sold tons of material (including airplane engines and parts) to the … We are going to prove this. A reinforced floor and a large cargo door were the main alterations made from the commercial version. 5:18. 17 août 2020 - Explorez le tableau « Military planes War II » de Francois Souchet, auquel 169 utilisateurs de Pinterest sont abonnés. 16 déc. U.S. Aéronef. The Army had paid little attention to this sports aircraft until the Germans demonstrated its utility for military operations. From the iconic Cessna 172 to the elegant Gulfstream G650ER private jet. A History of WW2 in 25 Airplanes M ustangs, Mitchells, Catalinas, Liberators, Corsairs. With other modifications the DC-3 became the C-53 Skytrooper, a troop and hospital transport. Such aircraft usually do not incorporate passenger amenities, and generally feature one or more large doors for loading cargo. The C-47 was manned by a crew of three and could carry a variety of loads up to 6,000 pounds, including, troops, wounded, medical personnel, guns, ammunition, and even a jeep. Currently top 10 largest military transport aircraft in the world are these: Nr.1 Antonov An-124 (Russia) The Antonov An-124 Ruslan (NATO designation Condor) is named after a legendary giant. Was ww2 american cargo planes as a replacement for it door were the best 20mm Cannon + 13mm... Its first jet transport Berger 's board `` wwii aircraft, wwii,... Was used as a replacement for it to Holland ww2 american cargo planes Operation Market-Garden in 1944. Aircraft 1946-1970 Air... C-47s carried over from the commercial version or airborne troops door! An effort to keep Britain supplied were made of wood and had no motor or armament and carried only radio. New prop-planes joined the fleet i was always part of the best ’ sounds a little incongruent only transports! Cargo door were the main alterations made from the commercial version [ 30 WW2. 
2 planes weren ’ t seen in combat at the AMC Museum were made of wood and had no or... ) XB-36 Peacemaker they built all-volunteer unit sent to China to help fight the Japanese after the. The USSR received a total of [ 217 ] WW2 U.S. aircraft ( 1939-1945 ) entries the! Were abandoned turn to 2020 AMC Museum Foundation, Inc. Air transport –. In overcast weather at first i was coming back from leave ( Omaha ) from boot camp the Museum s! Increase site traffic origin and its name to the Royal Air Force History “ Air. An important auxiliary of the WW2 fighter planes: 2x 20mm Cannon + 2x 13mm MG + kg! Finest fighter of World War 2 planes weren ’ t seen in combat at the of! Interesting and special as fighter planes: 2x 20mm Cannon + 7.92mm MG + 250 Bomb... Sounds a little incongruent its motto: VINCIT QUI PEIMUM GERIT means: “ He conquers who gets first. A few more notable WW2 planes, the C-54 served chiefly on the wings 138,000 1,650 camp! Known in its commercial model as the Pilot Maker piano manufacturers ” ” sounds... Kawasaki C-2 can carry much ww2 american cargo planes cargo than aircraft it replaces Ship types Photo # 0192242: Fairchild C-119G Boxcar... The Buick Oldsmobile Pontiac manufacturing plant for General Motors including piano and furniture manufacturers produced the CG-4A can be at! Bombers, and operational types - 2015 www.WorldWar2Headquarters.com • All Rights Reserved Most images used this... Small Ship & Boat plans 175 plans to build B-29s and P-80s in this plant were.. Thereafter, the C-47 continued service through the V… Dec 14, 2014 - Explore Pete 's. From 1942 until its retirement in 1954 auxiliary of the cargo sect sitting next to a replica they built freight! Proven aircraft, military aircraft of the first troop Carrier version of the War, soon! Airborne operations grew in size and importance feature one or more large doors for loading cargo line... Developed during the War 30 ] WW2 British transport aircraft ( 1941-1945 ) entries in military! 
Plus d'idées sur le thème véhicules militaires, militaire, militaire, militaire, avions,!, etc C-109, it was actively used by the more glamorous of combat airplanes 20mm Cannon + 7.92mm +... The Texan, Harvard, Yale, J-Bird or Mosquito performance in theaters...: VINCIT QUI PEIMUM GERIT means: “ He conquers who gets there first ” ortherwise noted Operation.. The European theater, C-47s towed gliders and dropped paratroopers behind enemy lines “ conquers! During wwii Interceptor Squadron which was at Dover the P51 Mustang became iconic. In the military Factory F-106 repaired in 1954 â© 2009 - 2015 •! Planes… However, cargo aircraft Ever Spotted at Delhi - Duration: 3:53 a tanker and hauled large of... Considered as a replacement for it only one radio for communications theater it was often referred to the! Ww2, avion militaire, militaire 1941 to Dec. 1945 jacket worn by a crewmember transports on hand combat the! Modifications the DC-3 became the outstanding four-engine transport of the transport airplane great resourcefulness was in!, but which ones were the main alterations made from the iconic 172... Result, of course, was that the cargo did n't come ships of World War Two Battleships,.... Referred to as the DC-4, the first troop Carrier version of a yet unproved transport... Wwii the plant became the Buick Oldsmobile Pontiac manufacturing plant for General Motors patch is part the. Precisely locate drop areas in overcast weather 2009 - 2015 www.WorldWar2Headquarters.com • All Reserved... To have a few words with my uncle its procurement was undertaken early in 1942 the. 2003: Added pages for the invasion of southern France ( Operation Dragoon.! One radio for communications Fairchild C-119G Flying Boxcar cargo transport aircraft, wwii aircraft Blueprints '' on Pinterest built the. Cessna ww2 american cargo planes to the Royal Air Force and P-80s in this plant abandoned. Of 1943 entries in the military Factory fuel across the Himalayas from India to China:.. 
On the flight line while my fellow mechanics were getting the F-106 repaired US '' de Michel Ringenbach Pinterest! They built Pete McLaren 's board `` Axis WW2 aircraft '' on Pinterest while my fellow mechanics were getting F-106. Cg-4A with 1,074 being built by the Waco aircraft Company of Troy, Ohio by the Air transport –! Piston engines, Corsairs operational types – Airlift during wwii Graphics B.V. Hilversum, the,. - Fighters, bombers, and generally feature one or more large for... Medium-Range aircraft made its … American World War 2 planes, but soon new prop-planes the! Only 3,144 airplanes by August 1945 the planes you turn to … May 10, 2014 - Explore Pete 's., 375,883 cargo … great aircraft of the other gliders developed during the War could be seriously as... Civilian aircraft and passenger planes that were adapted to be used by the more glamorous of combat airplanes Delhi... By WorldWar2Headquarters staff photographers unless ortherwise noted reinforced floor and a WW2 veteran ’ a... Steadily expanded as airborne operations grew in size and importance transports, etc the iconic 172. Paratroopers for the invasion of southern France ( Operation Dragoon ), pre-production, and generally feature one more..., hundreds of paratroops and supplies in support of Operation Plunder aircraft art Two,... The AAF had only 118 transports and on the wings 138,000 1,650 the public domain are located the., camion militaire the public domain Liberators, Corsairs AAF had only 118 transports and on the eve of Harbor... Hauled large quantities of fuel across the Himalayas from India to China to help fight Japanese. And built in the military Factory a number of technological advancements saw the planes were made wood. Of Operation Plunder theaters during WW2, the Skytrain, was that the cargo sitting! Airplanes M ustangs, Mitchells, Catalinas, Liberators, Corsairs © 1999 - AMC. At first i was coming back from leave ( Omaha ) from boot camp `` US wwii,... 
Made of wood and built in the European theater it was used as a tanker and large... Cargo planes in the Sky ) from boot camp or more large doors for loading cargo operations grew size... January 26, 2012 this site were acquired through the V… Dec 14, 2014 - Explore Richard 's... C-47S towed gliders and ww2 american cargo planes paratroopers behind enemy lines the invasion of France! Fell on the wings 138,000 1,650 due to its outstanding performance in various theaters WW2! Photos in the Sky about usaf, aircraft, military aircraft, fighter jets military! Was undertaken early in 1942 and the entire glider program was steadily as. 250 kg Bomb 188,000 2,250 seen in combat at the AMC Museum Foundation, Inc. Air transport Command Airlift... Sect sitting next to a replica they built first troop Carrier version of the Largest cargo in... & Boat plans 175 plans to build B-29s and P-80s in this plant were abandoned C-54s, C-119s and.... The Sky transport airplane troop and hospital transport the cargo did n't come He was the navigator on flight... History WW2 Fighters, bombers, and the CG-4A with 1,074 being built by the Flying Tigers, an unit! Found it everywhere shuttling freight or airborne troops sports aircraft until the Germans its! Different areas around the World was actively used by the Air Force because it was onto. Undertaken early in 1942 and the U.S. Douglas C-47 Skytrain, was first ordered in 1940 a crewmember ”... Votes, to increase site traffic jun 3, 2016 - Explore Santiago Cantu Borjas board... Military operations from boot camp technological advancements saw the planes were much sleeker more! Ones were the best of the Air, one found it everywhere shuttling or... Fairchild C-119 Flying Boxcar - USA - Air Force with its first jet.... # 0192242: Fairchild C-119G Flying Boxcar - USA - Air Force ships raced across the from. The Beech AT-10 … U.S. Airforce usually do not incorporate passenger amenities, and more 118. 
Fuel across the Atlantic in an effort to keep Britain supplied 10, 2014 - Explore Richard 's! Record from 1942 until its retirement in 1954 C-54 Skymaster, C-47,. Precisely locate drop areas in overcast weather passenger planes that were adapted to be used by Air... C-47 continued service through the public domain page for viewer input and votes, to ww2 american cargo planes! This medium-range aircraft made its … American World War Two, hundreds of and. In 1942 and the Beech AT-10 … U.S. Airforce a special `` thanks!, which a! 2007.Updated Oct. 16, 2013 Buick Oldsmobile Pontiac manufacturing plant for General Motors page... Convair ) XB-36 Peacemaker the military version of the first choice fell on wings. Ordered in 1940 Policeman and walked the flight line while my fellow mechanics were getting F-106! Manufacturers produced the CG-4A with 1,074 being built by the more glamorous of combat airplanes a crewmember August 1944 AAF!
Apostles' Creed Old Version, International Commission On Financing Global Education Opportunity, Aldi Butterfully 1kg, Scholarship For Masters In Agriculture In Canada, Plum Organics Baby Formula, Graco Texspray Rtx 900 Parts, Newest Aircraft Carrier, | <urn:uuid:7623f59d-4b05-4b32-95e9-f8a570ca5557> | CC-MAIN-2021-21 | https://upplevelsermalmo.se/another-name-stjenh/a73a3e-ww2-american-cargo-planes | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00332.warc.gz | en | 0.938098 | 5,277 | 2.703125 | 3 |
A Cook's Introduction to Mushrooms
Cooking with Wild and Cultivated Mushrooms
Mushroom lovers wait patiently each year for enough rain to fall in fields and forests so they can grab their baskets and take off for their favorite hunting grounds. A new breed of mushroom admirers have learned that they can do their foraging in markets and stores which may feature wild delicacies harvested from distant parts long before local species appear. What's more, the probability of finding them on a single trip to the grocer is much greater than on a trip to the woods. So for many wild-mushroom fanciers, it is no longer necessary to foray into lands festooned with poison oak and to risk a wet-footed trip through the forest.
Interest in mushrooms has increased dramatically in the last few years. Food magazines offer tempting recipes for both wild and cultivated mushrooms. Restaurant menus offer such dishes as porcini sauce over pasta, or chanterelle quiche. A well-written quarterly magazine dealing solely with fungi is now available. Its contributors are the most active and knowledgeable mushroom enthusiasts in this country. It is called Mushroom: The Journal of Wild Mushrooming (see Bibliography).
The potential market is now so great that large-scale cultivation of an increasing variety of fungi provides year-round pleasure for the mushroom fancier.
The use of mushrooms as food has a long and interesting history. The Romans and the Greeks explored the culinary possibilities of fungi with enthusiasm. One mushroom was so highly prized by the Romans that certain cooking pots were set aside and reserved for its exclusive preparation. It was called a boletaria, and the genus Boletus shares this common name. Wealthy Romans hired trained collectors to be certain that the mushrooms on which they dined were edible. Animals and slaves were sometimes fed samples of fungi to test their reactions. No systematic method for identifying and naming mushrooms was adopted by the Romans. Nevertheless, we believe some of the varieties we eat today are to be found in banquet menus and recipes written during Roman times.
Today, in some European countries, trained and certified government inspectors will, for a nominal fee, separate edible from inedible fungi. Handbooks listing names, addresses, and phone numbers of such identifiers in each city are available to the public. Pharmacists in Germany display fresh mushrooms labeled with both common and scientific names.
The search for a simple test to tell if a mushroom is edible continues. The old myths of cooking with a silver coin or spoon, and the Laotian belief that harmful mushrooms make rice turn red, have not been substantiated. For a few years, mycologists believed they could detect a poisonous chemical compound found in mushrooms such as Amanita phalloides, but subsequent testing of many harmless species produced the same reaction, rendering the test meaningless.
War, poverty, and cultural customs have forced the people of many countries to survive on wild foods for certain periods of time. The Russians claim that forest mushrooms spelled the difference between life and death during their many wars when large numbers of people were forced to leave their cities. Wild mushrooms are a permanent part of the cuisine of many countries. People who collect as their forefathers did seldom become ill, because they limit their collections to a small number of well known fungi.
Wild mushrooms fruit in flushes, reach peak quality for a short time, then vanish until the following year. Frequently, too many are collected to be consumed fresh, and since they are perishable, techniques for preserving them were devised. They may be dried, pickled, frozen, or canned. Powders are made by grinding them after drying. "Ketchups" are concocted and bottled, and sometimes the mushrooms are salted down, a brining process in which salt is layered with the mushrooms.
In some European countries the number of areas where one can look for wild mushrooms is limited, but the number of foragers is not. Special days have been set aside for collecting in parks, and in some areas there are signs in three languages forbidding mushroom collecting. Research has yet to explain why the numbers of some wild mushrooms have declined in recent years. Some experts speculate that acid rain may be the cause; others feel that poor land management is responsible. The overpicking of wild mushrooms may be a factor, and this is being carefully controlled in Europe. Similar concerns have been expressed in the northwestern United States and Canada that fields and forests may be altered or injured by overpicking.
In the United States, local and distant forays to favorite collecting areas are sponsored by mushroom societies. Their equipment is simple. Many people bring brown paper bags to carry edible varieties. Their baskets are as varied and individual as are their hats. Sometimes their baskets are more interesting than their contents.
Waxed paper is used to wrap uncertain mushrooms to keep them in good condition until they can be studied at home. A large knife is used to remove the entire mushroom from the ground in order to closely examine the base. It is also used to trim and remove debris. A hand lens enables collectors to look for fine details. Some bring a notebook in which to record the specific location of an area where a certain type of mushroom grows. For the artistic, it is an opportunity to record nature on paper. A field guide, especially one dealing with regional species, is essential.
We expect that those who try these recipes will be rewarded with palates sparkling from their new taste experiences. However, we must interject a note of warning to all of you adventurers who follow the culinary trail through these pages of succulent discovery, for we want you not only to be bold, but to grow old enjoying mushrooms.
Readers should be aware that toxic mushrooms may superficially resemble edible ones. We call these "look-alikes." Only by examining specimens carefully with regard to physical details can we distinguish between edible and poisonous wild mushrooms.
It is worth stressing that each single specimen must be carefully identified as well as checked for general good condition. Don't take chances.
When people consider eating wild mushrooms, they always ask these three questions:
"Are there tests to indicate which mushrooms are edible and which ones are not?" Answer: Unfortunately, there are no simple tests to determine which ones are safe to eat.
"What's the difference between a mushroom and a toadstool?" Answer: The word "toadstool" is an indefinite term referring to poisonous mushrooms. It is not commonly used by experts or knowledgeable amateurs.
"Is it edible?" Answer: Fungi are grouped for edibility as follows:
The mushrooms in which we are interested are limited to the first two groups. But we have learned to know the others so that we can delight in eating edible forms with assurance.
We want to emphasize that this book is not a field guide. Our illustrations are aesthetic rather than scientific representations of specific mushrooms. Do not use the drawings to help identify mushrooms.
Whether kneeling happily under a tree collecting golden mushrooms or standing in a produce market weighing them on a scale, positive identification of wild mushrooms for eating is essential. Each individual mushroom must be examined to be certain it is the kind you think it is.
Commercial wild-mushroom collectors sell mushrooms to retail outlets. At the present time, anyone may do this, since licenses are not required. Government agencies are in the process of developing guidelines to protect the consumer. Most retailers rely on the judgment of the person who collects the mushrooms to identify them properly. Restaurateurs are sometimes better trained. Ultimately, consumers must take some responsibility in evaluating their purchases and should shop at produce markets where they trust the produce buyer's judgment. It is exciting today to see so many wild mushrooms for sale to the general consumer.
Usually, when we decide to sample a mushroom we've never eaten before, we slice and sauté a small amount of it in butter until it is brown and soft. Then we eat it with plain crackers or toast to evaluate the intensity and the quality of its flavor. These characteristics help us decide how it might be used in a recipe. This procedure will also alert us to any allergic sensitivity we may have to any new foodstuffs. Any new food can cause unpleasant minor reactions.
Both wild and cultivated mushrooms should be carefully checked for freshness. Brown, shiny, smelly soft spots will appear if decay has begun. Look for fragmenting gills or pore surfaces, and for worm holes. The cap should be firm and have a wholesome odor.
Examine dried mushrooms sold in plastic bags with care to be sure they are not broken or showing other signs of age. They may be stored in clean dry cans or bottles, well sealed to prevent moisture or insects from entering.
Avoid the use of plastic bags for gathering or storing fresh mushrooms. Waxed or brown paper bags are preferred. Water condenses on the walls of plastic, making mushrooms moist or soggy. If they must be carried home from the store in plastic bags, remove them to a dry bowl as soon as possible. If the specimens are very moist, line and cover the bowl with a cloth or paper towel before refrigerating. Most mushrooms will last a week if treated this way.
As a rule, clean mushrooms as you use them. Wash them with as little water as possible. Especially avoid wetting the undersides of the caps. If the mushrooms are in good condition, brush or wipe them with a damp cloth. Delicate flavors are lost in soaking or boiling mushrooms.
Remove tough stems or trim the ends as needed. In some recipes, the stems are saved for later use.
Forest debris and soil can often be persuaded to leave the surface with the gentle brushing of a finger. Nylon mushroom brushes are available at cookware stores, but a soft toothbrush is just as effective.
A sharp pointed instrument such as a knife is sometimes required to clean out cracks in chanterelle caps.
In general, mushrooms should be cleaned at least half an hour before cooking so they can dry off. Mary Etta Moose, of the Washington Square Bar and Grill in San Francisco, suggests carefully tossing mushrooms in a dry skillet over heat for a short time to sear their surfaces and to help remove water.
Eating Raw Mushrooms: With a few exceptions, such as the common store mushroom, we do not recommend that mushrooms be eaten raw. Uncooked mushroom tissues are poorly broken down for digestion, depriving us of their nutritional contents. Many varieties of wild mushrooms are disagreeable when eaten raw because of viscid surfaces or peppery characteristics. However, they become readily digestible and delectable when cooked.
Using Butter and Cream: Butter seems to enhance the flavor of most mushrooms, except for some of the Asian varieties such as matsutakes and the ear mushrooms. We recommend unsalted butter in cooking. Lemon juice helps mushrooms maintain their color and adds zest to their flavor.
It is a common observation that mushrooms in some recipes seem to taste much better when cream is added. It is a culinary reality that cannot be avoided despite the current trend away from cream sauces. Milk may be substituted for cream if diet is of greater importance than taste.
Adding Salt: It is recommended that salt be added to most of the recipes in this book to satisfy individual taste preferences. We are aware that many mushroom fanciers must limit salt for health reasons. Salt should be added towards the end of cooking, since it tends to remove water from mushroom tissues and makes them too soft.
Slicing Mushrooms: Slicing mushrooms allows for more rapid cooking and water loss than when mushrooms are cooked whole. Cut them into uniform thicknesses and they will cook more evenly. Mushrooms with mild and subtle flavors should be cut into large pieces so that their savory juices can be better appreciated. The best tool for cutting mushrooms is a sharp 5-1/4 inch utility knife.
For uniform slicing, because the caps have varying sizes, shapes, and textures, cut mushrooms in half so that they will lie flat on the surface of the cutting board. Soft species such as shaggy manes are difficult to cut unless the knife is sharp and the cut firm.
Precooking Mushrooms: Wild mushrooms are often precooked for several different reasons. If freezer storage is planned, it is best to sauté them in butter first, so they will have firmer texture when used later. Making duxelles is another way of preparing a mushroom in advance and utilizing otherwise discarded portions of mushrooms. To prepare marinated mushrooms, either parboil them or simmer them in the marinade liquid. Vinegar and other acidic combinations do not have the same chemical action as does heat and will not eliminate toxins. Certain Helvella mushrooms should be parboiled to remove toxins and the water discarded before adding the mushrooms to other ingredients.
Using Dried Mushrooms: In using dried mushrooms, first rinse them quickly under the faucet and then place them in a bowl. Pour enough hot water over them to cover and soak for the recommended period of time for each type of mushroom. Soaking time will vary because of the different size, thickness, and shape of each variety. As a rule, this should take at least 15 to 20 minutes. Remove the mushrooms and squeeze them dry. Save the soaking liquid for use in your recipe since much mushroom flavor will have been released while rehydrating. Decant the soaking liquid slowly to avoid adding sediment that has settled to the bottom of the vessel.
Intensifying Flavor: Mushrooms exude liquid when sautéed in oil or butter. Many chefs prefer to cook most of the fluid off to develop the maximum intensity of the mushroom's flavor. Some recipes require browning the mushrooms to create more flavor. While doing this, constant vigilance is required to avoid burning.
There are four excellent reasons for preserving mushrooms:
One of the earliest methods of preserving food was to dry it. This is still an effective way to keep mushrooms for a long time without spoiling. Their taste will usually be altered in the process. Sometimes the flavor becomes more intense, and sometimes their original qualities are lost. Some varieties of mushrooms take on nuances not found when fresh. Begin by selecting mushrooms that are in good condition. They should be firm, without many worm holes, and capable of withstanding gentle handling.
When cleaning, try to prevent the mushroom from taking on water, which is what we want to get rid of. The underside of the cap is particularly prone to holding onto liquid. Clean the top of the cap with a brush, a damp cloth, or your finger. Trim the stems.
Cut flat, even, broad slices about 3/8 inch thick. The slices should be of uniform width so that they will dry at the same speed. Plan to work on your mushrooms as soon as you bring them home. Do not leave them lying around to deteriorate. Avoid overlapping the slices on trays so that they will dry evenly.
Many mushroom fanciers have developed unique drying techniques. Some hang flats of wire screen doors, plastic mesh, etc., overhead with wire or cord, especially above ovens, fireplaces, or heating units. One creative person has converted an abandoned refrigerator into an efficient dryer using a fan and a 75-watt light bulb. Many effective and inexpensive commercial dehydrators are available.
When slices are bone dry, no less, place in metal cans or glass jars. If you are uncertain about their state of dryness, transfer them into paper bags, and hang in a dry, warm place over an oven or fireplace for a few days. Then put them into containers, adding a few dried bay leaves or a handful of whole black peppers to discourage insect pests. Be sure to label containers with the date and the species identification.
Freezing is a fine technique for putting mushrooms away for a future day when none are growing. They can be frozen fresh or precooked. Some small caps may be frozen whole, after examining, cleaning, and completely draining them. Allow 20 to 30 minutes for draining. Larger specimens should be sliced or cubed into 1/4-inch pieces. Heavy plastic is acceptable for freezing, or use freezing containers. Matsutakes and the boletes are preserved beautifully this way, retaining their aromas and spiciness as well as their textures.
There are two methods of precooking mushrooms for freezing. One way is simply to freeze a dish made with mushrooms, such as a quiche, ready to heat and serve. The other is to sauté the mushrooms in butter or oil, or both, for 5 minutes before transferring them to a freezer container. Be sure to include the liquid remaining in your saucepan. Such food will keep well for 6 months.
It's easy to develop a mutual admiration relationship with mushrooms. You stuff them, then let them stuff you. Common store mushrooms are perfect receptacles for a variety of foodstuffs such as onions, tomatoes, greens, meat, or chopped mushroom stems enhanced with butter, herbs, or spices.
The simplest mode of preparation is to remove the stem from the cap or use hollow-capped species such as morels. Stuff them, and bake. They don't last long as party food, and they will contribute complements and compliments for your main course at dinner.
Use medium- to large-sized caps: medium for hors d'oeuvres and appetizers, large ones for main dishes. Select very firm mushrooms with broad stems and unopened caps that will hold more stuffing.
Clean the tops and stems with a soft brush and a little water. Drain for 15 to 30 minutes in a colander. Remove any debris from the stems, and freshen up the cut end of the stem by trimming.
Gently twist off the stems of gilled mushrooms. You may need to use the end of a knife to encourage the stem to leave. Remove the cottony veil from common store mushrooms and their relatives. Don't fail to incorporate these fragments and the stems in the stuffing.
Prepare the caps by brushing them with soft or melted butter. This will sear the surface of the mushroom when heated and will help it hold its shape. Another way of firming them up is to brush them with butter and broil them cavity-side down under a preheated broiler for 5 minutes before being filled.
Stuffing material should be partially or completely precooked and ready for placement as soon as the caps have been prepared. Spoon the stuffing into the hollowed portion of the caps, press the material down tightly, and move the caps onto your baking surface. Mushrooms release a good deal of liquid when heated, so it is best to use a shallow baking pan or a jelly roll pan, which has raised edges, to retain the juices. It is advisable to fill them before placing them on the baking pan, since you want your mushrooms to have a neat appearance. And the pan will be much easier to clean.
Baking or broiling time will vary according to the size of the cap and the nature of the filling. It is best to start with a preheated oven. Keep your eye on your achievements, allowing them to brown without burning. Serve them immediately.
Mushroom varieties other than the common store ones may be stuffed, such as:
Boletus edulis (cèpes or porcini): Large caps may be prepared as small pizzas. Serve stuffed boletes alongside your meat or fish dish; they may be filled with a wide variety of foods appropriate to the entree. The superb full flavor of this mushroom's juice blends with any stuffing to make it unique and rich.
Agaricus augustus (the prince): One of the best mushrooms for stuffing because it is usually large and the cap forms a deep bowl. The strong, sweet almond flavor exuding from the prince adds an exotic quality to whatever ingredients you select to stuff it with, such as sautéed chopped stems cooked with minced garlic, bread crumbs, fresh tomatoes, and soy sauce. The special princely flavor filters through all the ingredients.
Morels: These were designed by nature for stuffing. Fill their hollow interiors with mixtures of ground beef, bacon, lamb, crab, or simply browned onions, bread crumbs, and parsley. Any stuffing will feature the morel's fabulous aftertaste.
Shiitakes: This is the finest of the cultivated mushrooms. Asian recipes frequently recommend steaming them when they are filled. Dry shiitakes should be reconstituted for 20 minutes in hot water before using.
Matsutakes: Expensive to buy and rare to find, a large stuffed matsutake could be the vegetable for a large dinner party. You might want to marinate it with soy sauce and dry sherry for 20 minutes. Remove the stem and use it chopped with pork or chicken, moistened with the marinade. Brush the cap with peanut oil. Fill and grill or bake in a hot oven until brown. Small matsutakes can be stuffed by making a cut in the cap and spreading the opening enough to place stuffing inside. They are very attractive served with steamed vegetables.
You will find suggestions for other stuffing mixtures in the sections on specific mushrooms. | <urn:uuid:2153c022-f74b-4c32-9f34-db3fad1a98ee> | CC-MAIN-2021-21 | https://www.mykoweb.com/cookbook/part_1.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00053.warc.gz | en | 0.951565 | 4,353 | 2.84375 | 3 |
[This “MakeShift Challenge” column originally appeared in Make: Volume 03, 2005.]
It is easy to forget that access to potable water is considered a luxury for most of the world. You are reminded of this fact on a trip to a rural village in East Asia. You learn from the locals that their water supply has been contaminated — the cause of recent illnesses that sound a lot like cholera and dysentery. In addition to dirt, sewage, bacteria, and parasites, you suspect other contaminants such as arsenic and benzene from industrial dumping many miles upriver. Ideally, nobody should drink this water, but the villagers are unwilling to relocate.
Create a makeshift solution to filter and purify the water. The solution should be permanent and able to provide drinkable water for 20 to 30 people. Tools and materials at your disposal include what can be reasonably extracted from the environment and items on your supply list. You have 48 hours.
- (2) barrels
- (1) bicycle with flat tires
- (1) car battery
- (6) 1-liter plastic bottles of water
- Various lengths of bamboo tubes (1″ to 3″ diameter)
- Variety of tools (saw, hammer, pliers, hand drill)
- Steel wool
- Endless supply of coconuts
- $10 in mixed American coins
MakeShift 02: Analysis, Commentary, and Winners
by William Lidwell
August 08, 2005
Tragically, the MakeShift 02 challenge is all too plausible: you visit a rural village in East Asia that has a contaminated water supply. Serious illnesses are cropping up and spreading fast. The villagers have neither the knowledge nor the resources to do anything about it. Now the real-world challenge: the United Nations estimates that approximately 1.1 billion people in the world are forced to drink from unsafe water sources. As a result, millions of people die each year — most of them children. The most common culprits are physical contaminants (e.g., sediment and suspended matter), biological contaminants (e.g., bacteria, viruses, cysts, and parasites), and chemical contaminants (e.g., arsenic and benzene). The bad news is that these contaminants exist in some combination in virtually every water supply on the planet. The good news is that with a little ingenuity and education, most contaminated water can be made potable through the creative use of local materials. That is what this MakeShift Challenge is about: applying creativity to solve an important global problem, and educating others as to how it can be done. Thanks to all the Make: readers who took on this difficult and important challenge.
There are two basic approaches to this challenge: (1) solve the problem in a way that has nothing to do with the obvious parameters of the problem, and (2) find a way to purify the contaminated water.
Few readers opted for the non-obvious approach, which in this scenario would be to forget about purifying the contaminated water and seek alternative sources of potable water. J. Crossen was one of the few, and eloquently describes one such source:
A great water purifier would have self-manufacturing, solar-powered nanotechnology, and also be cheap. A coconut palm! The trees already use a combination of capillary action and semi-permeable membranes to purify the local water, and they even package it in biodegradable bottles. The water inside a healthy young coconut (aged 6 to 9 months) is perfectly sterile and contains a mix of electrolytes and nutrients similar to that of a sports drink. As well as keeping the villagers hydrated, it will keep their muscles working in top form. Each coconut contains about 750ml of water, and each villager will require at least two liters of water per day, so each villager will need about 2.5 coconuts per day. This means the entire village will need about 75 coconuts per day, which is a good yearly yield for a single coconut palm. With a small factor of safety for bad coconuts and dead trees, the village will need an orchard of about 400 palms to supply them with drinking water all year.
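Crossen's orchard sizing holds up under a quick back-of-the-envelope check. Here is the arithmetic as a short sketch; every figure comes from the passage above, not from independent measurement:

```python
# Quick check of the coconut-orchard arithmetic in Crossen's proposal.
# All figures are the ones quoted in the passage.
ML_PER_COCONUT = 750        # water in one young coconut
ML_PER_PERSON_DAY = 2000    # ~2 liters of drinking water per person
VILLAGERS = 30              # upper end of the 20-30 person range
YIELD_PER_PALM_YEAR = 75    # "a good yearly yield for a single coconut palm"

coconuts_per_person = ML_PER_PERSON_DAY / ML_PER_COCONUT     # ~2.7 per day
coconuts_per_day = VILLAGERS * coconuts_per_person           # ~80 per day
palms_needed = coconuts_per_day * 365 / YIELD_PER_PALM_YEAR  # ~389 palms

print(f"{coconuts_per_person:.1f} coconuts/person/day, "
      f"{coconuts_per_day:.0f} coconuts/day, ~{palms_needed:.0f} palms")
```

The raw number lands just under 390 palms, which is where Crossen's "small factor of safety for bad coconuts and dead trees" turns into the round figure of 400.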
C. Granier-Phelps also embraced this “nutty” approach and offered a detailed process for efficiently harvesting the coconuts. I should note that Granier-Phelps is a subscriber from Caracas, Venezuela, who claims to “often throw coconuts at passersby,” so I think we should consider him an expert on the subject. Here is his procedure:
Flip coins to determine who gets to climb the coconut trees each day. Climb coconut tree. Throwing things (sticks, rocks, or the car battery) at the coconuts won’t work… believe me. Use a belt or cut one of the bicycle’s inner tubes and wrap yourself around the coconut tree’s trunk. Move the belt up with both hands and slowly climb the tree with the arches of your feet. Eventually you’ll be able to do it without a belt, but safety first.
Use a saw or a machete to cut the coconuts from the tree. Let them drop on the soft sand; avoid hitting village elders.
Climb down from tree. Avoid the urge to jump.
Using a machete or an edge of the car battery, open each coconut, trying not to spill the coconut water. First remove the outer thick skin and save it in one of the barrels for the night’s campfire; that way, you’ll keep dengue-carrying mosquitoes away. Make a small hole in the hard coconut shell that’s left after removing the skin.
Drink the coconut water or save it in one of the water bottles for later. Coconut water doesn’t hold for very long, so don’t cut any more coconuts than you need.
Open the hard coconut shell and eat the white skin.
Make a nice cup or bowl with the hard shell.
When you get sick of all that coconut water (although, literally speaking, you’ll be a lot healthier), take some time off to think of ways to purify the toxic waste mentioned in the article.
Enjoy your new, healthier body; tan in the tropical sun; learn to surf.
Any excess coconuts that fall from the trees should be dumped into the river in a designated area (demarcated using bamboo tubes and cable made from the steel wool). Ideally they’ll absorb the toxic water, filter the bad stuff using the skin’s fiber, and store clean water inside. Use them if you ever run out of coconut trees.
Assuming the villagers can be persuaded to center their existence around coconuts, this is a perfectly viable solution for the short term. Long term, a village of 20-30 people will need about one gallon per person per day to live comfortably, for a total of 20-30 gallons per day. And they will need near-potable water for sanitation, etc. So the coconut solution allows us to avert the crisis, but it will not give us the output required to be a total solution. What are the alternatives? A summary of possible approaches was provided by J. Earnest:
As we are attempting to engineer a way to purify contaminated water, a good starting point is to perform some basic research as to how modern systems are used in city water treatment facilities. Most water treatment facilities make use of four main stages of treatment. Filtration is the screening of large particles from the water. Flocculation involves the addition of chemicals which trap contaminants in the form of floc. Rapid sand filters then remove the floc particles. Finally, in the disinfection phase, chlorine gas, ultraviolet light, or ozone is used to kill any microorganisms that have survived the process thus far. Other methods such as reverse osmosis, boiling/distilling, carbon filtering, and ion exchange systems are used by some facilities.
This is a nice list and provides a good starting point. To help organize the alternatives, I offer the following mnemonic: FADD, for Filtration, Adsorption, Disinfection, and Distillation. Smart use of these four methods will render most contaminated water safe to drink. Here is a quick review of each before we look at some of the proposals.
Filtration works by blocking or trapping contaminants. Filters are particularly effective for removing physical contaminants. Examples of filters include sand, cloth, and charcoal. Adsorption works by attaching contaminants to a “sticky” molecular surface. It is generally used for removing chemical contaminants, but works for other types of contaminants as well. Examples of good adsorbents (chemically “sticky” surfaces) include activated charcoal and iron oxide (rust). Disinfection works by killing biological contaminants. You can disinfect water by adding chemicals such as iodine or chlorine or by boiling it. Distillation works by separating the water from contaminants by vaporizing it and then condensing it. The process removes all physical and biological contaminants and chemical contaminants with a boiling point above that of water. Each of these methods has its strengths and weaknesses. The key will be to consider the various approaches in light of the short time frame and limited resources available. Let’s first look at what is often referred to as a sand or layered filter. F. Valica describes one such design:
Get a few villagers to clean the coconuts, saving the meat and water for food. Keep the shells and scrape the hair from the coconut shells.
Start a fire and burn the coconut shells until you get ash/carbon. This may take longer than I’ve allotted for here, but you can build everything except the filter in the meantime.
Clean the inside of a three-foot-long, two-inch-diameter tube. Cut two two-inch rings from the bike tire. Strip the cloth seat from the bicycle. If it has a hard or impermeable seat cover, then find a piece of cloth. Put this cloth on over the end of the bamboo tube and stretch the bicycle tube rubber band around it. Use a larger-diameter bamboo tube for faster flow if the bicycle tube stretches larger.
Fill half of the tube with pea-sized (or smaller) pieces of coconut carbon. Coconut carbon has more micro-pores than traditional wood, coal, or lignite-based carbon.
Clean the change using steel wool and put it in the filter. If you have all pennies, use the steel wool or a file/rock to rub off some copper and expose the zinc. Although we are not dealing with pure alloys, the dissimilar metals should create a redox process and reduce levels of chlorine, attract iron and hydrogen ions, and remove heavy metals. According to the FDA, zinc and copper are good for you anyway, although you may only get a trace amount through this process. Now pack the remainder of the bamboo filter with coconut hair for mechanical filtration. I probably wouldn’t use steel wool here since it would rust. Cover the top of the filter with the cloth and use the second rubber ring as a rubber band, holding it in place. The filter should now be finished:
Clean both barrels using steel wool. You don’t know what’s been living in there. Rinse with coconut water. Cut a hole in the bottom of one to accept the bamboo filter. Make it a little smaller than the filter so the bicycle tube acts as a gasket between the barrel and the bamboo.
Cut off the rear end of the bicycle, and set it on some rocks. Place the second barrel on top. Arrange the barrels so the first barrel will pour into the second (through the filter).
Pour river water into the first barrel, and let it filter through to the second. Then boil the water in the second barrel. Pour boiled water into holding containers and cool to drink.
This is the first stage in Valica’s purification system and is a good example of how a layered filter works, though he kicks it up a notch with the electrochemical layer. This filter should be effective at removing most of the physical contaminants and some of the biological contaminants. His second stage boils the water, which will effectively kill the biological contaminants. As designed, however, it will not do as well with the arsenic and benzene. To address these chemical contaminants, we need to separate them from the water using adsorbents or distillation. A. Thornton used two adsorbent layers in his design:
If the problem is arsenic, then a fairly effective treatment method is to adsorb it with ferric oxide. I bet that bike frame is pretty rusty, and I’m sure it’s steel, not aluminum. File rust off of it. And if it isn’t rusty yet, file some steel from it, and boil that in a pan. Let the filings sit in the sun a while, and generally do whatever you can do to oxidize them quickly. Ferric oxide is going to take most of the arsenic out, but it won’t taste very nice and will discolor the water. However, we can ameliorate that and remove a great many residual nasties by doing a final filtration with activated charcoal. Where are we going to get activated charcoal? Why, both bamboo and coconut shells are excellent sources. Make a mound of coconut husks. Cover it with coconut leaves. Start a fire under it and let it burn; occasionally pour water on it to create steam, but not enough to put it out. After a while, the charcoal at the bottom of the mound will be activated charcoal, because it will have burned in an oxygen-depleted environment. Crush it up and you’re ready to put it in your filter.
Thornton’s one-two punch of using layers of rust and activated charcoal would do a good job of removing the chemical contaminants. Interestingly, mix a lot of rust and sand together and you get better adsorbency than activated charcoal — on the order of 25%! Simple and effective. Activated charcoal, by contrast, though almost magical in its ability to remove a wide range of contaminants, is not simple to make. As Thornton notes, source material is no problem: coconut shells. You can make plain charcoal by simply setting a bunch of coconut shells (or bamboo, etc.) on fire. Activating the charcoal is the hard part. To activate charcoal, you need to remove all of the tarry residues and non-carbon impurities that clog up its pores. There are two basic ways to do this: (1) soak the charcoal in an acid solution and then cook at high temperatures for a few hours, and (2) immerse the charcoal in super-heated steam (around 1,800°F) for 30 minutes. Method 1 may be possible in primitive conditions. Method 2 would be very difficult to do in two days in primitive conditions. Let’s look at proposals for making activated charcoal using both methods. V. Forgione describes a process of activating the charcoal using acid from the car battery:
CAREFULLY open the vent caps on the battery. The locals should have a plastic container to collect the acid from the battery. CAREFULLY pour the acid into the container. Now, you should have anywhere from 1.8 liters to over 4 liters of acid, depending on the size of the battery. Let’s just say we only need 1 liter of acid, since any more would cost you too much of your drinking water. Battery acid is about 36% sulfuric acid and 64% water. We should use 2 liters of bottled water to get the acid down to 9%. When mixing acid with water, add the acid to the water, NOT WATER TO ACID. HOT ACID WILL SPATTER! Pour 2 liters of water into another plastic container that the locals have provided, and SLOWLY add acid to the water, stirring all the while. You have 3 liters of acid and that should treat enough charcoal for our use. Soak the charcoal in the acid, and then reheat in the charcoal pile. With luck, this will activate enough of the charcoal to get the arsenic and benzene out of the filtered water.
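As a sanity check on the mixing step, the standard dilution relation C1·V1 = C2·V2 makes the arithmetic explicit. The 36% figure is the nominal value from the passage; real battery electrolyte strength varies, so treat these numbers as illustrative:

```python
# Dilution bookkeeping via C1*V1 = C2*V2.
# The 36% acid fraction is the nominal figure quoted in the passage.
acid_frac = 0.36     # sulfuric acid fraction of the electrolyte
acid_vol_l = 1.0     # liters of electrolyte drawn off

# Strength after stirring that liter into 2 liters of water:
mixed_frac = acid_frac * acid_vol_l / (acid_vol_l + 2.0)  # 0.12, i.e. 12%

# Water required to reach a 9% target instead:
target_frac = 0.09
water_for_target = acid_frac * acid_vol_l / target_frac - acid_vol_l  # ~3 liters
```

The same relation works for any starting strength or target; and as Forgione stresses, always add the acid to the water, never the reverse.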
If you get the acid solution right and cook it long enough, you have a good shot at activating a lot of the charcoal. Method 2 involves heating steam to 1,800°F — no small feat in a primitive environment. Among the best proposals for doing this was M. Kissler’s:
The final ingredient in the second stage of the water purification will be activated carbon. You must construct an apparatus to make the carbon and instruct the villagers on its use.
Take the first large barrel and make five two-inch holes in the bottom using the drill. Make one two-inch hole in the side of the barrel near the top, large enough to snugly fit a bamboo tube of medium diameter. Take the top off the barrel and save it. This barrel will be used to make the activated carbon.
Nail a heavy piece of bark, three to four inches square, by one corner above the hole. This will be used to cover the hole when necessary. Bend the nail over inside the barrel so it doesn’t come out. (If, for some reason, you brought a hammer but no nails, you could just tie the bark around the top of the barrel and slide it down over the hole when necessary.)
Set the barrel up on a few rocks so the bottom is level and high enough off the ground to allow air to enter the holes in the bottom.
Drill an identical hole into the side of the second barrel to allow the bamboo tube to connect the two. Set this barrel over a small pit where a fire can be made and sustained. This barrel will be used for two purposes: it will heat the filtered water and kill remaining bacteria, and the steam produced will be piped into the first barrel to activate the carbon inside.
For now, take the bamboo tube out of the first barrel. Put kindling in the bottom of the barrel and light it. If, for some reason, you do not have access to fire, you could use the car battery to start a fire by *carefully* placing a strand of steel wool between the two terminals to create a spark.
Once you have a good fire going, add the coconut shells to the barrel. Do not pack them tightly: there must be air space between them.
Once the fire is strong, heap up dirt around the base to restrict the air access. Leave about a four-inch gap. Put the lid on the barrel, leaving the hole in the side open for smoke to exit.
A dense white smoke will come out of the barrel for a time. Bang on the side of the barrel as necessary to ensure the shells move and all burn evenly.
When the smoke turns from white to a thin bluish tint, most of the water has been driven off and the charcoal is now burning. Plug the gap in the bottom with soil and plug the hole in the side with the bark covering, filling all gaps with soil to make an airtight seal. The remaining burn will take about four hours.
Let the sealed barrel sit for half a day. Then, stick the bamboo tube in the holes on the sides of the two barrels so they are connected. Put the bottled water into the second barrel, and tightly close the lid. (In the future, the villagers will use their filtered water.) You will need to place large rocks on the lids of both barrels so the pressure from the steam doesn’t push the tops off. In addition, remove the dirt from around the bottom of the first barrel to allow for steam exhaust once it has passed through the charcoal — this will help to ensure that the steam displaces the air in the charcoal barrel.
Light a fire under the second barrel (the one with the water in it). This will heat the water and create steam and pressure. The steam and pressure help to activate the charcoal inside the other barrel. Let this go for at least one hour.
Could this be made to work in a couple of days? I am skeptical that any of the proposed systems to superheat charcoal could be made to work in two days. It turns out that in Kissler’s case, however, it probably would not matter. The reason is that he cleverly incorporated another filtering material that is every bit as effective as activated charcoal:
…have another group of villagers collect a large quantity of the water hyacinth plants native to the area. Water hyacinth is a weed found in almost every water system on every continent, and is especially prevalent in East Asia. It has been found that its dried and powdered root is an excellent absorbing agent for arsenic in water. According to a report published by the Royal Society of Chemistry, filtering water using the weed reduces the arsenic content of water to below World Health Organization standards. Have the villagers dry and crush the roots.
Considered a bio-scourge to most ecosystems on the planet, the irrepressible water hyacinth is an excellent filter. If it is not in your part of the world yet, it likely will be soon. In any event, it definitely has a presence in East Asia and would be very effective at removing chemical contaminants. Finally, there are approaches that utilize distillation. The problem with distillation pertains to the chemical contaminants. R. Karnesky describes the problem:
While boiling water can be effective at killing bacteria, benzene is more volatile and arsenic less volatile than water. Therefore, neither making a still nor boiling in place will be effective at removing all contaminants. An activated charcoal filter will remove most contaminants in one shot. You don’t happen to have one of those, do you? Well, fortunately, you have plenty of coconuts, the shells of which are a popular way to obtain activated charcoal.
When Karnesky says “more volatile” and “less volatile,” he is referring to the boiling points of the chemicals relative to the boiling point of water. The boiling point of water is 212°F. Arsenic is “less volatile” than water because it has a higher boiling point (1137°F). Benzene is “more volatile” than water because it has a lower boiling point (176°F). The implication of this is that boiling a water-benzene-arsenic solution would first vaporize the benzene, then the water, and then the arsenic (assuming you could get it to that high of a temperature). So with all this out of the way, let’s start with B. Doom’s solar still design:
My solution is based on the condensation of the non-potable water. If the water is evaporated and condensed into a clean container, the sewage, industrial pollution, and parasites will be separated from the water. One of the barrels is left in the sun and filled most of the way with water. Solar heat raises the temperature of the water and increases the humidity of the air trapped inside the barrel. The humid air escapes through vent ports cut in the top surface of the barrel, and travels up through bamboo shafts to the upside-down plastic bottles capping the shafts. The bottles are kept slightly cooler, and the water in the humid air condenses there, collecting in the bottom of the bottles. The level of the water must be kept at a height maximizing the surface area of the water/air junction to facilitate evaporation. To this end, a hole is drilled in one end at the given height and plugged with coconut shell sealed with coconut husk.
Stills are unique in their ability to simultaneously remove all physical and biological contaminants, as well as all chemical contaminants that are less volatile than water. Stills are also fairly simple to build and start to work quickly, which are important factors when the conditions are primitive and the timeline is short. Doom’s still would remove everything except the benzene. Volatile chemicals not only have lower boiling points than water, they have lower evaporation points. Accordingly, the benzene would evaporate into the bottles first and then condense with the water, giving the villagers benzene-water. Not good. The second problem is with output. Calculating evaporation rates is a messy business, but a quick back-of-the-envelope swag leads me to think that this system would only produce about 3-5 gallons per day. There are ways to deal with the benzene problem. The output problem is more challenging for a system based on evaporation. It is a good design, and with a few tweaks, it would be a winner. Thornton offered two such tweaks. He proposed a more traditional fire-heated still design that addressed the benzene problem using an old chemical engineering (and moonshining) trick:
So now, you heat your barrel over the fire you’ve made, taking care to keep it far enough off the fire that it doesn’t actually burn the barrel. Your water will start boiling; steam will come up the column, recondense on the steel wool, and drip back into the barrel. Eventually, however, steam will start making it out the top and you’ll get a flow out of the tube. You probably want to throw away the first bit of liquid that comes out of there. If you’ve got benzene in there, for instance, it boils at about 80 degrees Celsius. This is, at least in ethanol distillation, called throwing away the heads. Once you get a good stream going, this is mostly potable water, and if the villagers drank it, it probably wouldn’t kill them quickly. Most things biological will have been killed by the boiling, and most chemical contaminants will have sufficiently different boiling points so that by throwing away the heads and not running the still until it’s completely dry, they will generally be left behind as well. If you had to stop here, it’d be a decent stopgap solution.
“Throwing away the head” can be a very effective method of getting rid of the benzene-rich portion of the solution. Additionally, the specific density of benzene is lower than that of water. This means that if you let the condensed water stand in an undisturbed environment, most of the remaining benzene will rise to the top. Since evaporation is a surface phenomenon (unlike boiling, which occurs throughout a liquid), the benzene will evaporate first.
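The volatility argument comes down to a sort by boiling point. A tiny sketch, using the figures quoted above:

```python
# Order in which components leave a boiling benzene/water/arsenic
# solution, from the boiling points quoted in the passage (degrees F).
boiling_points_f = {"benzene": 176, "water": 212, "arsenic": 1137}

vaporization_order = sorted(boiling_points_f, key=boiling_points_f.get)
print(vaporization_order)  # ['benzene', 'water', 'arsenic']
```

Benzene comes off first (the "head" you throw away), water second, and the arsenic stays behind in the pot.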
Thus, one approach would be to use a still, throw away the head to get rid of most of the benzene, and let the remaining solution gas out in an open barrel for a day, or boil it to burn off any remaining benzene. Draw water from the bottom of the barrel (just to be safe) and it should be potable. Stills also require far less maintenance than a filter (e.g., no cleaning or replacing of materials) and will work in a reliable, measurable fashion. The two main problems with distillation: (1) it requires a lot of energy as compared to a filter to produce the output required, and (2) it is only about 20% efficient, requiring five gallons of contaminated water to produce one gallon of distilled water. Is there a way to tell how much, if any, benzene remains? Benzene is clear and colorless, but it does have a distinctly sweet odor. Most people can smell benzene in water at two parts per million. Concentrations of benzene in water over five parts per billion are considered unsafe to drink. Suffice it to say that if you can smell benzene at this point, you shouldn’t drink the water, and you should start climbing coconut trees. Now that we are dizzy from the prospect of sniffing benzene, it is a good time to close.
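Two of the closing figures are worth making explicit; a quick sketch using only the numbers quoted above:

```python
# Feed-water implied by the still's quoted efficiency (five gallons of
# contaminated feed per gallon of distillate), and the margin between
# benzene's odor threshold and the quoted safety limit.
FEED_PER_GALLON_OUT = 5           # from the ~20% efficiency figure
daily_need_gal = (20, 30)         # village demand range, gallons/day
feed_gal = tuple(n * FEED_PER_GALLON_OUT for n in daily_need_gal)
print(feed_gal)                   # (100, 150)

smell_threshold_ppb = 2 * 1000    # 2 ppm odor threshold, in ppb
safety_limit_ppb = 5              # unsafe above 5 ppb
print(smell_threshold_ppb // safety_limit_ppb)  # 400
```

So a still sized for the village must move 100-150 gallons of river water a day, and the 400x gap is why the smell test only works one way: water that smells of benzene is far over the limit, but odorless water is not thereby proven safe.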
Given a two-day time limit with people already getting sick, drawing water from coconuts is the best short-term solution. It will provide pure drinking water with minimal cost and complexity. Long-term, however, a village needs water for sanitation, washing, cooking, and so on — more water than can reasonably be extracted from coconuts. So, short-term let them drink coconut water, but long-term we need a better solution.
Some form of rapid sand or biological filter using a combination of sand, gravel, and carbon is a logical choice. Ideally, you want to activate the carbon. I am skeptical as to whether carbon can be reliably activated in a primitive environment. To the extent that it can, I am skeptical that it can be activated in two days. This is where the water hyacinth proposal is particularly intriguing. The plant is an effective filter in both living and powdered form. So, in addition to being an element in a sediment filter, a basin filled with water hyacinths could be a simple and effective long-term component of an overall water treatment strategy. It is also a good back-up strategy in case you fail to fully activate the carbon. A problem intrinsic to all of the filtration strategies is that there is no easy way to test to what extent the water is potable except for drinking it and seeing if you get sick. This is where distillation carries some distinct advantages. Distillation enables you to visibly and reliably remove all of the contaminants except for volatile chemicals (in this case, benzene). And, as described above, there are ways of dealing with the benzene.
A conservative approach would be to have the villagers use the coconuts for drinking water and use the distilled or filtered water to meet sanitation and other needs. Longer term, I prefer the distillation process described to filtration. It is a simple method that the villagers can apply to achieve consistent results. If you go the filtration route, consider using iron oxide and water hyacinths, if available. They are as effective as activated charcoal and require far less energy to incorporate. That said, activate charcoal if you have the time and kung fu to do it! General factors to consider for all solutions: time to develop, rate of output, effectiveness at removing all types of contaminants, reliability, testability, resources, simplicity, long-term viability, process transferability, and safety.
I would like to again thank everyone who submitted solutions to the MakeShift 02 challenge. The submissions were very creative and well thought-out, and as before, selecting two winners from the batch was no easy task.
Winners receive Make: T-shirts to celebrate and show off their unique brand of genius and the ultimate MakeShift Master tool — the SWISSMEMORY USB Victorinox 512MB — equally useful for hiking and hacking. Honorable mentions get fame and recognition for their excellent contributions. In exchange, we at Make: expect great things. Go forth and solve the world’s problems!
Without further ado, the winners of the MakeShift 02 challenge are:
MakeShift Master — Plausible: Adam Thornton
MakeShift Master — Creative: Jesse Crossen
“Schmutzdecke” — Honorable Mention: Vinny Forgione
“A.A.B. Bussy” — Honorable Mention: Mac Cowell, Nick Cain, Barratt Park, and Brandon Carroll
“Eichhorina Crassipes” — Honorable Mention: Mark Kissler
Congratulations to the MakeShift Masters and the Honorable Mentions (applause…accolades…bowing). You all did a great job of taking on this difficult problem and communicating your solutions in a fun and effective manner. I encourage all readers to study these winning entries and share this link with friends. The first step in solving this global problem is education, so please help get the word out. And until the next MakeShift challenge, happy making! | <urn:uuid:1531b3cc-3cec-4b7f-af58-6e0daa3925c9> | CC-MAIN-2021-21 | https://makezine.com/2017/02/08/makeshift-02/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00135.warc.gz | en | 0.939514 | 6,633 | 3.140625 | 3 |
Food System and Circular Economy
Circular economy can play an active role in solving the unsustainability of the food production system, contributing to the creation of shorter and more resilient supply chains. Some solutions include policy regulations driving consumption towards more sustainable choices and the reduction of food waste: “best before” labels might be scrapped, food sharing initiatives incentivized and organic waste regarded as a high-value raw-material. Bio-refineries can be the catalyst of a green transition, where food waste can generate biofuels, bio-chemicals, plastics, textiles, medicines and much more. Circular practices seem to hold the potential for a win–win solution, simultaneously enhancing sustainability throughout the entire value chain (from production to consumption and post-consumption) and improving its resilience through the introduction of localized supply chains, making the food system less dependent on international trade. The European Union is working towards this direction (as its policy and social media agenda exposes) and will hopefully accelerate the transition to meet its Green New Deal expectations.
Having shown the increased importance placed on the food system in the EU’s social media agenda, as well as the growing concerns around the sustainability and resilience of the food system, we shall now discuss how this topic has been integrated into the scholarly and practitioner debate over the circular economy. By extending our analysis beyond social media, we hope to achieve a more finely grained assessment of the nexus between the COVID-19 pandemic and the food system. At the same time, by assessing the link between the food system and the circular economy, we seek to propose some actionable—albeit preliminary—solutions.
As the German philosopher Feuerbach said, “We are what we eat.” Perhaps this saying might even extend to COVID-19, as many have pointed out that our global food system (and economy) greatly increases our risk of experiencing a pandemic. In what follows, we will present the results of our comprehensive systematic literature review (drawing on Tranfield et al.’s approach) to assess causes, consequences and circular solutions pertaining to the link between the food system and the COVID-19 pandemic.
Systematic reviews differ from traditional narrative reviews in their replicable, scientific and transparent process, aimed at minimizing bias through exhaustive literature searches of published and unpublished studies and providing an audit trail of reviewers’ decisions, procedures and conclusions. Our review began with the definition of our goals: to find and integrate the most recent and relevant literature on the relationship between COVID-19, the food system and the circular economy. Subsequently, we analyzed and selected the most recent available literature on the topic, encompassing both scientific papers and gray literature, such as reports and plans from policymakers and international organizations. No subjective distinction was made between scientific papers and other documents, provided that they respected the rules described hereinafter.
The research was mainly conducted through the SCOPUS and Google Scholar public search engines. Forty-three references were selected and shortlisted on the basis of publication date (published in 2015 or later) and correspondence with specific keywords (i.e., “COVID-19,” “food,” “circular economy”), with the aim of generating collective insights through a theoretical synthesis of fields and subfields. The search was first conducted with the use of the “AND” Boolean operator, then expanded using the “OR” Boolean operator.
The data extraction process focused on synthesizing key information, based on the abovementioned goal of offering an up-to-date review of the current global food system and selecting the most recent and relevant solutions to enhance its sustainability and circularity.
The World Food Programme recently confirmed that the devastating economic impacts of COVID-19 reinforce the need for investments to prevent future outbreaks of infectious diseases. In so doing, it emphasized the interconnections between people, animals, plants and their shared environment, as well as the need for stable and sustainable architecture to make economic growth feasible, while respecting the surrounding environment .
There are two primary issues with the current industrial food system. First, intensive livestock production amplifies the risk of disease, since it involves the confinement of large numbers of animals in small spaces, narrowing genetic diversity and fast animal turnover. Second, habitat destruction, unchecked urbanization and land grabbing lead to amplified human–wildlife interaction, which eventually leads to zoonotic spillover . It is therefore clear that pandemics, like the COVID-19 one, are not random events, but the logical result of our current food system and, to a wider scale, our economic model.
Another catalyst of pandemics is urbanization, as indicated above. Thirty-five years ago, more than 60% of the global population lived in rural areas; this figure has now dropped to 46%, while the urban population is set to reach 68% by 2050 . Cities are already consuming 75% of the world’s natural resources and 80% of the global energy supply . Urbanization impacts food consumption patterns by increasing demand for processed foods, animal-based foods, fruits and vegetables. Higher urban wages also tend to increase the opportunity costs of preparing food and favor food products that require a large amount of labor, such as fast food, store-bought convenience food and food that is prepared and sold by street vendors .
China, the alleged epicenter of this and several previous disease outbreaks, has one of the highest urbanization rates in the world, having doubled its level over the past 40 years (from 22.7% to 54.4%) . This urbanization has closely paralleled rising animal protein consumption (due to higher wages), increased land conversion and livestock production, higher zoonotic risk (due to closer contact with wild animals) and a more rapid spread of pathogens through the globalized channels of world economy.
As mentioned above, when lockdown measures were first introduced, stockpiling behaviors prevailed, while governments reassured their residents about the resilience of food supply chains and business continuity in the agri-food sectors. In fact, there are diverging opinions on the actual solidity of the current food system: for some, empty grocery shelves are not just the result of the human tendency to hoard in times of danger, but also an important reminder that our food supply chains are easily disrupted and that many of our food systems lack resiliency and redundancy . Many global regions rely on highly centralized food systems, at the expense of strong local and regional systems that could provide a better buffering capacity when needed . However, other scholars have countered that if the number of importing countries has risen for most crops, so has the number of exports in many countries. This has made trade more resilient to swings in supply and demand. Supply lines may empty, but alternatives can be found. For instance, when Indian traders stopped signing new export contracts in April, Carrefour, a French supermarket group, found new rice suppliers in Pakistan and Vietnam and opened a beef import route from Romania . Nonetheless, even the most optimist commentators acknowledge that the current food system has bottlenecks (as does every global supply chain) and that good harvests in 2019 were able to account for some of the resilience of the food supply chain in the face of COVID-19 .
Over the long term, consumer food habits might change along three main directions. First, the rapid growth in online grocery delivery services might continue. While many big companies were already implementing this service pre-pandemic, their systems struggled to cope with the sudden expansion in online orders during the lockdown, leaving long time lags before delivery slots were available . The same could be said about food delivery systems, which mainly operate via mobile phone apps: since the pandemic hit, such apps have been increasingly used by restaurants, as in-person dining has been severely restricted in many countries. Therefore, to some extent, the crisis has dematerialized and “desocialized” the food sector, speeding up consumers’ adoption of online services. The duration and degree of this trend is still uncertain, but the effect could be noticeable (depending on cultural factors) .
Second, consumers might demonstrate a revived interest in “local” food supply chains. In fact, interest in “local foods” was established prior to the pandemic, as people understood this food to offer economic, social, environmental and health benefits . Local food is usually perceived as fresher and—particularly in the present context—more convenient, as it can be easily bought in smaller stores, allowing consumers to avoid long queues outside supermarkets. During the pandemic, consumers also expressed a desire to support the economic recovery of local small and medium enterprises (SMEs). Again, how rooted and long-lasting this effect will be is still unknown, also considering that local food chains are less cost efficient than global ones .
Third, the pandemic has forced people to significantly change their daily lifestyles, and these changes might persist over the long term. Staying home all day in what was previously a rushed, globalized society has tested people’s resilience and led them to question their priorities. People have been forced to slow down their rhythms and rediscover new hobbies and passions (e.g., cooking, instead of buying processed food). It seems that waste recycling has benefitted from these changes , alongside a general decrease in waste production (due also to the economic slowdown) .
4. Circular Solutions
As discussed above, the pandemic has put the current food system—focused on a linear and globalized production and consumption model—under high stress. Tjisse Stelpstra of the European Committee of the Regions has said that the devastating situation created by COVID-19 must bring all policymakers together and be the wake-up call for a new economic model that places social wellbeing and environmental sustainability at the core of the EU’s economic recovery . The circular economy could be a pivotal element of this recovery plan .
According to an EU advisory scientific study , achieving a sustainable food system means “increasing or maintaining agricultural yields and efficiency while decreasing the environmental burden on biodiversity, soils, water and air; reducing food loss and waste; and stimulating dietary changes towards healthier and less resource-intensive diets”. Jurgilevich et al. summarized that the EU Commission have identified three main stages of the food system with reference to the circular economy: production, consumption and waste.
As for the first stage, the “localization” of the food system might represent a more resilient and sustainable solution: localized food systems reduce waste and favor nutrients . Combining local and seasonal elements in short supply chains reduces storage and transportation, provides a better supply–demand balance, creates more transparency and tracking and contributes to waste reduction. In addition, consumers seem to place higher value on food purchased in local markets.
Another known issue regarding food production is packaging. Our current food system is based on single-use packaging, although recent trends have shown improvements in both the quantity and the quality of this packaging. Still, many recycling processes are insufficient, as is the case for light PET bottles and multilayer plastic (as opposed to mono-material plastic) . In this vein, policymakers should continue to incentivize the reduced use of plastic, in favor of more durable or recyclable materials, such as paper, aluminum, steel and glass, even though these materials do not altogether prevent the accumulation of unwanted metal ions through repeated recycling . For this reason, research and development (R&D) in materials science and engineering must be a priority.
As for consumption, policymakers should focus on making sustainable choices the easiest options and transferring costs to unsustainable food choices. One example of a sustainable choice is the avoidance and/or reduction of meat consumption. Through the lens of the circular economy, reduced meat consumption increases the efficiency of material flows within the food system by reducing the amount of energy, land and water used per calorie of food produced . Furthermore, policymakers should invest more in food and nutrition education, in order to raise awareness not only amongst the younger generations, but also amongst the older ones, by disseminating information campaigns through both traditional and innovative media channels.
Besides these non-binding actions, more incisive ones (i.e., fiscal and regulatory measures) could force producers and consumers to improve their practices in support of greater sustainability. Policymakers might introduce bans, impose specific production and sourcing requirements, influence demand via public procurement and impose taxes or fees. These fiscal measures might encourage producers, suppliers and retailers to make sustainable choices and/or directly add costs to unhealthy or non-sustainable food for customers, in the form of a Pigouvian tax. Indeed, the SAPEA report states that “examples of relatively imposing instruments that have become increasingly popular include the use of fiscal instruments (e.g., sugar and fat taxes), standard-setting (e.g., on the maximum amount of salt allowed in products), and outright bans (e.g., on trans fats)” (p. 98).
The final stage of the food system, relating to waste, is perhaps where the circular economy can have the largest and most immediate impact. Indeed, as stated by the European Union , “food waste takes place all along the value chain: during production and distribution, in shops, restaurants, catering facilities, and at home. This makes it particularly hard to quantify” [par 5.2]. Within the larger food system, production accounts for approximately 24–30% of total waste, while the post-harvest stage accounts for 20% and consumption accounts for 30–35%. Cereals account for 53% of the total waste; surprisingly, meat accounts for only 7%—far less than the impact of meat production on the environment . According to Stuart, 30–50% of material intended for consumption (including animal material that is fed to animals or discarded as a byproduct) is wasted in North America and the EU at different stages of the food system . According to Bajzelj , the reduction of food waste is essential for achieving a resilient food system.
It is important to distinguish between edible and non-edible food waste, as only the latter is actually defined as waste. Edible food is potentially ready to be consumed, either by its owner or by another person. To reduce food waste, food labelling policies should be changed and harmonized, as “best before” labels are likely to generate unnecessary waste due to consumer misperceptions of food quality. Indeed, according to Borrello et al. , “Even when consumers try to follow indications of producers, 20% of food is thrown away because of the confusion generated by the dates on product labelling”. [p. 2]. Policymakers should act to prevent these losses by imposing strict limitations on “best before” labels. In this vein, the EU Commission announced that it “will examine ways of promoting a better use and understanding of date marking by the various actors of the food chain. The EU has also adopted measures to prevent edible fish being thrown back into the sea from fishing vessels” [par. 5.2].
Some authors warn that food sharing initiatives might facilitate upstream food waste, as such initiatives allow consumers to get rid of their waste without preventing its generation in the first place. Thus, they act as “short-term sticking plasters” that obscure entrenched issues of food poverty. Further research is needed to verify the real impact of these actions, which are very diverse and fragmented in their nature .
As regards non-edible food waste, this should remain in the system chain and be regarded as a precious resource—not only for the production of more food, but also for the production of new energy (which can be used as fuel in countries seeking to reduce their environmental footprint) and much more. Some policymakers promote “backyard composting” , or self-composting at home. More actions and incentives may be needed to promote this activity, considering that it also facilitates the possibility of growing fruits, vegetables and other plants at home. This would enhance household engagement with the production of clean local food and reduce demand for industrial agricultural products, thereby limiting the use of water and chemical fertilizers.
That being said, food waste can take on many other forms, thanks to “green chemistry” solutions within bio-refineries, which can generate biofuels, bio-chemicals, plastics, textiles, medicines and more from organic waste . While a circular food system should primarily aim at transforming food waste into new food, where this is not possible, the system should reinvest these resources into new energy or material forms, which may be equally socio-economically beneficial.
The present analysis clearly shows that a circular food system should not be entirely self-contained, but it should incorporate a wider reconsideration of the current fossil-fueled, linear and unsustainable economic model towards one that is green, resilient and sustainable model—that is, a bioeconomy powered by circularity. Policymakers should therefore engage more with this transition, with the aim of creating a fertile ground for a more sustainable food system (and society) by:
Reshaping food production via localized supply chains and improved packaging;
Guiding consumption towards sustainable choices, through a mixture of tax and education policies;
Focusing and investing in the conversion of non-edible food waste into energy and materials, via green chemistry and bio-refineries.
The entry is from 10.3390/su12197939
- Tranfield, D.; Denyer, D.; Smart, P. Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review. Br. J. Manag. 2003, 14, 207–222, doi:10.1111/1467-8551.00375.
- Joint Statement on COVID-19 Impacts on Food Security and Nutrition|World Food Programme. Available online: https://www.wfp.org/news/joint-statement-covid-19-impacts-food-security-and-nutrition (accessed on 3 September 2020).
- D’Adamo, I.; Falcone, P.M.; Martin, M.; Rosa, P. A Sustainable Revolution: Let’s Go Sustainable to Get Our Globe Cleaner. Sustainability 2020, 12, 4387, doi:10.3390/su12114387.
- D’Adamo, I.; Rosa, P. How Do You See Infrastructure? Green Energy to Provide Economic Growth after COVID-19. Sustainability 2020, 12, 4738, doi:10.3390/su12114738.
- FAO. COVID-19 and the Crisis in food Systems: Symptoms, Causes, and Potential Solutions. Available online: http://www.fao.org/agroecology/database/detail/en/c/1271231/ (accessed on 11 September 2020).
- Food and Agriculture Organization of the United Nations (FAO). The Future of Food and Agriculture. Available online: http://www.fao.org/publications/fofa/en/ (accessed on 11 September 2020).
- Food, Cities and the Circular Economy. Available online: https://www.ellenmacarthurfoundation.org/explore/food-cities-the-circular-economy (accessed on 3 September 2020).
- Wu, T.; Perrings, C.; Kinzig, A.; Collins, J.P.; Minteer, B.A.; Daszak, P. Economic Growth, Urbanization, Globalization, and the Risks of Emerging Infectious Diseases in China: A Review. Ambio 2017, 46, 18–29, doi:10.1007/s13280-016-0809-2.
- Five COVID-19 Reflections from a Food System Perspective—And How We Could Take Action—The Rockefeller Foundation. Available online: https://www.rockefellerfoundation.org/blog/five-covid-19-reflections-from-a-food-system-perspective-and-how-we-could-take-action/ (accessed on 3 September 2020).
- The Economist. A Dangerous Gap: The Markets v the Real Economy. 9th May 2020. Available online: https://www.economist.com/weeklyedition/2020-05-09 (accessed on 3 September 2020).
- Hobbs, J.E. Food Supply Chains during the COVID‐19 Pandemic. Can. J. Agric. Econ. Rev. Can. D’agroeconomie 2020, 68, 171–176, doi:10.1111/cjag.12237.
- Cranfield, J.; Henson, S.; Blandon, J. The Effect of Attitudinal and Sociodemographic Factors on the Likelihood of Buying Locally Produced Food. Agribusiness 2012, 28, 205–221, doi:10.1002/agr.21291.
- Rifiuti a Roma, Meno 12% a Marzo Aumenta la Differenziata LA GUERRA AL COVID-19—Corriere.It. Available online: https://roma.corriere.it/notizie/cronaca/20_aprile_06/rifiuti-meno-12percento-marzoaumenta-differenziata-a281dccc-7760-11ea-9a9a-6cb2a51f0129.shtml (accessed on 3 September 2020).
- Coronavirus in Lombardia, Meno 27,5% di Rifiuti a Milano. Da Oggi Nuovo Ciclo di Sanificazione—La Repubblica. Available online: https://milano.repubblica.it/cronaca/2020/04/14/news/coronavirus_in_lombardia_meno_27_5_di_rifiuti_a_milano_da_oggi_nuovo_ciclo_di_sanificazione-253954812/ (accessed on 3 September 2020).
- D’Adamo, I. Adopting a Circular Economy: Current Practices and Future Perspectives. Soc. Sci. 2019, 8, 328, doi:10.3390/socsci8120328.
- EU Commission. C. No. 98, 11th M. 2020, Para. 5. 2. EUR-Lex-52020DC0098-EN-EUR-Lex. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2020:98:FIN (accessed on 3 September 2020).
- Jurgilevich, A.; Birge, T.; Kentala-Lehtonen, J.; Korhonen-Kurki, K.; Pietikäinen, J.; Saikku, L.; Schösler, H. Transition towards Circular Economy in the Food System. Sustainability 2016, 8, 69, doi:10.3390/su8010069.
- Geueke, B.; Groh, K.; Muncke, J. Food Packaging in the Circular Economy: Overview of Chemical Safety Aspects for Commonly Used Materials. J. Clean. Prod. 2018, 193, 491–505, doi:10.1016/j.jclepro.2018.05.005.
- Mylan, J.; Holmes, H.; Paddock, J. Re-Introducing Consumption to the ‘Circular Economy’: A Sociotechnical Analysis of Domestic Food Provisioning. Sustainability 2016, 8, 794, doi:10.3390/su8080794.
- SAPEA. A Sustainable Food System for the European Union. Available online: https://www.sapea.info/topics/sustainable-food/ (accessed on 3 September 2020).
- Vilariño, M.V.; Franco, C.; Quarrington, C. Food Loss and Waste Reduction as an Integral Part of a Circular Economy. Front. Environ. Sci. 2017, 5, 21, doi:10.3389/fenvs.2017.00021.
- Stuart, T. Waste: Uncovering the Global Food Scandal; WW Norton & Company: New York, NY, USA, 2009.
- Bajželj, B.; Quested, T.E.; Röös, E.; Swannell, R.P.J. The Role of Reducing Food Waste for Resilient Food Systems. Ecosyst. Serv. 2020, 45, 101140, doi:10.1016/j.ecoser.2020.101140.
- Borrello, M.; Caracciolo, F.; Lombardi, A.; Pascucci, S.; Cembalo, L. Consumers’ Perspective on Circular Economy Strategy for Reducing Food Waste. Sustainability 2017, 9, 141, doi:10.3390/su9010141.
- Illmer, P. Backyard Composting: General Considerations and a Case Study. In Microbiology of Composting; Springer:Berlin/Heidelberg, Germany, 2002; pp. 133–142, doi:10.1007/978-3-662-08724-4_11. | <urn:uuid:d023fc62-7a42-481c-8e7d-33c9d036eb4c> | CC-MAIN-2021-21 | https://encyclopedia.pub/2980 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00335.warc.gz | en | 0.907663 | 5,314 | 2.6875 | 3 |
(A talk given at the Annual Conference of the British Buddhist Society
held at Summer School, London on the 1st September 1973)
Ven. Balangoda Ananda Maitreya
Nibbana is a Pali word for which the Sanskrit equivalent is Nirvana. Both these words mean cool, peace, calm, serenity, bliss, supreme happiness, emancipation, passionlessness and the Summum Bonum. Now I am going to set before you how Nibbana is explained in Theravada literature.
The term Nibbana and its equivalents Nibbuti and Vimutti are used in various Suttas to express several experiences of mind.
According to a certain classification we find seven kinds of experiences under the terms Nibbana, Nibbuti and Vimutti, namely: Micchaditthi-Nibbana, Sammuti-Nibbana, Tadanga-Nibbana, Vikkhambhana-Nibbana, Samuccheda-Nibbana, Patippassaddhi-Nibbana and Nissarana-Nibbana or Nibbana-dhatu.
In the foregoing list the first one is Micchaditthi-Nibbana. Here Micchaditthi means wrong view. Materialists ignore religious practices and value only material things such as wealth, bodily comforts and sensual enjoyments. According to them the real happiness lies in the enjoyments of senses and apart from this they recognize no other happiness, no other Nibbana. This view is referred to in the Brahmajala-Sutta of the Dighanikaya as follows:
“Whenever the soul (being), in full enjoyment and possession of the five pleasures of senses, indulges all its functions, then, the being has attained, in this visible world, to the highest Nirvana”.
Next we come to Sammuti Nibbana. In common parlance, release or relief from worries or troubles is called ease or happiness. When we read the life of the prince Siddhartha, we come across an account of an important incident in his life. One day, when he was returning in his chariot from the royal pleasure grove, a Sakyan girl called Kisa Gotami, seeing his majestic but saintly and charming complexion, breathed forth this joyous utterance:
“Nibbuta nuna sa mata
Nibbuto nuna so pita
Nibbuta nuna sa nari
Yassayam idiso pati”
(Happy and cool indeed is the mother, happy and cool indeed is the father, who has this or a similar one for her or his son; happy and cool indeed is the wife who has got this or a similar one for her husband).
In this utterance she used the word ‘Nibbuta’ to mean ‘happy, cool or fortunate’. The literal meaning of this word is ‘one that has attained Nibbuti or Nibbana’. This sort of Nibbana or cool state of one’s life is called Sammuti Nibbana, the happiness according to convention.
Inspired by her words, the prince began to ponder over how one would become perfectly happy and cool. He became immersed in this thought and at last came to the conclusion: “So long as there remain the fires of passions unquenched and uncooled in one’s heart, one could not be counted as really happy and perfectly cooled. So I must find out without delay a way to extinguish these fires.”
In this incident we see that the Sakyan girl meant a peaceful and happy family life by the term nibbuta (happy and cooled).
Suppose a certain part of a country has been infected with some epidemic, dysentery or plague. The inhabitants of that area would no doubt spend an anxious time full of fear and dismay. But if, after some weeks, they come to learn that the epidemic has ebbed down and abated and completely passed out of the country, we can imagine what an intense joy and consolation might arise in them.
If we examine their mental attitude, we could see that their minds are devoid of the fear and anxiety which had obsessed them at the time of epidemic. Is that all? There is a positive side as well. Their minds are now pervaded by peace, consolation, hope and immense joy. This is not nothingness or mere void. This is a thing they experience.
The next higher stage of happiness is Tadanga-Nibbana. Suppose you do some unselfish service to a man in a serious trouble and rescue him therefrom. On such an occasion your mind becomes full of wholesome states such as pity, compassion and unselfishness, and at the very moment the unwholesome states as selfishness and the like have no opportunity to surge up in your mind. Or, suppose you pay respect to a saintly person, to your parents or teacher or any virtuous person whom you regard as worthy of respect. On such occasions your mind is full of faith, love and modesty on one hand and self-conceit, haughtiness and the like get no chance to appear in it on the other hand. When we do some philanthropic service, when we esteem those who are worthy of respect, or when we ponder over the value of abstention from selfish or cruel deeds or words, on such occasions the unwholesome states of mind like selfishness, anger and conceit find no chance to rise up in the mind because the wholesome states like generosity, loving kindness and modesty have already occupied our minds. If we perform any good deed even for five minutes, then our mind becomes happy, serene and clean. This temporary or momentary comfort or serenity and wholesomeness of mind is called Tadanga-Nibbana, the temporary peace of mind. This state of mind is not nothingness.
There is a still higher experience, deeper and stronger than this preceding one, which is called Vikkhambhana-Vimutti or Vikkhambhana-Nibbana. Suppose a man sees the evils and futility of the pleasures of senses and intends to develop himself spiritually. For this purpose he starts practising concentration on a selected object, for there are forty kinds of objects for such meditations, according to Buddhist scriptures. The mind of the average man is usually not self-composed, not settled, but is drawn towards this or that object at every moment. Being scattered and constantly disturbed, it is frequented by selfishness, ill will, conceit, fear and many other lower mental conditions, owing to which it turns weaker and weaker. So, to develop and strengthen his mind, he must first control it so that it may not wander after this or that object. He must isolate his mind from other objects and fix it exclusively on the chosen object only. At the start it may seem a very hard and tiresome task. But if he tries hard for some time, he will surely come to success. His mind will forget the whole outer world and remain fixed on its only object, and become self-collected. There are eight stages of this self-collectedness of mind to be attained gradually, each successive stage being deeper than its preceding one. These are called Jhanas in Pali terminology.
In these stages of Jhanas, the meditator feels blissful and suffused with a sense of ease and pure lucidity of mind. Weaknesses of mind, sensuality, ill will, sloth and torpor, worry and restlessness and perplexity subside and the mind feels healthy, happy, strong, calm, serene and blissful. The bliss experienced at these afore-said stages of Jhana is called Vikkhambhana-Nibbana or Vikkhambhana-Vimutti, the ecstatic bliss experiences as a result of the subsidence of passions.
This Vikkhambhana-Nibbana is not nothingness, but a bliss to be experienced, more subtle, more serene than the preceding ones.
But this Jhanic bliss is vulnerable. If somehow or other, the meditator, owing to his slight negligence, turns his mind towards external objects, it is not impossible for him to fall down from the same bliss, as he has not as yet been freed from vulnerability. Now the meditator, as he knows his weaknesses, takes further steps and begins to practise Vipassana (the development of insight).
In this process of practice, first of all, he examines his own body very closely in terms of its constituent parts and analyses them part by part. He goes deeper and deeper in this process and sees at last that his physical body is a volume of waves, a form of wave-movements, that it is dynamic, and therefore impermanent, with nothing in it that is substantial. After examining his own physical body, he begins to examine his mind and its characteristics, observes how thoughts appear and vanish and discerns very clearly that the so-called mind is but a process of states, a stream of activities called thinkings, a flux of continued happenings. Mind and all its states, he sees, are subject to change, and all of them are impermanent and unsubstantial. Thus he contemplates the nature of his body and mind by turns, scrutinizes and analyses them in various ways and realizes at last that the so-called man or creature or being is a mere phenomenon. At the moment of this realization he perceives with his mind’s eye that he himself and all other beings in the world are but mind-matter processes subject to momentary change and void of any substantiality.
He clearly discerns the unsatisfactory nature (Dukkha) of the life in the world, puts out the adherence to the wrong views (miccadhitthi) and uncertainty (vicikiccha), perceives Nibbana-dhatu intuitively (Nirodha-pativedha) and cultivates the strength of the path-factors (Magga-bhavana). This moment at which the afore-said four functions are fulfilled is called entering the holy stream (Sotapatti-Magga). It is immediately followed by two or three thought moments, taking Nibbana-dhatu for their only object. These thought moments are called Sotapatti-phala-cittas, the fruition of the first Path-consciousness (Sotapatti-magga), in which the gross sansaric fatigue is extinguished.
Still he continues developing insight on the impermanence, unsatisfactoriness or entitylessness of the psycho-physical process which we call man or being, and when his meditation develops enough the function of realizing four great truths recurs in his mind. At the second time the remnants of his craving and allied unwholesome states of mind turn so thin that he is destined to be reborn here only once more and hence he is called Sakadagami (Once-returner). At this stage too, his mind is fixed on Nibbana-dhatu. This stage is immediately followed by some two or three mind-units, which fixing on Nibbana, remove the mental fatigue to a great extent. This is called the stage of the second fruition (Sakadagami-phala or dutiya phala).
Once again he meditates as usual and when his meditation develops enough, the fourfold function of realization recurs at which the craving and its allied passions are eliminated to such a degree as he becomes destined never to be reborn within the boundary of the Sphere of Sensuality (Kamaloka) and lower Brahma realms. If he does not fulfil his task of rooting out craving, he becomes destined to be reborn in a higher and subtle celestial sphere known as the Holy Abodes (Suddhavasa), where there are beings who have dispelled from their minds sensuality and ill-will entirely. He who has attained to this stage is called Anagami (Never-returner). This stage too is immediately followed by two or three mind-units, which fixed on Nibbana-dhatu, remove a great portion of the long Sansaric fatigue. This is called the stage of the third fruition (Tatiya-phala or Anagami-phala).
Now the meditator starts once more to analyse mentally both his body and mind more profoundly, contemplates their impermanence, unsatisfactoriness or entitylessness of the whole psycho-physical life. When his meditation process rises up to its culmination, he clearly perceives the phenomenal nature of the life in the world, roots out the craving for it entirely, intuits Nibbana-dhatu and fixes his mind firmly on it, thus reaching the end of the Path. This is called the Path-stage of Arahantship. This stage, as usual, is followed by two or three mind-units which, fixing on Nibbana-dhatu, removes the remainder of the sansaric fatigue that had been caused by the mental defilements so long. This last stage is called arahantship or Perfection.
Now we have to look back again. When the meditator practises Vipassana, in its preliminary stages, passions of his mind subside temporarily and he experiences a temporary peace of mind which is called Tadanga-Nibbana.
When he develops his meditation to a higher level so that the passions get no chance to Surge up, as he was in the Jhanic ecstasy, then he is said to have attained to Vikkhambhana-Nibbana.
When he reaches the four higher stages of the Four Great Truths are realized, he is said to have attained to Samuccheda-Nibbana, as, at these stages, he eradicates some passions.
In the long journey in the Samsara, the phenomenal existence, his thought-process was afflicted and consequently fatigued by the dormant and surging passions. Though those passions are radically removed at the afore-mentioned four holy stages, the fatigue that had been created by those passions still remains. So, immediately succeeding each of those four passions-dispelling mind-units (or the Path-consciousnesses), some two or three more mind-units rise up fixing themselves, too, on the Nibbana-dhatu, and as a result the afore-mentioned mental fatigue is removed thereby.
These latter four stages are called the stages of the fruition of the Path.
The peace tha pervades over the mind at these four stages is called Patippassaddhi-Nibbana, the cool of mind as experienced at the removal of mental fatigue.
Now so far we have passed over a number of stages of mental peace. None of them can be called nothingness. On one hand unwholesome states of mind are removed and on the other hand wholesome states and peace of mind are gained at those stages.
The persons who have attained to these eight Holy stages perceive Nibbana-dhatu, the Nibbana-Element with their mind’s eye, fix their mind on it and experience the bliss arisen thereby in their mind. The very same Nibbana-dhatu, on which the minds of those holy persons are fixed is called Sa-upadisesa-Nibbana.
With reference to the nature of an Arahant after his death, the very same Nibbana-dhatu is called Anupadisesa-Nibbana.
None of the aforementioned states called Nibbanas or the Nibbana-dhatu cannot be regarded as nothingness or annihilation.
Now rises the question: “How could one know the existence of Nibbana-dhatu?”
According to Theravada-teachings, the existence of Nibbana-dhatu may be known by three ways namely, agam-siddhi, anumeyya-siddhi, and paccakkha-siddhi. Of these three, agama-siddhi means the knowledge of Nibbana-dhatu through the study of the scriptures. In Itivuttaka, thus has it been said: “There exists, O Brethren, an unborn, an unbecome, an unmade, an uncompounded. If O Brethren, there were not this unborn, this unbecome, this unmade, this uncompounded no hope at all could be had by this born, by this become, by this made, by this compounded”.
One day a king met an Arhant nun and asked her to tell him as to what would happen to an Arhant, a perfected Saint after his death.
The Nun said: Permit me, O king, to ask you in return a question, and if it shall seem good to you, so do you reply. What do you think, O king? Have you among your men an accountant, a master of your treasury or any official skilled in numbers who might be able to number the sands of the Ganges, who might be able to tell you how many are the grains of sand in that great river?
“That have I not, Venerable lady,” replied the king.
“Or have you, O king, an accountant or store-keeper or arithmetician who could measure the water of the great ocean and say how many drops of water it contains.
“That have I not” replied the king.
“Why not?” returned the nun.
“It is because the great ocean is deep, immeasurable and unfathomable.”
“Even so also is the being of him who has attained to Nibbana…, the being of such a one is deep, immeasurable and unfathomable,” said the nun.
Then the king went to the Lord Buddha and told him what the nun had said to him. Thereupon the Lord Buddha said: “If you, O king, had come to me first with this same question, I would have given you exactly the same reply. O king, the nun is very learned and very wise.”
Now you can understand from this sutta that the person who has attained to Perfect Nibbana-dhatu is not annihilated His nature after death, the Anupadisesa – Nibbana-dhatu is beyond words and cannot be described by positive terms, because the So-called positive terms are words limited to the conditioned and composed things. Only the phenomenal states, of which the world is composed, can be expressed by such terms.
Thus the knowledge of the actuality of Nibbana-dhatu, gathered through the study of the scriptures is called Agama-siddhi, understanding through the study of the scriptures.
Anumeyya-siddhi is the inferential knowledge. Everything has its opposite side. Sickness has its opposite in health; heat has its opposite in cool; darkness has its opposite in light: and likewise Samsara the round of rebirths the phenomenal existence must have its opposite side in Nibbana-dhatu, the only Reality.
A man by means of the knowledge he has gathered by these two ways, the scriptural knowledge and the inferential knowledge, comes to understand that there is actually a perfect peaceful state, a reality, a hope for the suffering mortals. He then follows the path leading to that state, discovered and expounded by the Lord Buddha and consequently attains to realization of the four Great Truths, at which moment he perceives Nibbana-dhatu with his opened mind’s eye, realizes it and experiences it. This, the realization of Nibbana-dhatu, is called Paccakkha-siddhi.
Now you should understand at last that Nibbana-Dhatu, as expounded by the Lord Buddha as the Goal of the Path-goer, is not nothingness but a state to be realized and experienced, an actuality, the only Reality. | <urn:uuid:ab88ab40-12c5-47a2-9749-f16588d0f7c9> | CC-MAIN-2021-21 | https://quangduc.com/p52208a68818/10/nibbana | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00255.warc.gz | en | 0.961731 | 4,229 | 2.734375 | 3 |
Alan Stern & David Grinspoon
Our Galaxy and Beyond
Air Date: June 27, 2018
HEFFNER: I’m Alexander Heffner, your host on The Open Mind. Today we invite our viewers into an enthralling space exploration with two leading scientists. Coauthors of “Chasing New Horizons: Inside the Epic First Mission to Pluto,” an intimate guide to what our guests describe as the greatest exploration project of contemporary space travel. Dr. Alan Stern is principal investigator of the New Horizons mission, which leads NASA’s exploration of the Pluto system. A planetary scientist, space program executive, and aerospace consultant and author, Stern has participated in over two dozen scientific space missions. Dr. David Grinspoon is an astrobiologist, award-winning science communicator, and prize-winning author. In 2013, he was appointed the inaugural Chair of Astrobiology at the Library of Congress. He is a frequent adviser to NASA on space exploration strategy. And he’s on the science teams for several interplanetary spacecraft missions. Welcome to you both, gentlemen. Congratulations on this book.
GRINSPOON: Thanks very much.
HEFFNER: And on the successful mission. It’s really astounding how this machine that you designed has as you just said, explained to me, traveled the depths of the solar system unlike most any other objects. Tell us about the origin of this exploration.
STERN: Well it started back in the 1980s when we started to learn enough about the Pluto system that it started to merit actually sending a spacecraft there to study it up close. And then in the 1990s, we discovered this massive region in the outer solar system that had previously been unknown called the Kuiper belt, which is the third zone of our solar system and which is populated by a whole zoo of small planets like Pluto. So the National Academy of Sciences ranked the priority for this mission to be very high to go study that new class of planets in that new region of the solar system. Our team won a competitive effort and built New Horizons and launched it in 2006, and it’s been flying across the solar system to the very frontier ever since.
HEFFNER: With photographic evidence on Twitter disseminated to all of us here on planet Earth, right?
GRINSPOON: Yeah, that’s one of the wonderful things about 21st century planetary exploration. And in particular, this mission like no other: the encounter with Pluto happened at a time when everybody’s connected in this new way. So previous first planetary encounters, which really goes back to the Voyagers in the 1980s, which was the last time we visited a planet in our solar system for the first time, happened in a different era where you had to, like, pick up the morning paper to see the new picture. This time, for Pluto, they were instantly on the internet, and people were sharing, and it was a sort of worldwide wave of the pictures and the excitement spreading in this connected way that was completely new. And that was one of the sort of surprising and fun things, this instantaneous global reaction when we got to Pluto.
HEFFNER: And we won’t have our foot, or our feet on the ground and I don’t know if that would even be biologically possible at some juncture in the future, maybe…
STERN: We could send people in the future.
HEFFNER: What were the implications in terms of, I like that idea of astrobiology, right? What is relevant to the human experience about Pluto? And what can it tell us in the long run?
STERN: Well there are a couple of things that we learned that I think are directly relevant. One is that this little planet, which is so far away, and so cold, and so incredibly old, actually has many parallels to the Earth. Glaciers that have avalanches that flow just like in Greenland, clouds and hazes in the sky, mountain ranges as big as the Rockies, and we believe we have pretty good evidence that down under the surface there’s a global ocean. And it could harbor life. And in the future we could send missions there to actually look for that. Pretty amazing.
HEFFNER: On the ground level though the conditions would not permit life, right? On the ground level. But beneath?
STERN: Not as we know it. But we don’t know the limits of biology.
GRINSPOON: I mean we were very surprised by the way in which Pluto turns out to be “alive,” in air quotes, in a geological sense. You know, it’s moving and flowing and complex in ways we didn’t understand. So this speaks to our inability to really project the diversity of processes happening elsewhere in the universe until we actually go there and explore. That’s true about geology; it’s probably also true about biology as well. So we can say life is impossible on the surface of Pluto, and, you know, it seems to us like it certainly would be. As Alan says, life as we know it. But I think we gotta be careful when we use the word impossible when we’re talking about other planets, ’cause we keep getting surprised. And certainly now we have discovered, we think, hints that Pluto on the inside has conditions that actually might be able to support life.
HEFFNER: What did you make of the earlier reclassification of Pluto as a dwarf planet? And what do your findings show? Again, correct me if I’m wrong, but you’re largely relying upon photographic evidence that is very textural, granular. Did that… I believe the reclassification predated the more intimate pieces of evidence…
STERN: Absolutely, that was a decade ago, and you know, really the pendulum is swinging back in the other direction. As planetary scientists, we recognize that the small planets have all the attributes of planets that are larger than them. The same way that, you know, a Chihuahua’s still a dog, even though it’s very small. Pluto’s about the size of the continental United States. So it’s not a small object at all. And with all the types of geological features and atmospheric features and five moons, I think most planetary scientists, the real experts in this area, consider the small planets including Pluto to be full-fledged planets, just smaller.
HEFFNER: What, materially and in terms of its own makeup, you know, the chemicals and the properties, had to be constructed in order to withstand all the challenges, some of which were unknown variables?
STERN: Well you know the spacecraft had to be designed to withstand the rigors of launch, the very high acceleration. And then exposure to the space vacuum to pretty warm temperatures when it was leaving the vicinity of the Earth, close to the sun, and then extremely cold temperatures all the way out at Pluto, three plus billion miles away. Inside the spacecraft are all the systems that it requires to go on that journey, so guidance systems, computer systems, thermal control systems, communication back and forth to the Earth. And then on the outside of the spacecraft are thrusters, both to change the course and to change the way that it’s pointed, and seven scientific instruments with telescopes, cameras, spectrometers, and so forth that are used to study the fly-by targets, like Pluto.
HEFFNER: And when you think about the table of elements and what had to be contained within the craft to not dissolve or explode, what was it, what is it made of?
GRINSPOON: Well you know, it’s made out of a lot of the same components that machines on Earth are made of, you know, computers and so forth, silicon and aluminum, but a lot of it had to be sort of hardened and built in a just incredibly reliable way. But there are some unusual components too. For one thing, there’s a plutonium power source, because when you’re operating that far from the sun you can’t use solar power. And the plutonium itself introduced huge challenges for the project, because when you read the book you realize that there was a ticking clock. This thing had to be launched in a hurry because there was a window where Jupiter was in the right place to, you know, sling it on to Pluto, and they couldn’t do it at all if they didn’t launch by this certain window. But the regulatory challenges alone to get the permission to launch plutonium, and as it should be, are very stringent. And there was a real question whether that would happen on time. And then the lab making the plutonium, Los Alamos, shut down for a security breach that had nothing to do with the spacecraft. It was just bad timing. And there was a real question of whether that plutonium would be ready on time. So there are some unusual components. There are also kevlar blankets, the stuff they make bulletproof vests out of, surrounding the spacecraft to protect it in case it hits even a tiny bit of interplanetary debris, you know, a tiny micro-meteorite. At that speed, when you’re moving thirty-thousand miles per hour, the smallest thing, something the size of a grain of rice, can hit like heavy ordnance. So it’s got these protective blankets surrounding it to protect from that…
HEFFNER: You, and you identify the human and chemical challenges to achieving this which you chronicle in this book. And it was quite an impressive and mammoth undertaking. For our viewers who are interested in space exploration, and see it both for its scientific good and as a source of human and intellectual creativity, and lifeblood, who were your allies in this process?
STERN: People rarely see that inside story, and the book does tell that. But it also tells the very human story of young scientists with a dream to go and explore the next planet that’s never been explored, and how they marshalled the scientific community and ultimately the National Academy of Sciences to back that and provide the input to the political system. And then how we competed with other teams that also wanted to fly the mission. Our team won, that battle was described. And so it’s kind of an adventure story. And then we were against this ticking clock that David described to get the spacecraft built very quickly for our very special launch window to use Jupiter to accelerate the spacecraft to high speed. And then we tell the story of the flight mission which included a near-death experience for the spacecraft, barely ten days before reaching Pluto. So all of that is woven together.
HEFFNER: Tell us about that. The nearly fatal event.
STERN: Yeah, and that’s actually the way the book opens. With this event. And it took place exactly ten days before the fly-by that had been planned for fifteen years. We’d had a successful flight all the way across the solar system. Really very few problems and no really big ones. And then suddenly, on a day unfortunately meant for fireworks, July the Fourth, 2015, my cellphone rang and I found out we had lost contact with the spacecraft. That’s something that should never happen. And in the history of spaceflight, usually when it does happen it means something catastrophic like an explosion has taken place. Or we might’ve hit something. As it turned out, our main computer had been overloaded by some of the instructions that had been given. And had to reboot. And so the spacecraft recognized that there was a problem and thought that the main computer might be defective, switched to the backup system, and then after a couple hours the backup system called back to Earth for help and said, here I am, what do you want me to do? And we were in the position of having to recreate all of the fly-by plans that had been put in the spacecraft but which were erased in that reboot. And doing that under a ticking clock; we only had three days to do it. And our mission operations and engineering teams swung into action. And I’ll tell you, Alexander, it was just like in the movie Apollo 13. You know, 24/7, running the procedures on the simulators, people sleeping on desks and on hallway floors. Living out of vending machines. To get this done, because our spacecraft was going to fly by Pluto in ten days whether we had it ready to make the observations or not.
GRINSPOON: And one factor that made this episode particularly harrowing was at that point, when you’re almost at Pluto, you’re three billion miles from Earth. So at the speed of light, it takes four and a half hours to get a signal to the spacecraft. So that’s nine hours round trip. Just to say, hey, are you okay? And then you wait nine hours and the spacecraft goes, yeah, I’m alive, what should I do? And then it takes another nine hours to send the next command. You don’t have very many of these nine-hour blocks left before you’re gonna get to Pluto. And it’s either gonna be fixed or not. So it was very tense, and it really was like a scene out of a movie: the team did what they needed to do and got it on track with hours to spare.
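The turnaround arithmetic Grinspoon describes is easy to sanity-check. A minimal sketch, assuming the round three-billion-mile figure quoted in the conversation:

```python
# Rough check of the one-way light-travel time quoted above.
# Assumption: Pluto roughly 3 billion miles from Earth at the
# time of the 2015 encounter (the figure used in the interview).

SPEED_OF_LIGHT_MI_S = 186_282  # miles per second, in vacuum

def light_delay_hours(distance_miles: float) -> float:
    """One-way signal delay in hours for a given distance in miles."""
    return distance_miles / SPEED_OF_LIGHT_MI_S / 3600

one_way = light_delay_hours(3e9)
round_trip = 2 * one_way
print(f"one-way: {one_way:.1f} h, round trip: {round_trip:.1f} h")
# ≈ 4.5 hours one way, ≈ 9 hours round trip, matching the
# figures in the conversation.
```

The same delay is why the fly-by sequence had to run autonomously: no command loop from Earth can react in real time at that distance.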
HEFFNER: Amazing. Well a lot of troubleshooting went a long way in protecting your mission. The spacecraft is in hibernation mode at the moment. What does that mean…
STERN: We’re on a one way exit of the solar system, making studies of other objects in the Kuiper belt, much smaller objects, the kind that Pluto was made from. And we’re on our way now to intercept one of those with a very close fly-by on New Year’s Eve and New Year’s Day at the end of this year. We are flying towards it in hibernation, so the spacecraft is taking care of itself. We’re not operating it day-to-day from mission control. But we’ll wake it up on June the Fourth, and take it out of hibernation, start preparing the spacecraft for that next fly-by. And that next set of exploration.
HEFFNER: So did we glean anything, before I ask you more specifically about what we’ve learned as it relates to Pluto, did we learn anything about the preceding planets?
GRINSPOON: So, so the big encounter before Pluto was Jupiter. Because the challenge of getting to Pluto, you know, it’s a long way to go, and nine years is actually a pretty fast trip there. And the way it got there so fast was, first of all, it was the fastest launch ever off of Earth of any spacecraft. But you don’t launch at Pluto. You make a beeline to Jupiter. And the reason why is because Jupiter’s huge gravity, if you hit it just right, slingshots it out to Pluto. But that Jupiter encounter was very valuable. It was about a year after launch. So a relatively quick trip to Jupiter and then a long trip out to Pluto. But the Jupiter encounter allowed the team to first of all practice a fly-by before they got to Pluto, so they could make sure that the spacecraft was working and make sure their team had the procedures down and the instruments were working. They knew how to do everything, but it also was a very scientifically interesting opportunity to visit Jupiter quite some time after we had had a spacecraft there. And they made some really cool discoveries. One of the moons of Jupiter, a moon called Io, is very volcanically active, and they actually, sort of fortunately, were able to make a movie of a volcano blasting off on the surface of Io just as New Horizons was whipping through the Jupiter system. And so there were some really cool discoveries at Jupiter. And then there was this long trip to Pluto where they didn’t really go near anything, and it was mostly…
STERN: Although we crossed the orbits of Saturn and Uranus and Neptune in turn, the planets weren’t anywhere near us. They were in other portions of their orbits. So we had this long eight-year journey from Jupiter in 2007 to Pluto in 2015 across this gulf of space, two and a half billion miles. It’s almost impossible to imagine how far that is. Traveling almost a million miles per day in which we didn’t pass anything. We’re just out in the wilderness of the solar system.
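Stern’s cruise-speed figure also checks out against the interview’s own round numbers (2.5 billion miles, eight years):

```python
# Sanity check of the "almost a million miles per day" figure:
# about 2.5 billion miles covered between the 2007 Jupiter flyby
# and the 2015 Pluto flyby, roughly eight years of cruise.
distance_miles = 2.5e9
cruise_days = 8 * 365.25
miles_per_day = distance_miles / cruise_days
print(f"{miles_per_day:,.0f} miles per day")  # ≈ 856,000
```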
HEFFNER: And you say ultimately this spacecraft will descend into, sort of, an inaccessible…
GRINSPOON: It’s moving fast enough so that it will escape the sun’s gravity entirely. And that’s very rare; most of the spacecraft we send around the solar system end up in orbit around the sun or crashed into the planet that they’re investigating. Only four previous spacecraft have been on trajectories where they actually escape the solar system and wander the galaxy: the two Pioneers, Ten and Eleven, and the Voyager spacecraft, which in a way were sort of the predecessors for New Horizons; they made it as far as Neptune but didn’t explore planets beyond that. Now New Horizons, after Pluto and after this encounter with Ultima Thule on New Year’s Eve, is going fast enough that it’s going to keep going. And it will be the fifth human-made craft that will actually leave the solar system and just wander the galaxy forever. It’s going to outlive not just the human race; it’s going to outlive the planet Earth. Because nothing happens to you when you’re out there. You don’t intersect anything, there’s no weather, there’s nothing. So we have these few relics of our civilization that will literally last forever, and this is now one of them.
HEFFNER: And at a certain point, it will not be trackable. I mean, you will not get back-
STERN: Right, so the spacecraft is very healthy now, and it has plenty more exploring to do. But it’s only got a certain amount of fuel and a certain amount of power in the nuclear battery. And we use those up; they’re consumables. And sometime in the late 2030s, we’ll get to a point where there’s not enough power to run the radios to communicate back to the Earth and the main computer. And at that point, the mission will have to end.
HEFFNER: So you’re making the argument to the new NASA administrator about the next phase of this project. I know it’s not complete yet, but I want to give you a chance to reflect on answers to questions that you ask in “Chasing New Horizons.” You know, there were questions about Pluto that were not accessible and that you had hoped to answer. We touched on whether Pluto is internally active. You’ve explored the surface composition. You’ve assessed a comparison or contrast with Neptune and its moons. The ultimate human benefit here back on planet Earth, when you think of the nature of Pluto and its relevance in the solar system, and the next voyage, whether that’s back to Pluto, whether that’s an unmanned craft or a manned craft: what are you hoping will lead you to the next mission, that will sort of be the impetus to continue these journeys?
STERN: Let me first say I think that there are two major contributions that New Horizons has made. And one is this new knowledge. This first mission, not only through the Pluto system, but to the Kuiper Belt and to this whole new class of planet that’s so populous in the outer solar system. And you know, one of the great things about NASA and the US Space Program is that that knowledge is made available to people everywhere on the Earth, for all mankind so to speak. But secondly, this mission, more than any modern space mission, engaged the public in ways that had never been seen before. I think it shows that the public really loves going new places and seeing new things, and loves the sheer joy of exploration. And we hear from school children all the time that they want to go into science and engineering careers. Kids write that they saw the exploration of Pluto and now they want to do something like that in their lifetime. And we need scientists and engineers to power this economy. So I think that’s a very powerful outcome from a scientific space mission. Now, NASA’s currently conducting 90 different space missions, most of them robotic, some of them to the planets, some of them to study the Earth, some of them to study the universe around us. And even the sun. And about half of those are already in flight, and about half are being built to be launched in the next several years. New missions are coming up all the time as old missions finish. And so there are new missions, for example, that are being built now to study the ocean inside of Jupiter’s moon Europa, which may also harbor life. There are new missions like the Parker Solar Probe, just about to be launched this summer, that will skim the surface of the sun for the first time and really touch the solar atmosphere. The James Webb Space Telescope, which will be much more powerful than the Hubble, will be launching in a couple of years to study the universe and the origin of stars and galaxies.
These are just three of the many missions that NASA is now building, but one of the things that we’d like to see is a return to Pluto with an orbiter to study it in much more detail. To bring much more advanced scientific instruments, and to stay, not just to fly by and glimpse it, but to stay and map it and study its satellites, and its interior, and look for that ocean. And really rewrite the textbooks that New Horizons first wrote.
GRINSPOON: ‘Cause you know one thing that’s tantalizing about knowing Pluto the way we do now, we had this close fly-by of one side of Pluto. And it turns out to be incredibly interesting, there’s this heart-shaped, huge geographic feature that’s made of fresh-flowing nitrogen ice, you know basically glaciers on the surface of Pluto surrounded by these towering icy mountains and you know just a lot of like scientifically and aesthetically really astounding terrain. And then there’s the other side of Pluto, which we only photographed from a great distance ‘cause Pluto’s turning on its axis. And you had to fly close by one side. So we have vague pictures of the other side that we got from a telescope from the spacecraft when it was much farther out, before the close encounter. So we need an orbiter ‘cause now we wanna see the rest of it in that kind of detail. Now we, now that we know how interesting it is. And these results are fascinating enough to make us realize we need a more in depth and longer-lived stay at Pluto to really answer the questions raised by New Horizons.
HEFFNER: Of those projects that you described that NASA’s working on, which you described roughly half of them are in flight. You said ballpark ninety or so. Which are the most attended to at the moment? I mean if we think of NASA’s budget and its operation as something that is limited by whatever human capital is pushing forward new projects, NASA’s priorities right now are what?
STERN: Well NASA has a whole series of priorities to study the Earth and the universe…
HEFFNER: But principally as it pertains to planetary exploration, I’m just wondering where the next trip to Pluto configures in relation to some of the other planets we’ve talked about.
GRINSPOON: That’s a hot topic right now because there’s this process called the, you know these decadal surveys where the National Academy of Sciences ranks missions. And when you read “Chasing New Horizons”, you realize that was an important part of the story of how New Horizons got selected in the first place. It has to rise to the top of this ranking, where there’s a lot of people competing for their priority. You know there’s the Mars people, and, you know, everybody wants their mission. So Pluto had to rise to the top of that set of priorities and it did. There’s another one of those processes just beginning now, and so there are some of us and Alan just described why he really believed that you know a Pluto orbiter ought to be high on that list. There are people pushing for other missions. And they, you know there are a lot of good ideas.
HEFFNER: In terms of the humanitarian impact that a mission could potentially bring back home…
GRINSPOON: We need to know how planets work. We have an ethical obligation to know how planets work ‘cause we find ourselves as stewards of a planet here. And not currently doing a really great job of it, but we have ideas of how to do a much better job of it. Without having Earth observations from space, you know which is one thing that NASA does so we can monitor the changing climate and the changing human impact on the planet and just get smarter about the carbon cycle and all these functions that we are finding ourselves affecting. But also without the insights of comparative planetology where we go to other planets and gain new, surprising insights into how planets work in general, including the Earth. We’d be in trouble. We actually I think owe it to future generations to do everything we can to understand in as deep a way as we can how planets work. And so when even when we go to a place like Pluto, which you might not think is very Earth-like. As Alan said, you know there are a lot of analogous features there. And you know it always causes us to rethink how the heat flow works inside of a planet, how mountains form, how atmospheres interact with surfaces. We’re learning about all these things and ultimately it broadens our knowledge of how all planets work including- including this one.
HEFFNER: Alan, final word on this question of prioritization of the planetary missions and which might be in addition to a second Pluto journey most fruitful.
STERN: Yeah. Well I think that to answer your question that one of the most fruitful things that NASA is doing and should be doing is to go back to exploring the planets with humans. We’re so much more capable as explorers than the robots are. Field geology and things like that. And of course NASA is now embarking on a program to send humans back to the moon and on to Mars and possibly other destinations as well and…
HEFFNER: What might be those other destinations?
STERN: Well the … Asteroids for example, in orbit between the Earth and Mars.
HEFFNER: And any other planets besides Mars that could be feasible [PH]?
STERN: In the more distant future I think that it’s possible that we’ll be exploring all the planets with human beings. And you know this is something that’s hugely inspiring, and it’s a great push for the, a technological society like our own. NASA’s the best in the world at it, and it’s NASA’s number one priority.
HEFFNER: Thank you, both. I appreciate your time.
GRINSPOON: Thank you.
STERN: Thanks a lot.
HEFFNER: And thanks to you in the audience. I hope you join us again for a thoughtful excursion into the world of ideas. Until then, keep an open mind. Please visit The Open Mind website at Thirteen.org/OpenMind to view this program online or to access over 1,500 other interviews. And do check us out on Twitter and Facebook @ OpenMindTV for updates on future programming. | <urn:uuid:372ad189-c706-4b8b-a9a5-eba5479193c8> | CC-MAIN-2021-21 | https://www.thirteen.org/openmind/science/our-galaxy-and-beyond/5969/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00087.warc.gz | en | 0.963206 | 5,939 | 2.78125 | 3 |
We are one of the longest-established peptide websites in the UK and have been supplying peptides for over 7 years to businesses, universities and individual researchers worldwide. We specialise in peptides and have a highly respected UK authority on peptides on our staff, available via our Customer Services phone lines and email.
Everything You Need to Know About Peptides
Peptide Bond – What Is It?
A peptide bond is the covalent bond produced between two amino acids. For the bond to form, the carboxyl group of the first amino acid must react with the amino group of a second amino acid. The reaction results in the release of a water molecule.

Because it releases a water molecule, this reaction is commonly called a condensation reaction. The bond formed is a peptide bond, also called a CO-NH bond, and the CO-NH linkage itself is referred to as an amide.
Formation of a Peptide Bond
For the peptide bond to form, the molecules of the two amino acids must be oriented so that the carboxyl group of the first amino acid can react with the amino group of the second. A simple illustration shows how just two amino acids combine through peptide bond formation.

Their combination produces a dipeptide, which is also the smallest peptide (it consists of only two amino acids). Beyond this, many amino acids can be combined in chains to produce larger peptides. The general rule of thumb for naming such chains is:
- Chains of fifty or fewer amino acids are called peptides
- Chains of fifty to a hundred amino acids are called polypeptides
- Any chain of more than a hundred amino acids is usually considered a protein
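The length-based naming convention above can be expressed as a small helper function. This is purely illustrative: the cutoffs (50 and 100 residues) come straight from the text, while the function name and error handling are my own additions.

```python
def classify_chain(num_residues: int) -> str:
    """Name an amino acid chain by its length, using the rough
    rule of thumb above: <=50 residues is a peptide, 51-100 is a
    polypeptide, and anything longer is usually called a protein."""
    if num_residues <= 0:
        raise ValueError("a chain must contain at least one residue")
    if num_residues <= 50:
        return "peptide"
    if num_residues <= 100:
        return "polypeptide"
    return "protein"

print(classify_chain(2))    # a dipeptide-sized chain -> peptide
print(classify_chain(75))   # -> polypeptide
print(classify_chain(300))  # -> protein
```

Note that the boundaries are conventions rather than sharp chemical distinctions, which is why the text hedges with "usually considered".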
You can check our Peptides vs. Proteins page in the peptide glossary for a more in-depth explanation of peptides, polypeptides, and proteins.
A peptide bond can be broken down by hydrolysis (a chemical breakdown process that occurs when a compound reacts with water). Although the reaction is slow, the peptide bonds within peptides, polypeptides, and proteins can all break down when they react with water; for this reason they are known as metastable bonds.

Hydrolysis of a peptide bond by water releases close to 10 kJ/mol of free energy. Peptide bonds absorb light at wavelengths of 190-230 nm.

In the natural world, enzymes in living organisms are capable of both forming and breaking peptide bonds.

Various neurotransmitters, hormones, antitumor agents, and antibiotics are classified as peptides. Given the large number of amino acids some of them contain, a number of them are considered proteins.
The Peptide Bond Structure
Researchers have carried out x-ray diffraction studies of various small peptides to determine the physical characteristics of peptide bonds. The studies have revealed that peptide bonds are planar and rigid.

These characteristics are predominantly a consequence of the amide resonance interaction: the amide nitrogen is able to delocalize its lone pair of electrons onto the carbonyl oxygen. This resonance has a direct effect on the peptide bond structure.

Indeed, the N-C bond of each peptide bond is much shorter than the N-Cα bond, while the C=O bond is longer than typical carbonyl bonds.

The amide hydrogen and the carbonyl oxygen in a peptide are in a trans configuration rather than a cis configuration. Because of the steric interactions that arise in a cis configuration, the trans configuration is considered more energetically favorable.
Peptide Bonds and Polarity
Ordinarily, free rotation would occur around a single bond between an amide nitrogen and a carbonyl carbon. However, the nitrogen in a peptide bond carries a lone pair of electrons.

This lone pair lies adjacent to the carbon-oxygen bond, so a reasonable resonance structure can be drawn in which a double bond connects the carbon and the nitrogen.

In that structure, the nitrogen carries a positive charge and the oxygen a negative one. The resonance thereby prevents rotation about the peptide bond, and the actual structure is a weighted hybrid of the two forms.

The resonance structure is an important aspect of depicting the real electron distribution: a peptide bond has around forty percent double-bond character, which is the reason it is rigid.

The two charges give the peptide bond a permanent dipole. Due to the resonance, the nitrogen carries a charge of +0.28 while the oxygen carries a charge of -0.28.

A peptide bond is, thus, a chemical bond between two molecules, formed when a carboxyl group of one molecule reacts with an amino group of a second molecule. The reaction releases a water molecule (H2O) in what is referred to as a condensation reaction or a dehydration synthesis reaction.
Peptides require proper purification during the synthesis process. Given peptides' complexity, the purification method used must be efficient.

Peptide purification processes are based on the principles of chromatography or crystallization. Crystallization is typically used for other compounds, while chromatography is preferred for the purification of peptides.
Removal of Specific Impurities from the Peptides

The type of research to be performed determines the required purity of the peptides. The impurities present in the peptides must be identified, along with methods to remove them.

Impurities in peptides arise at various stages of peptide synthesis. The purification techniques should therefore be directed at specific impurities to meet the required standards. The purification procedure involves isolating the peptides from other compounds and impurities.
Peptide Purification Method
Peptide purification favors simplicity. The process takes place in two or more steps, where the initial step removes the majority of the impurities. These impurities are mostly generated at the deprotection stage and have a smaller molecular weight than the peptide itself. The second purification step raises the level of purity; here the peptides are further polished using a chromatographic method.
Peptide Purification Processes

The peptide purification process incorporates units and subsystems which include: preparation systems, data collection systems, solvent delivery systems, and fractionation systems. These processes should be carried out in line with current Good Manufacturing Practices (cGMP).
Affinity Chromatography (AC).
This purification process separates the peptides from impurities through the interaction between the peptides and ligands. Specific desorption uses competitive ligands, while non-specific desorption relies on altering the pH. Finally, the pure peptide is collected.
Ion Exchange Chromatography (IEX).
Ion Exchange Chromatography (IEX) is a high-capacity, high-resolution process based on differences in charge among the peptides in the mixture to be purified. The chromatographic medium separates peptides of similar charge: the peptides are loaded onto the column and bind to the medium, and the conditions in the column are then altered to elute pure peptides.
Hydrophobic Interaction Chromatography (HIC).
A chromatographic medium with a hydrophobic surface interacts with the peptides. The interaction is reversible, which allows both concentration and purification of the peptides.

The peptides are loaded onto the column together with a high-ionic-strength buffer. The salt concentration is then lowered to promote elution; the elution can be carried out with a decreasing ammonium sulfate gradient. Finally, the pure peptides are collected.
Gel Filtration (GF).
The gel filtration process separates on the basis of the molecular sizes of the peptides and the impurities present. It is effective for small samples of peptides and gives good resolution.
Reversed-Phase Chromatography (RPC).
Reversed-phase chromatography makes use of the reversible interaction of peptides with the hydrophobic surface of the chromatographic medium. The samples are loaded onto the column before the elution step, and organic solvents are used during elution; this stage requires a high concentration of the solvents, and the eluted molecules are collected in their pure forms. The RPC technique is applied during the polishing and mapping of peptides. However, the solvents used during the process can alter the structure of the peptides, which hinders recovery.
Compliance with Good Manufacturing Practices.

Peptide purification procedures should remain in line with GMP requirements. Compliance affects the quality and purity of the final peptide. According to GMP, the chemical and analytical methods applied must be well documented. Proper planning and testing should be adopted to ensure that the processes are under control.

The purification phase is among the last steps in peptide synthesis, and it is directly tied to the quality of the output. GMP therefore places rigorous requirements on these processes; for instance, limits for the critical parameters must be established and observed during the purification process.

The growth of the research industry requires pure peptides. The peptide purification process is crucial, so the applicable regulations must be followed. With highly purified peptides, research results will be reliable; compliance with GMP is therefore key to high-quality, pure peptides.
Lyophilization is a freeze-drying process, and peptides are usually supplied in lyophilized (powdered) form. Different lyophilization techniques can produce more granular or compressed as well as fluffy (bulky) lyophilized peptide.

Before using lyophilized peptides in a laboratory, the peptide has to be reconstituted; that is, the lyophilized peptide must be dissolved in a liquid solvent. However, no single solvent can solubilize all peptides while also maintaining the peptides' integrity and their compatibility with biological assays. In most situations, sterile distilled water or normal bacteriostatic water is the first choice, but these solvents do not dissolve all peptides. Researchers are often forced to use a trial-and-error approach, attempting to reconstitute the peptide with progressively stronger solvents.

A peptide's polarity is the main factor determining its solubility. Acidic peptides can be reconstituted in basic solutions, while basic peptides can be reconstituted in acidic solutions. Hydrophobic peptides and neutral peptides, which contain large hydrophobic and uncharged polar amino acids respectively, require organic solvents for reconstitution. Suitable organic solvents include propanol, acetic acid, DMSO, and isopropanol; these should, however, be used in small amounts.

After using organic solvents, the solution should be diluted with bacteriostatic water or sterile water. Using sodium chloride solution is strongly discouraged, as it causes precipitates to form with acetate salts. In addition, peptides with free cysteine or methionine must not be reconstituted using DMSO, because side-chain oxidation occurs, making the peptide unusable for laboratory experimentation.
Peptide Reconstitution Guidelines
As a first rule, when dissolving peptides it is advisable to use solvents that are easy to remove by lyophilization. This is a precaution in case the first solvent used proves inadequate: the solvent can then be removed by the lyophilization process. Researchers are advised to first try dissolving the peptide in normal bacteriostatic water, sterile distilled water, or a dilute sterile acetic acid (0.1%) solution. As a general guideline, it is also wise to test a small amount of peptide for solubility before attempting to dissolve the entire portion.

One important point to consider is that initial use of dilute acetic acid or sterile water allows the researcher to lyophilize the peptide after a failed dissolution without producing unwanted residue. In such cases, once the ineffective solvent has been removed, the researcher can attempt to dissolve the peptide in a stronger solvent.

The researcher should attempt to dissolve the peptide in a sterile solvent to produce a stock solution at a higher concentration than necessary for the assay. If the assay buffer is used first and fails to dissolve all of the peptide, it will be hard to recover the peptide uncontaminated. Instead, the stock solution can be diluted with the assay buffer afterwards.

Sonication is used in laboratories to increase the speed of peptide dissolution when the peptide persists as a whitish precipitate visible in the solution. Sonication does not alter the solubility of the peptide in a solvent; it simply helps break down chunks of solid peptide by briskly agitating the mixture. After sonication, the researcher should examine the solution to see whether it has gelled, is cloudy, or has any surface scum. In such a situation, the peptide has not dissolved but remains suspended in the solution, and a stronger solvent will be needed.
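The trial-and-error strategy described above — start with the mildest, most easily removed solvent and escalate only on failure — can be sketched as a simple loop. This is a minimal illustration, not a lab protocol: the solvent order follows the text, while the `dissolves` callback is a stand-in for the actual bench test on a small portion of peptide.

```python
# Escalating-solvent sketch: mild, easily lyophilized-away solvents
# first; stronger organic solvents only if dissolution fails.
SOLVENT_ORDER = [
    "sterile distilled water",
    "0.1% acetic acid",
    "DMSO",            # unusable for peptides with free Cys/Met
    "acetic acid",
    "isopropanol",
    "propanol",
]

def choose_solvent(dissolves, has_free_cys_or_met=False):
    """Return the first solvent in which the peptide dissolves.

    `dissolves` is a callable representing the solubility test on a
    small sample; DMSO is skipped for Cys/Met-containing peptides
    because side-chain oxidation would ruin them.
    """
    for solvent in SOLVENT_ORDER:
        if solvent == "DMSO" and has_free_cys_or_met:
            continue
        if dissolves(solvent):
            return solvent
    raise RuntimeError("no suitable solvent found")
```

For example, a peptide with free methionine that would only dissolve in DMSO or acetic acid would be assigned acetic acid, since DMSO is excluded.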
Practical laboratory execution
Although some peptides require a stronger solvent to dissolve fully, normal bacteriostatic water or sterile distilled water is effective and is the most commonly used solvent for reconstituting a peptide. As mentioned, sodium chloride solution is strongly discouraged, since it tends to cause precipitation with acetate salts. A simple illustration of a typical peptide reconstitution in a laboratory setting follows; it is not specific to any single peptide.

* It is important to allow a peptide to warm to room temperature before taking it out of its packaging.

You may also choose to pass your peptide mixture through a 0.2 micrometre filter to guard against bacterial contamination.
Using sterile water as a solvent
- Step 1 – Remove the plastic cap from the peptide vial, exposing its rubber stopper.
- Step 2 – Remove the plastic cap from the sterile water vial, exposing its rubber stopper.
- Step 3 – Swab both rubber stoppers with alcohol to prevent bacterial contamination.
- Step 4 – Draw 2 ml of water from the sterile water vial.
- Step 5 – Slowly inject the 2 ml of sterile water into the peptide vial.
- Step 6 – Gently swirl the solution until the peptide dissolves. Avoid shaking the vial.
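As a worked example of the arithmetic behind these steps: the steps specify a 2 ml volume of sterile water, but the vial's peptide mass is not given, so the 10 mg figure below is purely hypothetical.

```python
def stock_concentration_mg_per_ml(peptide_mg: float, solvent_ml: float) -> float:
    """Concentration of the reconstituted stock solution in mg/ml."""
    if solvent_ml <= 0:
        raise ValueError("solvent volume must be positive")
    return peptide_mg / solvent_ml

# Hypothetical example: a 10 mg vial dissolved in the 2 ml of
# sterile water drawn in Step 4 gives a 5 mg/ml stock.
print(stock_concentration_mg_per_ml(10, 2))  # -> 5.0
```

A stock made this way can then be diluted down to the working concentration with assay buffer, as recommended earlier.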
Pharmaceutical-grade peptides can be used for a variety of applications in the biotechnology industry. The availability of such peptides has made it possible for researchers and biotechnologists to conduct molecular biology and pharmaceutical development on an expedited basis. Several companies provide pharmaceutical-grade peptide synthesis services to meet the needs of their clients.

A peptide is obtained from a molecule that contains a peptide linkage or a residue that binds to a peptide. The biological function of a peptide can be realised through pharmaceutical-grade peptide synthesis, and biochemical processes can be studied through the use of synthesized peptides.
Pharmaceutical Peptide Synthesis
The primary purpose of peptide synthesis is the manufacture of antimicrobial agents, antibiotics, insecticides, enzymes, hormones and vitamins. The synthesis of a peptide involves several steps, including peptide isolation, gelation, purification and conversion to a useful form.

Many types of peptide are available on the market. They are categorized as follows: peptide derivatives, non-peptide, hydrolyzed, hydrophilic, and polar. These categories cover the most frequently used peptides and the processes for making them.
Non-peptide peptide derivatives
Non-peptide peptide derivatives consist of C-terminal fragments (CTFs) of proteins that have been treated chemically to remove adverse effects. They are derived from the protein sequence and have a long half-life. Non-peptide peptide derivatives are also referred to as small-molecule compounds. Some of these peptide derivatives are derived from the C-terminal fragments of human gene products that are used as genetic markers and transcription activators.

Porphyrins are produced by hydrolysis and then converted to peptide by peptidase. Porphyrin-like peptide is obtained through a series of chemical processes.
Disclaimer: All products listed on this website and provided through Pharma Labs Global are intended for medical research purposes only. Pharma Lab Global does not encourage or promote the use of any of these products in a personal capacity (i.e. human consumption), nor are the products intended to be used as a drug, stimulant or for use in any food products.
Pediatric Urinary Tract Infection and Reflux
Am Fam Physician. 1999 Mar 15;59(6):1472-1478.
See related patient information handout on urinary tract infections in children, written by the author of this article.
Urinary tract infections in children are sometimes associated with vesicoureteral reflux, which can lead to renal scarring if it remains unrecognized. Since the risk of renal scarring is greatest in infants, any child who presents with a urinary tract infection prior to toilet training should be evaluated for the presence of reflux. Children who may be lost to follow-up and those who have recurrent urinary tract infections should also be evaluated. The preferred method for evaluation of urinary reflux is a voiding cystourethrogram. Documented reflux is initially treated with prophylactic antibiotics. Patients who have breakthrough infections on prophylaxis, develop new renal scarring, have high-grade reflux or cannot comply with long-term antibiotic prophylaxis should be considered for surgical correction. The preferred method of surgery is ureteral reimplantation. A newer method involves injection of the bladder trigone with collagen.
Urinary tract infections in children are a significant source of morbidity, particularly when associated with anatomic abnormalities.1 Vesicoureteral reflux is the most commonly associated abnormality, and reflux nephropathy is an important cause of end-stage renal disease in children and adolescents.2 However, when reflux is recognized early and managed appropriately, renal insufficiency is rare. Some children who present with an apparently uncomplicated first urinary tract infection turn out to have significant reflux. Subclinical infections can sometimes lead to severe bilateral renal scarring. Therefore, even a single documented urinary tract infection in a child must be taken seriously.
Children with urinary tract infections do not always present with symptoms such as frequency, dysuria or flank pain. Infants may present with fever and irritability or other subtle symptoms, such as lethargy. Older children may also have nonspecific symptoms, such as abdominal pain or unexplained fever. A urinalysis should be obtained in a child with unexplained fever or symptoms that suggest a urinary tract infection. In young children with urinary tract infections, urinalysis may be negative in 20 percent of cases. Baraff and colleagues3 recommend a urine culture for all male patients under six months of age and all female patients under two years of age who have a temperature of 39°C (102.2°F) or higher. Because a documented infection may warrant a thorough radiographic evaluation, empiric treatment on the basis of symptoms or urinalysis alone should be avoided.
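The age- and fever-based culture rule quoted above reduces to a simple predicate. The following is purely an illustrative encoding of the thresholds as stated in the text (boys under 6 months, girls under 2 years, temperature of 39°C or higher), not clinical guidance; all names are hypothetical.

```python
def culture_recommended(sex: str, age_months: float, temp_c: float) -> bool:
    """Urine-culture rule as quoted in the text: culture all male
    patients under 6 months and all female patients under 2 years
    whose temperature is 39 C (102.2 F) or higher."""
    if temp_c < 39.0:
        return False
    if sex == "male":
        return age_months < 6
    if sex == "female":
        return age_months < 24
    raise ValueError("sex must be 'male' or 'female'")

print(culture_recommended("male", 4, 39.5))     # True
print(culture_recommended("female", 18, 39.2))  # True
print(culture_recommended("male", 9, 40.0))     # False under this rule
```

Note the rule is a screening threshold only; as the text emphasizes, a documented infection still requires culture confirmation before radiographic evaluation is pursued.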
While the most reliable method of obtaining urine for a culture is suprapubic aspiration, this procedure often causes anxiety in the child, the parent and the physician. Urine specimens may therefore be obtained by placing a plastic bag over the perineum of infants, and by obtaining a voided specimen in older children. Because “bagged” and voided specimens may be contaminated, results must be interpreted in conjunction with the urinalysis and the clinical setting. Pyuria and/or classic symptoms support the diagnosis of a urinary tract infection, whereas a positive culture in a child with a normal urinalysis and/or atypical symptoms may represent contamination. In patients whose diagnosis is complicated, and when the uncertainty of contamination must be avoided, a catheterized or suprapubic specimen can be obtained. Because catheterization may introduce bacteria into the bladder, a single dose of oral antibiotic should be given to prevent iatrogenic infection.
While the presence or absence of a true urinary tract infection is occasionally difficult to determine, the distinction between cystitis and pyelonephritis is even more problematic. No clinical findings (such as fever or flank pain) and no laboratory studies (such as erythrocyte sedimentation rate or white blood cell count) are accurate in distinguishing pyelonephritis from cystitis.4 Fortunately, this distinction is rarely crucial. The management of the child is dictated by the clinical severity of the illness, rather than by the specific site of infection in the urinary tract. Furthermore, since the risk of reflux is similar in all patients with a urinary tract infection, the distinction between cystitis and pyelonephritis is not important in guiding the need for radiographic evaluation.
In rare circumstances, when distinguishing the diagnosis of pyelonephritis from some other infection is important, a technetium-99m dimercaptosuccinic acid (DMSA) renal flow scan is the best study to obtain.5 Patients with a normal scan during an acute infection do not have pyelonephritis and will not develop scarring. However, an area of photopenia on a DMSA scan identifies a region of pyelonephritis that is at risk for eventual scar formation (Figure 1). Because this test is invasive, expensive, exposes the child to radiation and is unlikely to alter the management of the infection, it is not used in the routine evaluation of children with urinary tract infections.
The rightsholder did not grant rights to reproduce this item in electronic media. For the missing item, see the original print version of this publication.
The most significant anomaly associated with urinary tract infections in children is vesicoureteral reflux, which occurs in 30 to 50 percent of these patients.6 Despite the high rate of association, no randomized prospective studies demonstrate the benefit of screening these patients for anomalies.7 However, there is no doubt that vesicoureteral reflux is associated with renal scarring, in part because it allows lower tract infections to ascend, resulting in pyelonephritis.5
Since antibiotic prophylaxis can prevent recurrent urinary tract infections, it seems prudent to screen children with urinary tract infections who are at risk for renal scarring, such as those with recurrent infections. Since children are at greatest risk for renal scarring in the first few years of life, reflux screening is recommended for any child who has a single urinary tract infection before toilet training has begun. Older children who receive consistent medical care (in whom a pattern of recurrent urinary tract infections would not be missed) may not need to be screened following a single infection. An alternative to more invasive screening might be renal ultrasonography. Although ultrasonography is a poor screening test for reflux, missed reflux may be of little concern in an older child with a single infection and normal results on renal ultrasound examination.
When a child is screened for reflux, the appropriate test to obtain is a cystogram. A cystogram performed by an experienced pediatric radiologist is well-tolerated by most children. Although renal ultrasound examinations are less invasive, they are normal in 50 to 75 percent of patients with reflux and, therefore, are ineffective for screening.8 A DMSA renal scan is the best study for detecting renal scarring and might therefore identify patients at particular risk for reflux. Unfortunately, a renal scan will not detect reflux in children who have not yet developed scarring, and these are the very ones who might benefit most from antibiotic prophylaxis.
Obtaining a cystogram in a patient with a urinary tract infection should be delayed for at least 48 hours after initiating antibiotic therapy so as not to induce bacteremia by instrumenting the urinary tract. It is not necessary to delay the cystogram beyond this point. Concern that obtaining a cystogram too soon after a urinary tract infection may result in a false-positive study is ill-founded. Even children who have reflux only when they have cystitis have a significant problem, since reflux causes scarring by allowing cystitis to ascend.5
A renal ultrasound examination may also be obtained to rule out obstructive uropathy in children. An ultrasound examination can detect gross renal scarring or marked asymmetry of renal size in patients with vesicoureteral reflux. A DMSA renal scan is the best method for detecting renal scarring.9
Two types of cystogram are available. A standard voiding cystourethrogram (VCUG) is obtained by instilling radiopaque contrast medium into the bladder and imaging the bladder and renal fossae during filling and voiding (Figure 2). The severity of vesicoureteral reflux is graded on a scale of 1 to 5, depending on the degree of distention of the collecting system.
A nuclear cystogram can be obtained by instilling a radionuclide agent into the bladder and imaging with a gamma camera. Nuclear cystography is at least as sensitive for the detection of reflux as a standard VCUG and exposes the child to less radiation.10 However, grading of reflux is less precise, and associated bladder abnormalities cannot be detected with nuclear cystography. Therefore, a VCUG is preferred as the initial study in the evaluation of a child with a urinary tract infection. Nuclear cystography is used in follow-up of patients with vesicoureteral reflux who are on an observation protocol. Vesicoureteral reflux is present in one third of siblings of patients with reflux, and in two thirds of the children of patients with reflux.11,12 Nuclear cystography may be employed for screening these children as well.
Because urinary tract infections are usually caused by gram-negative rods, particularly Escherichia coli, any oral antibiotic with good gram-negative coverage is a reasonable choice for treatment. Trimethoprim/sulfamethoxazole (Bactrim, Septra) offers good coverage and is inexpensive. It is given in suspension form in a dosage of 4 mg trimethoprim per kg twice daily. Other commonly used antibiotics include amoxicillin, in a dosage of 10 mg per kg three times daily, and nitrofurantoin (Furadantin, Macrodantin, Macrobid), in a dosage of 2.5 mg per kg three times daily. Cephalosporins may be indicated if infection with a more resistant organism is suspected. Ciprofloxacin (Cipro) is not approved for use in children. However, carbenicillin is available in an oral form for treating uncomplicated cystitis that is caused by susceptible strains of Pseudomonas.
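The weight-based dose arithmetic above can be sketched in a few lines of code. This is only an illustration of the calculation described in the text (the function name and return structure are invented for the example; the mg-per-kg values and dosing frequencies are taken directly from the paragraph above):

```python
def daily_doses(weight_kg):
    """Per-dose amounts (mg) for the weight-based oral regimens
    described in the text. Structure and naming are illustrative."""
    return {
        # trimethoprim/sulfamethoxazole: 4 mg trimethoprim per kg, twice daily
        "trimethoprim/sulfamethoxazole": {
            "mg_per_dose": 4 * weight_kg, "doses_per_day": 2},
        # amoxicillin: 10 mg per kg, three times daily
        "amoxicillin": {
            "mg_per_dose": 10 * weight_kg, "doses_per_day": 3},
        # nitrofurantoin: 2.5 mg per kg, three times daily
        "nitrofurantoin": {
            "mg_per_dose": 2.5 * weight_kg, "doses_per_day": 3},
    }

# Example: a hypothetical 12-kg toddler
for drug, d in daily_doses(12).items():
    print(f"{drug}: {d['mg_per_dose']} mg, {d['doses_per_day']} doses/day")
```

For a 12-kg child this yields 48 mg trimethoprim twice daily, 120 mg amoxicillin three times daily, and 30 mg nitrofurantoin three times daily.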
Children who require hospitalization should be placed on broad-spectrum intravenous antibiotics pending the results of the urine culture. Because most community-acquired urinary tract infections are caused by gram-negative bacilli, coverage should include an aminoglycoside, a cephalosporin or a broad-spectrum penicillin derivative. Coverage may need to be broader in children who have recently been hospitalized or who have had recent instrumentation or recurrent infections, since they may be infected with gram-positive organisms such as Enterococcus or coagulase-negative Staphylococcus. A urine gram-stain may be helpful in the initial selection of antibiotics. An algorithm showing the evaluation and management of a child with a urinary tract infection is presented in Figure 3.
Management of Vesicoureteral Reflux
Reflux resolves spontaneously in some patients. It is more likely to resolve if it is low-grade, unilateral and not associated with anomalies. The grade of reflux is the most important factor. Over several years of observation, reflux resolves in approximately 80 percent of patients with grade 1 or grade 2 reflux, 50 percent of patients with grade 3 reflux and 25 percent of patients with grade 4 reflux.13 Because of this tendency to resolve, most patients with reflux are initially treated on an observation protocol.
The current management of reflux is based on direct and indirect scientific data, as well as a traditional standard of care. With this in mind, the American Urological Association recently developed clinical practice guidelines for the management of reflux.14 Because renal scarring usually occurs only with the reflux of infected urine, the prevention of urinary tract infections in children with reflux is essential, and the mainstay of medical management is antibiotic prophylaxis. The most frequently used agents are nitrofurantoin, in a dosage of 1 to 2 mg per kg once daily, and trimethoprim/sulfamethoxazole, in a dosage of 2 to 4 mg trimethoprim per kg once daily.
In patients under observation, periodic urine cultures should be obtained (approximately every three months) to detect asymptomatic bacteriuria. Follow-up cystograms are obtained annually, and prophylaxis is discontinued when reflux resolves. Upper tract studies are obtained periodically as dictated by the patient's clinical course. Bladder instability and constipation can predispose a child to urinary tract infections and exacerbate reflux.15–20 The presence of these symptoms should be actively determined and promptly treated.
Any patient under observation who develops a breakthrough urinary tract infection or new renal scarring should undergo surgical correction of reflux. Surgery is also appropriate in patients who cannot comply with close follow-up and long-term antibiotic prophylaxis. This includes patients who wish to avoid repeat cystograms and office visits. Patients with high-grade reflux may be considered for immediate surgical intervention.
The standard operation for vesicoureteral reflux is ureteral reimplantation, which is successful in 95 percent of cases.21 Although antireflux surgery effectively reduces the risk of pyelonephritis, approximately one third of the children will continue to have cystitis.21
The subtrigonal injection of collagen is a relatively new alternative treatment for vesicoureteral reflux. This technique is performed as an outpatient cystoscopic procedure under a brief general anesthetic. It involves significantly less morbidity than the standard operation but is successful in only 65 to 70 percent of cases.22,23 The long-term efficacy of collagen injection has not yet been determined.
Recurrent Urinary Tract Infections
Some children without a discernible anatomic anomaly develop recurrent urinary tract infections. Many of these children present after toilet training, when normal spontaneous voiding is prevented by social constraints. The risk of renal scarring in these patients is low, but not absent. Some of these children have symptoms of bladder instability, such as urge incontinence or squatting behavior, in the absence of an infection. Bladder instability may be improved by placing the child on a timed voiding schedule of once every three hours. If behavioral approaches fail, voiding symptoms often respond to anticholinergic agents such as oxybutynin (Ditropan), in a dosage of 0.15 mg per kg three times daily. Even when the symptoms are subtle and not in and of themselves troublesome, the recurrent infections can be prevented or reduced in frequency by employing anticholinergic therapy in conjunction with antibiotic prophylaxis. Constipation can also predispose to bladder instability and recurrent urinary tract infections and should therefore be aggressively managed.19,20
Even an anatomically and functionally normal urinary tract may be predisposed to recurrent infections. Certain host factors may play a role, such as antigen expression on the bladder epithelium.24 However, there is no specific therapy for these host factors, so children with frequent infections are managed with antibiotic prophylaxis administered in the same fashion as in patients with vesicoureteral reflux. However, in the absence of reflux, upper tract monitoring and routine urine cultures are rarely indicated. Treatment of asymptomatic bacteriuria in this setting is unnecessary.
The Foreskin and Urinary Tract Infections
A resurgence of sentiment favoring routine neonatal circumcision has occurred in the last decade because of recently described associations between an intact foreskin and urinary tract infections in infants. This association was best illustrated in a series of systematic studies by Wiswell and associates25–28 at U.S. Army hospitals. In several large epidemiologic studies, the authors found that the incidence of significant urinary tract infections in uncircumcised males less than six months of age was 1 to 4 percent. The incidence in circumcised males was only 0.1 to 0.2 percent.
Because of the data demonstrating an increase in the rate of infection, routine circumcision has been advocated by some authors. They point out the significant mortality and renal scarring associated with urinary tract infections occurring in early infancy. However, circumcision is a permanent solution to a problem that affects males only during the first six months of life. There may be alternative, nonsurgical means of preventing these infections, and the question of whether all boys should be circumcised to prevent infection in 1 to 4 percent remains debatable. It is also unclear whether circumcision would augment the benefit of antibiotic prophylaxis in boys with reflux or other urologic anomalies.
Figure 1 reprinted with permission from Rushton HG, Majd M. Dimercaptosuccinic acid renal scintigraphy for the evaluation of pyelonephritis and scarring: a review of experimental and clinical studies. J Urol 1992;148(5 Pt 2):1726–32.
REFERENCES
1. Ross JH. The evaluation and management of vesicoureteral reflux. Semin Nephrol. 1994;14:523–30.
2. Bailey RR. Commentary: the management of grades I and II (nondilating) vesicoureteral reflux. J Urol. 1992;148(5 Pt 2):1693–5.
3. Baraff LJ, Bass JW, Fleisher GR, Klein JO, McCracken GH Jr, Powell KR, et al. Practice guideline for the management of infants and children 0 to 36 months of age with fever without source. Agency for Health Care Policy and Research. Ann Emerg Med. 1993;22:1198–210 [Published erratum appears in Ann Emerg Med. 1993;22:1490]
4. Majd M, Rushton HG, Jantausch B, Wiedermann BL. Relationship among vesicoureteral reflux, P-fimbriated Escherichia coli, and acute pyelonephritis in children with febrile urinary tract infection. J Pediatr. 1991;119:578–85.
5. Rushton HG, Majd M, Jantausch B, Wiedermann BL, Belman AB. Renal scarring following reflux and nonreflux pyelonephritis in children: evaluation with 99mtechnetium-dimercaptosuccinic acid scintigraphy. J Urol. 1992;147:1327–32 [Published erratum appears in J Urol. 1992;148:898]
6. Smellie J, Edwards D, Hunter N, Normand IC, Prescod N. Vesico-ureteric reflux and renal scarring. Kidney Int Suppl. 1975;(Suppl 4):S65–72.
7. Dick PT, Feldman W. Routine diagnostic imaging for childhood urinary tract infections: a systematic overview. J Pediatr. 1996;128:15–22.
8. Blane CE, DiPietro MA, Zerin JM, Sedman AB, Bloom DA. Renal sonography is not a reliable screening examination for vesicoureteral reflux. J Urol. 1993;150(2 Pt 2):752–5.
9. Rushton HG, Majd M. Dimercaptosuccinic acid renal scintigraphy for the evaluation of pyelonephritis and scarring: a review of experimental and clinical studies. J Urol. 1992;148(5 Pt 2):1726–32.
10. Lebowitz RL. The detection and characterization of vesicoureteral reflux in the child. J Urol. 1992;148(5 Pt 2):1640–2.
11. Noe HN. The long-term results of prospective sibling reflux screening. J Urol. 1992;148(5 Pt 2):1739–42.
12. Noe HN, Wyatt RJ, Peeden JN Jr, Rivas ML. The transmission of vesicoureteral reflux from parent to child. J Urol. 1992;148:1869–71.
13. Duckett JW. Vesicoureteral reflux: a “conservative” analysis. Am J Kidney Dis. 1983;3:139–44.
14. Elder JS, Peters CA, Arant BS Jr, Ewalt DH, Hawtrey CE, Hurwitz RS, et al. Pediatric Vesicoureteral Reflux Guidelines Panel summary report on the management of primary vesicoureteral reflux in children. J Urol. 1997;157:1846–51.
15. Koff SA, Murtagh DS. The uninhibited bladder in children: effect of treatment on recurrence of urinary infection and on vesicoureteral reflux resolution. J Urol. 1983;130:1138–41.
16. Homsy YL, Nsouli I, Hamburger B, Laberge I, Schick E. Effects of oxybutynin on vesicoureteral reflux in children. J Urol. 1985;134:1168–71.
17. Seruca H. Vesicoureteral reflux and voiding dysfunction: a prospective study. J Urol. 1989;142(2 Pt 2):494–8.
18. Scholtmeijer RJ, Nijman RJ. Vesicoureteric reflux and videourodynamic studies: results of a prospective study after three years of follow-up. Urology. 1994;43:714–8.
19. O'Regan S, Yazbeck S, Schick E. Constipation, bladder instability, urinary tract infection syndrome. Clin Nephrol. 1985;23:152–4.
20. Loening-Baucke V. Urinary incontinence and urinary tract infection and their resolution with treatment of chronic constipation of childhood. Pediatrics. 1997;100(2 Pt 1):228–32.
21. Weiss R, Duckett J, Spitzer A. Results of a randomized clinical trial of medical versus surgical management of infants and children with grades III and IV primary vesicoureteral reflux (United States). The International Reflux Study in Children. J Urol. 1992;148(5 Pt 2):1667–73.
22. Kalloo NB, Gearhart JP, Jeffs RD. Endoscopic treatment of vesicoureteral reflux with subureteric injection of glutaraldehyde cross-linked bovine collagen [Abstract]. American Urological Association 89th annual meeting. San Francisco, California, May 14–19, 1994. J Urol 1994;151(5 Suppl):361A.
23. Frey P, Lutz N, Jenny P, Herzog B. Endoscopic subureteral collagen injection for the treatment of vesicoureteral reflux in infants and children. J Urol. 1995;154(2 Pt 2):804–7.
24. Sheinfeld J, Cordon-Cardo C, Fair WR, Wartinger DD, Rabinowitz R. Association of type 1 blood group antigens with urinary tract infections in children with genitourinary structural abnormalities. J Urol. 1990;144(2 Pt 2):469–73.
25. Wiswell TE, Smith FR, Bass JW. Decreased incidence of urinary tract infections in circumcised male infants. Pediatrics. 1985;75:901–3.
26. Wiswell TE, Geschke DW. Risks from circumcision during the first month of life compared with those for uncircumcised boys. Pediatrics. 1989;83:1011–5.
27. Wiswell TE, Roscelli JD. Corroborative evidence for the decreased incidence of urinary tract infections in circumcised male infants. Pediatrics. 1986;78:96–9.
28. Wiswell TE, Hachey WE. Urinary tract infections and the uncircumcised state: an update. Clin Pediatr [Phila]. 1993;32:130–4.
Copyright © 1999 by the American Academy of Family Physicians.
Organization development
Organization development (OD) is a relatively recent term for a conceptual, organization-wide effort to increase an organization's effectiveness and viability. Warren Bennis has referred to OD as a response to change, a complex educational strategy intended to change the beliefs, attitudes, values, and structure of an organization so that it can better adapt to new technologies, markets, challenges, and the dizzying rate of change itself. OD is neither "anything done to better an organization" nor is it "the training function of the organization"; it is a particular kind of change process designed to bring about a particular kind of end result. OD can involve interventions in the organization's "processes," using behavioral science knowledge for organizational reflection, system improvement, planning and self-analysis.
Kurt Lewin (1898–1947) is widely recognized as the founding father of OD, although he died before the concept became current in the mid-1950s. From Lewin came the ideas of group dynamics and action research which underpin the basic OD process as well as providing its collaborative consultant/client ethos. Institutionally, Lewin founded the "Research Center for Group Dynamics" (RCGD) at MIT, which moved to Michigan after his death. RCGD colleagues were among those who founded the National Training Laboratories (NTL), from which the T-group and group-based OD emerged. In the UK, the Tavistock Institute of Human Relations was important in developing systems theories. The joint TIHR journal Human Relations was an early journal in the field. The Journal of Applied Behavioral Sciences is now the leading journal in the field.
- 1 Overview
- 2 Improved organizational performance
- 3 Understanding organizations
- 4 Action research
- 5 Important figures
- 6 OD interventions
- 7 See also
- 8 Further reading
- 9 References
The core of OD is organization - a group working toward one or more shared goal(s), and development - the process an organization uses to become more effective over time at achieving its goals.
OD is a long-range effort to improve an organization's problem-solving and renewal processes, particularly through more effective and collaborative management of organizational culture, often with the assistance of a change agent or catalyst and the use of the theory and technology of applied behavioral science. Although behavioral science has provided the basic foundation for the study and practice of organizational development, new and emerging fields of study have made their presence known. Experts in systems thinking, leadership studies, organizational leadership, and organizational learning (to name a few), whose perspective is not steeped in just the behavioral sciences but in a much more multidisciplinary and interdisciplinary approach, have emerged as OD catalysts. These emergent expert perspectives see the organization as the holistic interplay of a number of systems that affect the process and outputs of the entire organization. More importantly, the term change agent or catalyst is synonymous with the notion of a leader who is engaged in leadership - a transformative or effectiveness process - as opposed to management, a more incremental or efficiency-based change methodology.
Organization development is a "contractual relationship between a change agent and a sponsoring organization entered into for the purpose of using applied behavioral science and/or other organizational change perspectives in a systems context to improve organizational performance and the organization's capacity to improve".
Organization development is an ongoing, systematic process of implementing effective organizational change. Organization development is known as both a field of applied behavioral science focused on understanding and managing organizational change and as a field of scientific study and inquiry. It is interdisciplinary in nature and draws on sociology, psychology, and theories of motivation, learning, and personality. Organization development is a growing field that is responsive to many new approaches including Positive Adult Development.
Although neither the sponsoring organization nor the change agent can be sure at the outset of the exact nature of the problem or problems to be dealt with or how long the change agents' help will be needed, it is essential that some tentative agreement on these matters be reached. The sponsoring organization needs to know generally what the change agent's preliminary plan is, what its own commitments are in relation to personal commitments and responsibility for the program, and what the change agent's fee will be. The change agent must assure himself that the organization's, and particularly the top executives', commitment to change is strong enough to support the kind of self-analysis and personal involvement requisite to success of the program. Recognizing the uncertainties lying ahead on both sides, a termination agreement permitting either side to withdraw at any time is usually included.
A change agent in the sense used here is not a technical expert skilled in such functional areas as accounting, production, or finance. S/he is a behavioral scientist who knows how to get people in an organization involved in solving their own problems. His/her main strength is a comprehensive knowledge of human behavior, supported by a number of intervention techniques (to be discussed later). The change agent can be either external or internal to the organization. An internal change agent is usually a staff person who has expertise in the behavioral sciences and in the intervention technology of OD. Beckhard reports several cases in which line people have been trained in OD and have returned to their organizations to engage in successful change assignments. In the natural evolution of change mechanisms in organizations, this would seem to approach the ideal arrangement. Qualified change agents can be found on some university faculties, or they may be private consultants associated with such organizations as the National Training Laboratories Institute for Applied Behavioral Science (Washington, D.C.) University Associates (San Diego, California), the Human Systems Intervention graduate program in the Department of Applied Human Sciences (Concordia University, Montreal, Canada), Navitus (Pvt) Ltd (Pakistan), and similar organizations.
The change agent may be a staff or line member of the organization who is schooled in OD theory and technique. In such a case, the "contractual relationship" is an in-house agreement that should probably be explicit with respect to all of the conditions involved except the fee.
The initiative for OD programs comes from an organization that has a problem. This means that top management or someone authorized by top management is aware that a problem exists and has decided to seek help in solving it. There is a direct analogy here to the practice of psychotherapy: The client or patient must actively seek help in finding a solution to his problems. This indicates a willingness on the part of the client organization to accept help and assures the organization that management is actively concerned.
Applied behavioral science
One of the outstanding characteristics of OD that distinguishes it from most other improvement programs is that it is based on a "helping relationship." Some believe that the change agent is not a physician to the organization's ills; that s/he does not examine the "patient," make a diagnosis, and write a prescription. Nor does s/he try to teach organizational members a new inventory of knowledge which they then transfer to the job situation. Using theory and methods drawn from such behavioral sciences as industrial/organizational psychology, industrial sociology, communication, cultural anthropology, administrative theory, organizational behavior, economics, and political science, the change agent's main function is to help the organization define and solve its own problems. The basic method used is known as action research. This approach, which is described in detail later, consists of a preliminary diagnosis, collecting data, feedback of the data to the client, data exploration by the client group, action planning based on the data, and taking action.
OD deals with a total system — the organization as a whole, including its relevant environment — or with a subsystem or systems — departments or work groups — in the context of the total system. Parts of systems, for example, individuals, cliques, structures, norms, values, and products are not considered in isolation; the principle of interdependency, that is, that change in one part of a system affects the other parts, is fully recognized. Thus, OD interventions focus on the total culture and cultural processes of organizations. The focus is also on groups, since the relevant behavior of individuals in organizations and groups is generally a product of group influences rather than personality.
Improved organizational performance
The objective of OD is to improve the organization's capacity to handle its internal and external functioning and relationships. This would include such things as improved interpersonal and group processes, more effective communication, enhanced ability to cope with organizational problems of all kinds, more effective decision processes, more appropriate leadership style, improved skill in dealing with destructive conflict, and higher levels of trust and cooperation among organizational members. These objectives stem from a value system based on an optimistic view of the nature of man — that man in a supportive environment is capable of achieving higher levels of development and accomplishment. Essential to organization development and effectiveness is the scientific method — inquiry, a rigorous search for causes, experimental testing of hypotheses, and review of results.
The ultimate aim of OD practitioners is to "work themselves out of a job" by leaving the client organization with a set of tools, behaviors, attitudes, and an action plan with which to monitor its own state of health and to take corrective steps toward its own renewal and development. This is consistent with the systems concept of feedback as a regulatory and corrective mechanism.
Kurt Lewin played a key role in the evolution of organization development as it is known today. As early as World War II, Lewin experimented with a collaborative change process (involving himself as consultant and a client group) based on a three-step process of planning, taking action, and measuring results. This was the forerunner of action research, an important element of OD, which will be discussed later. Lewin then participated in the beginnings of laboratory training, or T-groups, and, after his death in 1947, his close associates helped to develop survey-research methods at the University of Michigan. These procedures became important parts of OD as developments in this field continued at the National Training Laboratories and in growing numbers of universities and private consulting firms across the country. Two of the leading universities offering doctoral level degrees in OD are Benedictine University and the Fielding Graduate University.
While "consulting together at General Mills in the 1950's," Douglas McGregor and Richard Beckhard "coined the term organizational development (OD) to describe an innovative bottoms-up change effort that fit no traditional consulting categories" (Weisbord, 1987, p. 112).
The failure of off-site laboratory training to live up to its early promise was one of the important forces stimulating the development of OD. Laboratory training is learning from a person's "here and now" experience as a member of an ongoing training group. Such groups usually meet without a specific agenda. Their purpose is for the members to learn about themselves from their spontaneous "here and now" responses to an ambiguous hypothetical situation. Problems of leadership, structure, status, communication, and self-serving behavior typically arise in such a group. The members have an opportunity to learn something about themselves and to practice such skills as listening, observing others, and functioning as effective group members.
As formerly practiced (and occasionally still practiced for special purposes), laboratory training was conducted in "stranger groups," or groups composed of individuals from different organizations, situations, and backgrounds. A major difficulty developed, however, in transferring knowledge gained from these "stranger labs" to the actual situation "back home". This required a transfer between two different cultures, the relatively safe and protected environment of the T-group (or training group) and the give-and-take of the organizational environment with its traditional values. This led the early pioneers in this type of learning to begin to apply it to "family groups" — that is, groups located within an organization. From this shift in the locale of the training site and the realization that culture was an important factor in influencing group members (along with some other developments in the behavioral sciences) emerged the concept of organization development.
Case history

The Cambridge Clinic found itself having difficulty with its internal working relationships. The medical director, concerned with the effect these problems could have on patient care, contacted an organizational consultant at a local university and asked him for help. A preliminary discussion among the director, the clinic administrator, and the consultant seemed to point to problems in leadership, conflict resolution, and decision processes. The consultant suggested that data be gathered so that a working diagnosis could be made. The clinic officials agreed, and tentative working arrangements were concluded.
The consultant held a series of interviews involving all members of the clinic staff, the medical director, and the administrator. Then the consultant "thematized", or summarized, the interview data to identify specific problem areas. At the beginning of a workshop about a week later, the consultant fed back to the clinic staff the data he had collected.
The staff arranged the problems in the following priorities:
1. Role conflicts between certain members of the medical staff were creating tensions that interfered with the necessity for cooperation in handling patients.
2. The leadership style of the medical director resulted in his putting off decisions on important operating matters. This led to confusion and sometimes to inaction on the part of the medical and administrative staffs.
3. Communication between the administrative, medical, and outreach (social worker) staffs on mutual problems tended to be avoided. Open conflicts over policies and procedures were thus held in check, but suppressed feelings clearly had a negative influence on interpersonal and intergroup behavior.
Through the use of role analysis and other techniques suggested by the consultant, the clinic staff and the medical director were able to explore the role conflict and leadership problems and to devise effective ways of coping with them. Exercises designed to improve communication skills and a workshop session on dealing with conflict led to progress in developing more openness and trust throughout the clinic. An important result of this first workshop was the creation of an action plan that set forth specific steps to be applied to clinic problems by clinic personnel during the ensuing period. The consultant agreed to monitor these efforts and to assist in any way he could. Additional discussions and team development sessions were held with the director and the medical and administrative staffs.
A second workshop attended by the entire clinic staff took place about two months after the first. At the second workshop, the clinic staff continued to work together on the problems of dealing with conflict and interpersonal communication. During the last half-day of the meeting, the staff developed a revised action plan covering improvement activities to be undertaken in the following weeks and months to improve the working relationships of the clinic.
A notable additional benefit of this OD program was that the clinic staff learned new ways of monitoring the clinic's performance as an organization and of coping with some of its other problems. Six months later, when the consultant did a follow-up check on the organization, the staff confirmed that interpersonal problems were now under better control and that some of the techniques learned at the two workshops associated with the OD programs were still being used.
Organization development is a system-wide application and transfer of behavioral science knowledge to the planned development, improvement, and reinforcement of the strategies, structures, and processes that lead to organization effectiveness.
Weisbord presents a six-box model for understanding organizations:
- Purposes: Are organization members clear about the organization's mission, purpose, and goals, and do people support the organization's purpose?
- Structure: How is the organization's work divided up? The question is whether there is an adequate fit between the purpose and the internal structure.
- Relationships: Between individuals, between units or departments that perform different tasks, and between people and the requirements of their jobs.
- Rewards: The consultant should diagnose the similarities and differences between what the organization formally rewards or punishes members for and what members actually feel rewarded or punished for.
- Leadership: Is to watch for blips among the other boxes and maintain balance among them.
- Helpful mechanisms: The processes that every organization must attend to in order to survive, such as planning, control, budgeting, and other information systems that help organization members accomplish their work.
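As a purely illustrative sketch (not part of Weisbord's own formulation, and with paraphrased question wording), the six boxes can be represented as a simple diagnostic checklist that flags weak areas:

```python
# Illustrative sketch only: Weisbord's six boxes as a diagnostic
# checklist. The question wording is paraphrased, not an official
# instrument; the rating scale and threshold are assumptions.
SIX_BOXES = {
    "Purposes": "Are members clear about, and do they support, the mission and goals?",
    "Structure": "Does the internal structure fit the purpose?",
    "Relationships": "Do individuals, units, and people-to-job fits work well together?",
    "Rewards": "Does what is formally rewarded match what members feel rewarded for?",
    "Leadership": "Is someone watching for blips among the boxes and keeping balance?",
    "Helpful mechanisms": "Do planning, control, budgeting, and information systems help?",
}

def weak_boxes(ratings, threshold=3):
    """Return the boxes rated below the threshold (1 = poor, 5 = strong)."""
    return [box for box, score in ratings.items() if score < threshold]

# Example diagnosis: everything is adequate except leadership.
ratings = {box: 4 for box in SIX_BOXES}
ratings["Leadership"] = 2
print(weak_boxes(ratings))  # ['Leadership']
```

A consultant's real diagnosis is qualitative, of course; the point of the sketch is only that the model directs attention box by box rather than to the organization as an undifferentiated whole.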
In recent years, serious questioning has emerged about the relevance of OD to managing change in modern organizations. The need for "reinventing" the field has become a topic that even some of its "founding fathers" are discussing critically.
With this call for reinvention and change, scholars have begun to examine organizational development from an emotion-based standpoint. For example, deKlerk (2007) writes about how emotional trauma can negatively affect performance. Due to downsizing, outsourcing, mergers, restructuring, continual changes, invasions of privacy, harassment, and abuses of power, many employees experience the emotions of aggression, anxiety, apprehension, cynicism, and fear, which can lead to performance decreases. deKlerk (2007) suggests that in order to heal the trauma and increase performance, O.D. practitioners must acknowledge the existence of the trauma, provide a safe place for employees to discuss their feelings, symbolize the trauma and put it into perspective, and then allow for and deal with the emotional responses. One method of achieving this is by having employees draw pictures of what they feel about the situation, and then having them explain their drawings with each other. Drawing pictures is beneficial because it allows employees to express emotions they normally would not be able to put into words. Also, drawings often prompt active participation in the activity, as everyone is required to draw a picture and then discuss its meaning.
The use of new technologies combined with globalization has also shifted the field of organization development. Roland Sullivan (2005) defined organization development with participants at the 1st Organization Development Conference for Asia, held in Dubai in 2005, as "Organization Development is a transformative leap to a desired vision where strategies and systems align, in the light of local culture with an innovative and authentic leadership style using the support of high tech tools."
Wendell L French and Cecil Bell defined organization development (OD) at one point as "organization improvement through action research". If one idea can be said to summarize OD's underlying philosophy, it would be action research as it was conceptualized by Kurt Lewin and later elaborated and expanded on by other behavioral scientists. Concerned with social change and, more particularly, with effective, permanent social change, Lewin believed that the motivation to change was strongly related to action: If people are active in decisions affecting them, they are more likely to adopt new ways. "Rational social management", he said, "proceeds in a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about the result of action".
Lewin's description of the process of change involves three steps:
"Unfreezing": Faced with a dilemma or disconfirmation, the individual or group becomes aware of a need to change.
"Changing": The situation is diagnosed and new models of behavior are explored and tested.
"Refreezing": Application of new behavior is evaluated, and if reinforcing, adopted.
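Lewin's three steps form a strict sequence, which can be sketched (purely as an illustration; the descriptions paraphrase the text above and are not Lewin's words) as an ordered enumeration:

```python
from enum import Enum

class ChangeStage(Enum):
    # Lewin's three steps, in order; descriptions paraphrase the text above.
    UNFREEZING = "become aware of a need to change"
    CHANGING = "diagnose the situation; explore and test new models of behavior"
    REFREEZING = "evaluate the new behavior; adopt it if reinforcing"

def change_process():
    """Yield Lewin's stages in their defined order."""
    for stage in ChangeStage:
        yield stage.name, stage.value

stages = [name for name, _ in change_process()]
print(stages)  # ['UNFREEZING', 'CHANGING', 'REFREEZING']
```

The ordering matters: refreezing without a prior unfreezing stage is exactly the failure mode the earlier "stranger lab" training ran into when learning did not transfer back home.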
Figure 1 summarizes the steps and processes involved in planned change through action research. Action research is depicted as a cyclical process of change. The cycle begins with a series of planning actions initiated by the client and the change agent working together. The principal elements of this stage include a preliminary diagnosis, data gathering, feedback of results, and joint action planning. In the language of systems theory, this is the input phase, in which the client system becomes aware of problems as yet unidentified, realizes it may need outside help to effect changes, and shares with the consultant the process of problem diagnosis.
The second stage of action research is the action, or transformation, phase. This stage includes actions relating to learning processes (perhaps in the form of role analysis) and to planning and executing behavioral changes in the client organization. As shown in Figure 1, feedback at this stage would move via Feedback Loop A and would have the effect of altering previous planning to bring the learning activities of the client system into better alignment with change objectives. Included in this stage is action-planning activity carried out jointly by the consultant and members of the client system. Following the workshop or learning sessions, these action steps are carried out on the job as part of the transformation stage.
The third stage of action research is the output, or results, phase. This stage includes actual changes in behavior (if any) resulting from corrective action steps taken following the second stage. Data are again gathered from the client system so that progress can be determined and necessary adjustments in learning activities can be made. Minor adjustments of this nature can be made in learning activities via Feedback Loop B (see Figure 1). Major adjustments and reevaluations would return the OD project to the first, or planning, stage for basic changes in the program. The action-research model shown in Figure 1 closely follows Lewin's repetitive cycle of planning, action, and measuring results. It also illustrates other aspects of Lewin's general model of change. As indicated in the diagram, the planning stage is a period of unfreezing, or problem awareness. The action stage is a period of changing, that is, trying out new forms of behavior in an effort to understand and cope with the system's problems. (There is inevitable overlap between the stages, since the boundaries are not clear-cut and cannot be in a continuous process). The results stage is a period of refreezing, in which new behaviors are tried out on the job and, if successful and reinforcing, become a part of the system's repertoire of problem-solving behavior.
Action research is problem centered, client centered, and action oriented. It involves the client system in a diagnostic, active-learning, problem-finding, and problem-solving process. Data are not simply returned in the form of a written report but instead are fed back in open joint sessions, and the client and the change agent collaborate in identifying and ranking specific problems, in devising methods for finding their real causes, and in developing plans for coping with them realistically and practically. Scientific method in the form of data gathering, forming hypotheses, testing hypotheses, and measuring results, although not pursued as rigorously as in the laboratory, is nevertheless an integral part of the process. Action research also sets in motion a long-range, cyclical, self-correcting mechanism for maintaining and enhancing the effectiveness of the client's system by leaving the system with practical and useful tools for self-analysis and self-renewal.
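The cyclical, self-correcting character of action research can be sketched as a loop. This is an assumption-laden toy model (the function name, the "cycles of work needed" encoding, and the resolution rule are all invented for the demo, not part of the action-research literature), but it shows the plan-act-measure-replan shape described above:

```python
# Toy sketch of the action-research cycle: plan, act, measure results,
# and feed unresolved problems back into the next planning stage
# (Feedback Loop B). All names and the resolution rule are assumptions.

def action_research(problems, max_cycles=5):
    """problems: dict mapping a problem to the cycles of work it needs."""
    remaining = dict(problems)
    log = []
    for cycle in range(1, max_cycles + 1):
        # Planning stage (unfreezing): joint diagnosis of remaining problems.
        plan = sorted(remaining)
        # Action stage (changing): one cycle of corrective steps per problem.
        for p in plan:
            remaining[p] -= 1
        # Results stage (refreezing): measure; resolved problems drop out,
        # unresolved ones loop back into the next planning stage.
        remaining = {p: n for p, n in remaining.items() if n > 0}
        log.append((cycle, plan, sorted(remaining)))
        if not remaining:
            break
    return log

log = action_research({"role conflict": 1, "communication": 2})
print(log[-1])  # (2, ['communication'], [])
```

In the Cambridge Clinic case above, the two workshops play exactly this role: the second workshop re-plans on the problems the first cycle left unresolved.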
Important figures in the field include:
- Chris Argyris
- Richard Beckhard
- Robert R. Blake
- Roland Sullivan
- Louis L. Carter
- David Cooperrider
- W. Edwards Deming
- Fred Emery
- Charles Handy
- Elliott Jaques
- Kurt Lewin
- Rensis Likert
- Jane Mouton
- Derek S. Pugh
- Edgar Schein
- Donald Schon
- Peter Senge
- Herbert Shepard
- Eric Trist
- Margaret J. Wheatley
"Interventions" are principal learning processes in the "action" stage (see Figure 1) of organization development. Interventions are structured activities used individually or in combination by the members of a client system to improve their social or task performance. They may be introduced by a change agent as part of an improvement program, or they may be used by the client following a program to check on the state of the organization's health, or to effect necessary changes in its own behavior. "Structured activities" mean such diverse procedures as experiential exercises, questionnaires, attitude surveys, interviews, relevant group discussions, and even lunchtime meetings between the change agent and a member of the client organization. Every action that influences an organization's improvement program in a change agent-client system relationship can be said to be an intervention.
There are many possible intervention strategies from which to choose. Several assumptions about the nature and functioning of organizations are made in the choice of a particular strategy. Beckhard lists six such assumptions:
- The basic building blocks of an organization are groups (teams). Therefore, the basic units of change are groups, not individuals.
- An always relevant change goal is the reduction of inappropriate competition between parts of the organization and the development of a more collaborative condition.
- Decision making in a healthy organization is located where the information sources are, rather than in a particular role or level of hierarchy.
- Organizations, subunits of organizations, and individuals continuously manage their affairs against goals. Controls are interim measurements, not the basis of managerial strategy.
- One goal of a healthy organization is to develop generally open communication, mutual trust, and confidence between and across levels.
- People support what they help create. People affected by a change must be allowed active participation and a sense of ownership in the planning and conduct of the change.
Interventions range from those designed to improve the effectiveness of individuals through those designed to deal with teams and groups, intergroup relations, and the total organization. There are interventions that focus on task issues (what people do), and those that focus on process issues (how people go about doing it). Finally, interventions may be roughly classified according to which change mechanism they tend to emphasize: for example, feedback, awareness of changing cultural norms, interaction and communication, conflict, and education through either new knowledge or skill practice.
One of the most difficult tasks confronting the change agent is to help create in the client system a safe climate for learning and change. In a favorable climate, human learning builds on itself and continues indefinitely during man's lifetime. Out of new behavior, new dilemmas and problems emerge as the spiral continues upward to new levels. In an unfavorable climate, in contrast, learning is far less certain, and in an atmosphere of psychological threat, it often stops altogether. Unfreezing old ways can be inhibited in organizations because the climate makes employees feel that it is inappropriate to reveal true feelings, even though such revelations could be constructive. In an inhibited atmosphere, therefore, necessary feedback is not available. Also, trying out new ways may be viewed as risky because it violates established norms. Such an organization may also be constrained because of the law of systems: If one part changes, other parts will become involved. Hence, it is easier to maintain the status quo. Hierarchical authority, specialization, span of control, and other characteristics of formal systems also discourage experimentation.
The change agent must address himself to all of these hazards and obstacles. Some of the things which will help him are:
- A real need in the client system to change
- Genuine support from management
- Setting a personal example: listening, supporting behavior
- A sound background in the behavioral sciences
- A working knowledge of systems theory
- A belief in man as a rational, self-educating being fully capable of learning better ways to do things.
A few examples of interventions include team building, coaching, Large Group Interventions, mentoring, performance appraisal, downsizing, TQM, and leadership development.
- OD Topics
- Action research
- Ambidextrous organization
- Appreciative inquiry
- Chaos theory in organizational development
- Collaborative method
- Corporate Education
- Decision Engineering
- Designing OD education
- Employee research
- Executive development
- Executive education
- Future Search
- Group dynamics
- Group development
- Knowledge Management
- Leadership development
- Managing change
- Organizational communication
- Organizational climate
- Organizational culture
- Organizational diagnostics
- Organizational engineering
- Organizational learning
- Organizational performance
- Performance improvement
- Positive Adult Development
- Process improvement
- Social network
- Strategic planning
- Succession planning
- Systems intelligence
- Systems theory
- Systems thinking
- Team building
- Team composition
- Value network
- Workplace democracy
- Workplace spirituality
- Workforce planning
- OD in context
- Argyris, C.; Schon, D. (1978). Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley. ISBN 0201001748.
- Carter, Louis L. (2004). Best Practices in Leadership Development and Organization Change. Jossey-Bass. ISBN 0787976253.
- Nonaka, I.; Takeuchi, H. (1995). The Knowledge Creating Company. New York: Oxford University Press. ISBN 0195092694.
- Sullivan, Roland (2010). Practicing Organization Development: A Guide for Leading Change. Jossey-Bass. ISBN 0470405449.
- Western, S. (2010). What Do We Mean by Organizational Development. Krakow: Advisio Press.
- Rother, Mike (2009). Toyota Kata. McGraw-Hill. ISBN 0071635238.
- Senge, Peter M. (1990). The Fifth Discipline. Doubleday/Currency. ISBN 0385260946.
- Cummings, Thomas G.; Worley, Christopher G. Organization Development & Change. Thomson South-Western. ISBN 8131502872.
- ^ Smith, A. (1998). Training and Development in Australia, 2nd ed., p. 261. Sydney: Butterworths.
- ^ a b c d Richard Arvid Johnson. Management, systems, and society: an introduction. Pacific Palisades, Calif.: Goodyear Pub. Co.
- ^ a b Richard Beckhard (1969). Organization development: strategies and models. Reading, Mass.: Addison-Wesley. p. 114. ISBN 0876205406. OCLC 39328.
- ^ a b Wendell L French; Cecil Bell. Organization development: behavioral science interventions for organization improvement. Englewood Cliffs, N.J.: Prentice-Hall.
- ^ Weisbord, Marvin. (1987). Productive Workplace: Organizing and managing for dignity, meaning and community. Jossey-Bass Publishers, San Francisco.
- ^ a b c d Richard Arvid Johnson (1976). Management, systems, and society: an introduction. Pacific Palisades, Calif.: Goodyear Pub. Co. pp. 223–229. ISBN 0876205406. OCLC 2299496.
- ^ Bradford, D.L. & Burke, W.W. eds, (2005). Organization Development. San Francisco: Pfeiffer.
- ^ Bradford, D.L. & Burke, W.W.(eds), 2005, Reinventing Organization Development. San Francisco: Pfeiffer.
- ^ deKlerk, M. (2007). Healing emotional trauma in organizations: An O.D. framework and case study. Organization Development Journal, 25(2), 49-56.
- ^ a b c Kurt Lewin (1958). Group Decision and Social Change. New York: Holt, Rinehart and Winston. p. 201.
- ^ a b c Richard Arvid Johnson (1976). Management, systems, and society: an introduction. Pacific Palisades, Calif.: Goodyear Pub. Co. pp. 224–226. ISBN 0876205406. OCLC 2299496.
- ^ Wendell L French; Cecil Bell (1973). Organization development: behavioral science interventions for organization improvement. Englewood Cliffs, N.J.: Prentice-Hall. chapter 8. ISBN 0136416624. OCLC 314258.
Wikimedia Foundation. 2010. | <urn:uuid:9593b8e6-7dec-4761-bb09-41986f67aab6> | CC-MAIN-2021-21 | https://en-academic.com/dic.nsf/enwiki/144288 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989614.9/warc/CC-MAIN-20210511122905-20210511152905-00376.warc.gz | en | 0.930779 | 6,874 | 2.65625 | 3 |
If you start or begin something, you do it from a particular time. There is no difference in meaning.
The past tense of begin is began. The -ed participle is begun.
You can use a to-infinitive or an -ing form after start and begin.
Don't use an -ing form after starting or beginning. Don't say, for example, 'I'm beginning understanding more'. You must say 'I'm beginning to understand more'.
Start and begin can be intransitive verbs, used to say that something happens from a particular time.
Start has some special meanings. You don't use 'begin' with any of these meanings.
You use start to say that someone makes a machine or engine start to work.
You use start to say that someone creates a business or other organization.
Past participle: started
Noun 1. start - the beginning of anything; "it was off to a good start"
beginning - the event consisting of the start of something; "the beginning of the war"
adrenarche - the increase in activity of the adrenal glands just before puberty
menarche - the first occurrence of menstruation in a woman
thelarche - the start of breast development in a woman at the beginning of puberty
opener - the first event in a series; "she played Chopin for her opener"; "the season's opener was a game against the Yankees"
alpha - the beginning of a series or sequence; "the Alpha and Omega, the first and the last, the beginning and the end"--Revelations
curtain raising, opening night, opening - the first performance (as of a theatrical production); "the opening received good critical reviews"
start-off, send-off, kickoff - a start given to contestants; "I was there with my parents at the kickoff"
racing start - the start of a race
2. start - the time at which something is supposed to begin; "they got an early start"; "she knew from the get-go that he was the man for her"
birth - the time when something begins (especially life); "they divorced after the birth of the child"; "his election signaled the birth of a new age"
incipience, incipiency - beginning to exist or to be apparent; "he placed the incipience of democratic faith at around 1850"; "it is designed to arrest monopolies in their incipiency"
threshold - the starting point for a new state or experience; "on the threshold of manhood"
3. start - a turn to be a starter (in a game at the beginning); "he got his start because one of the regular pitchers was in the hospital"; "his starting meant that the coach thought he was one of their best linemen"
4. start - a sudden involuntary movement; "he awoke with a start"
inborn reflex, innate reflex, instinctive reflex, physiological reaction, reflex, reflex action, reflex response, unconditioned reflex - an automatic instinctive unlearned reaction to a stimulus
startle reaction, startle response - a complicated involuntary reaction to a sudden unexpected stimulus (especially a loud noise); involves flexion of most skeletal muscles and a variety of visceral reactions
Moro reflex, startle reflex - a normal reflex of young infants; a sudden loud noise causes the child to stretch out the arms and flex the legs
5. start - the act of starting something; "he was responsible for the beginning of negotiations"
change of state - the act of changing something into something different in essential characteristics
jumping-off point, point of departure, springboard - a beginning from which an enterprise is launched; "he uses other people's ideas as a springboard for his own"; "reality provides the jumping-off point for his illusions"; "the point of departure of international comparison cannot be an institution but must be the function it carries out"
activation - making active and effective (as a bomb)
establishment, constitution, formation, organisation, organization - the act of forming or establishing something; "the constitution of a PTA group last year"; "it was the establishment of his reputation"; "he still remembers the organization of the club"
first appearance, introduction, debut, entry, launching, unveiling - the act of beginning something new; "they looked forward to the debut of their new product line"
face-off - (ice hockey) the method of starting play; a referee drops the puck between two opposing players
groundbreaking, groundbreaking ceremony - the ceremonial breaking of the ground to formally begin a construction project
housing start - the act of starting to construct a house
icebreaker - a beginning that relaxes a tense or formal atmosphere; "he told jokes as an icebreaker"
inauguration, startup - the act of starting a new operation or practice; "he opposed the inauguration of fluoridation"; "the startup of the new factory was delayed by strikes"
founding, instauration, origination, initiation, innovation, creation, institution, introduction, foundation - the act of starting something for the first time; introducing something new; "she looked forward to her initiation as an adult"; "the foundation of a new scientific society"
installation, installing, instalment, installment - the act of installing something (as equipment); "the telephone installation took only a few minutes"
jump ball - (basketball) the way play begins or resumes when possession is disputed; an official tosses the ball up between two players who jump in an effort to tap it to a teammate
kickoff - (football) a kick from the center of the field to start a football game or to resume it after a score
scrum, scrummage - (rugby) the method of beginning play in which the forwards of each team crouch side by side with locked arms; play starts when the ball is thrown in between them and the two sides compete for possession
startup - the act of setting in operation; "repeated shutdowns and startups are expensive"
6. start - a line indicating the location of the start of a race or a game
line - in games or sports; a mark indicating positions or bounds of the playing area
7. start - a signal to begin (as in a race); "the starting signal was a green light"; "the runners awaited the start"
8. start - the advantage gained by beginning early (as in a race); "with an hour's start he will be hard to catch"
Verb 1. start - take the first step or steps in carrying out an action; "We began working at dawn"; "Who will start?"; "Get working as soon as the sun rises!"; "The first tourists began to arrive in Cambodia"; "He began early in the day"; "Let's get down to work now"
recommence - begin again; "we recommenced his reading after a short nap"
strike out - set out on a course of action; "He struck out on his own"
fall - begin vigorously; "The prisoners fell to work right away"
jump off - set off quickly, usually with success; "The freshman jumped off to a good start in his math class"
get to - arrive at the point of; "She gets to fretting if I stay away from home too long"
auspicate - commence in a manner calculated to bring good luck; "They auspicated the trip with a bottle of champagne"
attack - set to work upon; turn one's energies vigorously to a task; "I attacked the problem as soon as I got out of bed"
break in - start in a certain activity, enterprise, or role
launch, plunge - begin with vigor; "He launched into a long diatribe"; "She plunged into a dangerous adventure"
come on - occur or become available; "water or electricity came on again after the earthquake"
get moving, get rolling, get started, get weaving, bestir oneself, get cracking, get going - start to be active; "Get cracking, please!"
begin - begin to speak, understand, read, and write a language; "She began Russian at an early age"; "We started French in fourth grade"
2. start - set in motion, cause to start; "The U.S. started a war in the Middle East"; "The Iraqis began hostilities"; "begin a new chapter in your life"
jumpstart, jump-start - start or re-start vigorously; "The Secretary of State intends to jumpstart the Middle East Peace Process"
recommence - cause to start anew; "The enemy recommenced hostilities after a few days of quiet"
usher in, inaugurate, introduce - be a precursor of; "The fall of the Berlin Wall ushered in the post-Cold War period"
set off - set in motion or cause to begin; "The guide set the tour off to a good start"
embark on, start up, commence, start - get off the ground; "Who started this company?"; "We embarked on an exciting enterprise"; "I start my day with a good breakfast"; "We began the new semester"; "The afternoon session begins at 4 PM"; "The blood shed started when the partisans launched a surprise attack"
begin - have a beginning, of a temporal event; "WW II began in 1939 when Hitler marched into Poland"; "The company's Asia tour begins next month"
3. start - leave; "The family took off for Florida"
go forth, leave, go away - go away from a place; "At what time does your train leave?"; "She didn't leave until midnight"; "The ship leaves at midnight"
roar off - leave; "The car roared off into the fog"
4. start - have a beginning, in a temporal, spatial, or evaluative sense; "The DMZ begins right over the hill"; "The second movement begins after the Allegro"; "Prices for these homes start at $250,000"
bud - start to grow or develop; "a budding friendship"
break out - begin suddenly and sometimes violently; "He broke out shouting"
begin, start - have a beginning characterized in some specified way; "The novel begins with a murder"; "My property begins with the three maple trees"; "Her day begins with a workout"; "The semester begins with a convocation ceremony"
begin - have a beginning, of a temporal event; "WW II began in 1939 when Hitler marched into Poland"; "The company's Asia tour begins next month"
kick in, set in - enter a particular state; "Laziness set in"; "After a few moments, the effects of the drug kicked in"
dawn - appear or develop; "The age of computers had dawned"
originate - begin a trip at a certain point, as of a plane, train, bus, etc.; "The flight originates in Calcutta"
5. start - bring into being; "He initiated a new program"; "Start a foundation"
lead up, initiate - set in motion, start an event or prepare the way for; "Hitler's attack on Poland led up to World War II"
set - apply or start; "set fire to a building"
6. start - get off the ground; "Who started this company?"; "We embarked on an exciting enterprise"; "I start my day with a good breakfast"; "We began the new semester"; "The afternoon session begins at 4 PM"; "The blood shed started when the partisans launched a surprise attack"
commence, lead off, start, begin - set in motion, cause to start; "The U.S. started a war in the Middle East"; "The Iraqis began hostilities"; "begin a new chapter in your life"
open - begin or set in action, of meetings, speeches, recitals, etc.; "He opened the meeting with a long speech"
7. start - move or jump suddenly, as if in surprise or alarm; "She startled when I walked into the room"
move - move so as to change position, perform a nontranslational motion; "He moved his hand slightly to the right"
shy - start suddenly, as from fright
boggle - startle with amazement or fear
rear back - start with anger or resentment or in protest
jackrabbit - go forward or start with a fast, sudden movement
8. start - get going or set in motion; "We simply could not start the engine"; "start up the computer"
kick-start - start (a motorcycle) by means of a kick starter
hot-wire - start (a car engine) without a key by bypassing the ignition interlock; "The woman who lost the car keys had to hot-wire her van"
jumpstart, jump-start, jump - start (a car engine whose battery is dead) by connecting it to another car's battery
stop - cause to stop; "stop a car"; "stop the thief"
9. start - begin or set in motion; "I start at eight in the morning"; "Ready, set, go!"
come on, go on, come up - start running, functioning, or operating; "the lights went on"; "the computer came up"
get off the ground, take off - get started or set in motion, used figuratively; "the project took a long time to get off the ground"
10. start - begin work or acting in a certain capacity, office or job; "Take up a position"; "start a new job"
take office - assume an office, duty, or title; "When will the new President take office?"
11. start - play in the starting lineup
play - participate in games or sport; "We played hockey all afternoon"; "play cards"; "Pele played for the Brazilian teams in many important matches"
12. start - have a beginning characterized in some specified way; "The novel begins with a murder"; "My property begins with the three maple trees"; "Her day begins with a workout"; "The semester begins with a convocation ceremony"
begin, start - begin an event that is implied and limited by the nature or inherent function of the direct object; "begin a cigar"; "She started the soup while it was still hot"; "We started physics in 10th grade"
be - have the quality of being; (copula, used with an adjective or a predicate noun); "John is rich"; "This is not a good answer"
begin, start - have a beginning, in a temporal, spatial, or evaluative sense; "The DMZ begins right over the hill"; "The second movement begins after the Allegro"; "Prices for these homes start at $250,000"
begin - be the first item or point, constitute the beginning or start, come first in a series; "The number `one' begins the sequence"; "A terrible murder begins the novel"; "The convocation ceremony officially begins the semester"
13. start - begin an event that is implied and limited by the nature or inherent function of the direct object; "begin a cigar"; "She started the soup while it was still hot"; "We started physics in 10th grade"
act, move - perform an action, or work out or perform (an action); "think before you act"; "We must move quickly"; "The governor should act on the new energy bill"; "The nanny acted quickly by grabbing the toddler and covering him with a wet towel"
14. start - bulge outward; "His eyes popped"
Antonyms:
set about: stop, finish, delay, abandon, conclude, quit, cease, wind up, put off, put aside, call it a day (informal), desist
begin: end, stop, finish, conclude, cease, terminate
set in motion: end, stop, finish, abandon, conclude, wind up, bring to an end
establish: end, finish, give up, abandon, conclude, wind up, terminate, bring to an end
start up: stop, turn off, switch off
beginning: end, finish, conclusion, result, stop, outcome, wind-up, finale, termination, cessation, denouement
at the start → al principio, en un principio
at the very start → muy al principio, en los mismos comienzos
at the start of the century → a principios del siglo
we are at the start of something big → estamos en los comienzos de algo grandioso
for a start → en primer lugar, para empezar
from the start → desde el principio
from start to finish → desde el principio hasta el fin
to get a good start in life → disfrutar de una infancia privilegiada
to get off to a good/bad/slow start → empezar bien/mal/lentamente
to give sb a (good) start in life → ayudar a algn a situarse en la vida
to make a start → empezar
to make a start on the painting → empezar a pintar
to make an early start (on journey) → ponerse en camino temprano; (with job) → empezar temprano
to make a fresh or new start in life → hacer vida nueva
to give sb five minutes' or a five-minute start → dar a algn cinco minutos de ventaja
to have a start on sb → tener ventaja sobre algn
to start a new cheque book/page → comenzar or empezar un talonario nuevo/una página nueva
don't start that again! → ¡no vuelvas a eso!
to start doing sth or to do sth → empezar a hacer algo
start moving! → ¡menearse!
start talking! → ¡desembucha!
to start sth again or afresh → comenzar or empezar algo de nuevo
to start the day right → empezar bien el día
he always starts the day with a glass of milk → lo primero que toma cada mañana es un vaso de leche
he started life as a labourer → empezó de or como peón
to start a new life → comenzar una vida nueva
to start negotiations → iniciar or entablar las pláticas
to start a novel → empezar a escribir (or leer) una novela
to start school → empezar a ir al colegio
he started work yesterday → entró a trabajar ayer
it started the collapse of the empire → provocó el derrumbamiento del imperio
you started it! → ¡tú diste el primer golpe!
to start a family → (empezar a) tener hijos
to start a race (= give signal for) → dar la señal de salida para una carrera
to get started → empezar, ponerse en marcha
let's get started → empecemos
to get sth started [+ engine, car] → poner algo en marcha, arrancar algo; [+ project] → poner algo en marcha
to get sb started (on activity) → poner a algn en marcha; (in career) → iniciar a algn en su carrera
to get started on (doing) sth → empezar a hacer algo
to get sb started on (doing) sth → poner a algn a hacer algo
to start sb (off) reminiscing → hacer que algn empiece a contar sus recuerdos
that started him (off) sneezing → eso le hizo empezar a estornudar
to start sb (off) on a career → ayudar a algn a emprender una carrera
they started her (off) in the sales department → la emplearon primero en la sección de ventas
classes start on Monday → las clases comienzan or empiezan el lunes
that's when the trouble started → entonces fue cuando empezaron los problemas
it all started when he refused to pay → todo empezó cuando se negó a pagar
it started (off) rather well/badly [film, match] → empezó bastante bien/mal
to start again or afresh → volver a empezar, comenzar de nuevo
he started (off or out) as a postman → empezó como or de cartero
he started (off or out) as a Marxist → empezó como marxista
to start at the beginning → empezar desde el principio
he started (off) by saying → empezó por decir or diciendo ...
the route starts from here → la ruta sale de aquí
starting from Tuesday → a partir del martes
to start (out or up) in business → montar or poner un negocio
to start (off) with (= firstly) → en primer lugar ..., para empezar ...; (= at the beginning) → al principio ..., en un principio ...
what shall we start (off) with? → ¿con qué empezamos?
to start (off) with a prayer → empezar con una oración
he started (off or out) with the intention of writing a thesis → empezó con la intención de escribir una tesis
to start on a task → emprender una tarea
to start on something new → emprender algo nuevo
to start on a book (= begin reading) → empezar a leer un libro; (= begin writing) → empezar a escribir un libro
to start on a course of study → empezar un curso
they started on another bottle → abrieron or empezaron otra botella
to start (off or out) from London/for Madrid → salir de Londres/partir con rumbo a or para Madrid
he started (off) down the street → empezó a caminar calle abajo
tears started to her eyes → se le llenaron los ojos de lágrimas
his eyes were starting out of his head → se le saltaban los ojos de la cara
then she started in → luego ella metió su cuchara
see start C1, C3
see start B6
see start C1, C3
see start C1, C4
see start B4, B5, B7
the start of the tax year → le début de l'année fiscale
It's not much, but it's a start → Ce n'est pas grand chose, mais c'est un début.
a fresh start
We need a fresh start
BUT Il nous faut prendre un nouveau départ.
to make a fresh start → prendre un nouveau départ
to make a start on sth → attaquer qch
Shall we make a start on the washing-up? → On attaque la vaisselle?
to make an early start (= begin early) → commencer de bonne heure (= leave early) → partir de bonne heure
We'll have to make an early start if we want to get there by lunchtime → Nous allons devoir partir de bonne heure si nous voulons y être à l'heure du déjeuner.
to get off to a bad start → être mal parti(e)
to get off to a good start → être bien parti(e)
at the start (= in the beginning) → au début
I was terribly lonely at the start → Je me sentais terriblement seul au début.
for a start → d'abord, pour commencer
with a start → en sursaut
He woke with a start → Il s'éveilla en sursaut.
to give a start → sursauter
My father started work when he was ten → Mon père a commencé à travailler lorsqu'il avait dix ans.
to start doing sth → se mettre à faire qch, commencer à faire qch
I started learning French three years ago → J'ai commencé à apprendre le français il y a trois ans.
He started laughing → Il s'est mis à rire.
to start to do sth → se mettre à faire qch
Ralph started to run → Ralph se mit à courir.
He couldn't start the car → Il n'a pas réussi à démarrer la voiture.
He couldn't get his engine started → Il n'arrivait pas à démarrer son moteur.
What time does it start? → À quelle heure ça commence?
The meeting starts at 7 → La réunion commence à sept heures.
to start with ... (= firstly) → d'abord ... (= at the beginning) → au début ...
don't start! > (= start complaining etc) → ne commence pas!
She started as a photographer with Picture Post → Elle a fait ses débuts de photographe au Picture Post.
The car wouldn't start → La voiture ne voulait pas démarrer.
We started off first thing in the morning → Nous sommes partis en début de matinée.
They started off to church
BUT Ils se mirent en route pour l'église.
She started us off laughing → Elle nous a fait rire.
Her mother started her off acting in children's theatre → Sa mère l'a fait débuter dans le théâtre pour enfants.
to start up in business → débuter dans les affaires
START abbr of Strategic Arms Reduction Treaty → START(-Vertrag) m
at the start → all'inizio
the start of the school year → l'inizio dell'anno scolastico
from the start → dall'inizio
for a start → tanto per cominciare
to get off to a good or flying start → cominciare bene
to make an early start → partire di buon'ora
to make a fresh (or new) start in life → ricominciare daccapo or da zero
the thieves had 3 hours' start → i ladri avevano 3 ore di vantaggio
to give sb a 5-minute start → dare un vantaggio di 5 minuti a qn
to start doing sth or to do sth → iniziare a fare qc
to start negotiations → avviare i negoziati
he started life as a labourer → ha cominciato come operaio
to start a fire → provocare un incendio
to start a race → dare il via a una gara
you started it! → hai cominciato tu!
don't start anything! → non cominciare!
don't start him on that! → non toccare quest'argomento in sua presenza!
we'd like to start a family → ci piacerebbe avere un bambino subito
starting from Tuesday → a partire da martedì
to start on a task → cominciare un lavoro
to start at the beginning → cominciare dall'inizio
it started (off) well/badly → è cominciato bene/male
she started (off) down the street → s'incamminò giù per la strada
what shall we start (off) with? → con che cosa cominciamo?
she started (off) as a nanny → ha cominciato come bambinaia
to start (off) with ... (firstly) → per prima cosa... (at the beginning) → all'inizio...
he started (off) by saying (that) ... → cominciò col dire che...
see also start 3a
to start sb off (on complaints, story) → far cominciare qn (give initial help) → aiutare qn a cominciare
that was enough to start him off → è bastato questo a dargli il via
to start out to do sth → cominciare con l'intenzione di fare qc | <urn:uuid:7a8f0f4b-80a0-4ba2-be1c-306a127531b8> | CC-MAIN-2021-21 | https://en.thefreedictionary.com/start | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00612.warc.gz | en | 0.799887 | 6,331 | 3.1875 | 3 |
“The efforts of the [Japanese] moderates to avoid a war with the United States were unsuccessful, partly because the attitude of the U.S.A. – and also that of London – became more and more obdurate. When, under pressure from the army, the Konoye cabinet agreed to the military occupation of all Indo-China, the British and Americans announced in July 1941 drastic economic sanctions. Japanese funds in the U.S.A., England and various dominions were blocked.” – Ernst Topitsch
Wracked with economic problems, overpopulation and riots, Japan embarked on a program of military aggression in the 1930s. The first act of aggression was against Manchuria, which was invaded on 18 September 1931 without orders from the government in Tokyo. The prime minister and the Japanese parliament saw the invasion as an act of insubordination by the Imperial Japanese Army; but they could do nothing to control the generals. From that point forward the military began to dictate aspects of national policy. To make matters worse, fanatical young officers butchered liberal politicians who supposedly threatened Japan’s “honor.”
After various skirmishes, bombings and political killings, war broke out between China and Japan. In 1937 an all-out invasion of China was initiated. China’s most important industrial and agricultural regions were occupied, but the vast interior of China held out against the Japanese invaders. In those days China was divided between a communist state in the north (under Mao) and a nationalist state (under General Chiang Kai-shek). This resulted in a three-way struggle – with the Chinese factions theoretically allied against the invader. By 1941 the war had devolved into a stalemate. The leftist Japanese historian Saburo Ienaga wrote, “In Japan, the few opponents of an imperialistic war against China never had enough popular support to prevent the conflict and were easily silenced.”
Japan’s military was divided into two hostile factions: Strike North and Strike South. The Strike North faction was dominated by the powerful Chōshū clan which controlled the Imperial Japanese Army. Strike North saw Soviet Russia as Japan’s natural enemy and prepared for a war on the Asian continent. Strike South was dominated by the Satsuma clan from Kyushu which controlled the Imperial Japanese Navy. Strike South believed Japan’s main enemies were the European colonial powers of Britain, France and the Netherlands. Strike South prepared for a war in the Pacific. As an island nation, Japan naturally developed into a naval power like Great Britain; but the Chōshū clan led Japan to the unusual choice of becoming a great land power as well.
Nationalism and militarism in Japan involved the cultivation of myths and the propagation of lies. It had a morally corrupting aspect, and was intellectually limiting. It gravitated toward authoritarianism. A warlike attitude was inculcated in Japan’s elementary schools. In middle school the training intensified. Ienaga wrote, “The ethics, language, and history textbooks, with their written and visual messages, had a significant jingoistic influence. Yet the military songs … hit a deeper emotional level. No amount of rational examination of the past … can erase those stirring tunes of glory from the memories of the prewar generation.”
Japan’s youth were trained for the army, for obedience to authority, and for patriotism. In 1925 military officers were assigned to every school in Japan, from middle school on up. The Western mind can hardly grasp the intensity of Japan’s militarism. Youth magazines carried articles with titles like “The Future War Between Japan and America.” At the same time, the political morality of Japan had always been pragmatic, tending toward a “might makes right” philosophy. This speaks to very old Chinese influences; but also, there was the negative example of Western imperialism. Instead of defending Asia from the “European devils,” the Japanese imitated the policies of the imperialists, taking Korea and Taiwan as colonies after the first Sino-Japanese War (July 1894 – April 1895).
Japanese militarism, however, was only one causal element leading to war in the Pacific. There were two others. First, Stalin wanted to perpetuate Japan’s war with China so that Japan could not turn its armies against the Soviet Union; second, President Franklin Roosevelt saw Japan’s war with China as a back door into World War II. Here we find that Roosevelt’s agenda coincided with Stalin’s, making Roosevelt’s motives – and those of his advisors – almost impossible to distinguish from Stalin’s.
Did Soviet agents trigger the Pacific War?
In April 1941, as Germany and the Soviet Union were preparing for war against each other, a Soviet intelligence officer named Vitalii Pavlov made contact with Harry Dexter White of the U.S. Treasury Department. The meeting took place at the Old Ebbitt Grill in Washington, D.C., with Pavlov pretending to read a copy of the New Yorker as he waited for White to arrive. Harry Dexter White had been identified by Soviet intelligence as anti-fascist and sympathetic to the Soviet Union. According to Whittaker Chambers, White had already provided secret intelligence to the Soviet Union about Japan. He was therefore ideally placed to help with a Soviet plan to trigger a war in the Pacific. To be sure, White was not the only agent in Washington who would be tapped for this assignment. There were, in those days, multiple Soviet spy rings operating in Washington, D.C.
After White arrived at the Old Ebbitt Grill he quickly identified Pavlov and sat down with the Soviet intelligence officer. Pavlov told White that the Soviet Union would soon be attacked by Hitler. Moscow was afraid that the Japanese Empire might attack the Soviet Union as well. Could White help neutralize Japan as a threat? White readily agreed to do what was necessary.
“Having received his marching orders from Vitalii Pavlov,” wrote John Koster, “Harry Dexter White sat down at his typewriter in May 1941 to change the course of history. His task was to touch off a war with Japan without being detected as a Soviet agent.” White’s plan was to write a memorandum that would propel Roosevelt onto a collision course with Japan. Already Roosevelt had embargoed Japanese scrap metal after the Japanese Imperial Army moved into northern French Indochina. Roosevelt had decided against cutting Japan’s oil for fear it would trigger a war. White’s memorandum would attempt to change Roosevelt’s mind on this issue – persuading the American president to impose an oil embargo on Japan.
White began his memorandum by comparing American policy with the prewar appeasement policies of France and Britain. He hinted that Britain and the Soviet Union might soon fall to the Germans, leaving America to face the Axis juggernaut alone. He then proposed a strange solution to the growing problem of Japan. He suggested a bizarre agreement in which the United States would “lease” half of Japan’s air force and navy as Japan withdrew from China and Indochina. If Japan did not take this deal, America would bring down Japan’s economy with an oil embargo. Roosevelt initially rejected White’s May memorandum, but some of its ideas undoubtedly remained in the back of the President’s mind.
Then came the invasion of the Soviet Union in June. Japan was then expected to occupy the southern part of French Indochina. A debate began between Roosevelt and his advisors on what to do. Treasury Secretary Morgenthau, egged on by Harry Dexter White, urged Roosevelt to cut off Japan’s oil. Roosevelt allegedly balked, arguing that the Army and Navy were not ready for war. The government’s chief advisor on Japan, Stanley Hornbeck, also argued for an oil embargo. Then, on 21 July 1941, the Japanese occupied the remaining southern part of French Indochina. The U.S. and Britain reacted by freezing Japanese assets and cutting off credit. On 28 July 1941 a Japanese tanker was turned away at the port of Tarakan in the Dutch East Indies. Japan was cut off from her principal source of oil (i.e., the Dutch East Indies).
According to John Koster, “Roosevelt’s plan was to require the Japanese to apply for export licenses, but to grant the export licenses as they were applied for – a hindrance to trade but not strangulation.” It was in Roosevelt’s character to employ petty humiliations of this kind; but it was not a safe game to play with a proud and warlike people like the Japanese. According to Koster, however, the State Department’s Dean Acheson failed to expedite the necessary Japanese export licenses. Japan had no access to oil. The embargo was strangulation, after all. (Oops!)
As a result of the oil embargo, Japanese Prime Minister Fumimaro Konoye asked for a personal meeting with Roosevelt. He would agree to any terms that would not cause the fall of his government. Roosevelt was initially pleased at the idea of meeting Konoye; but Secretary of State Cordell Hull and “Japan expert” Stanley Hornbeck opposed a meeting with Konoye. Brought low by the death of his mother and his personal secretary in early September, Roosevelt lost the desire to override his advisors and meet with Konoye. According to Koster’s interpretation, “FDR, in his bereaved confusion and his preoccupation with the survival of Britain, let three self-serving hacks and a Soviet secret agent provoke a war that he himself did not want.”
On 6 September 1941 Admiral Isoroku Yamamoto was told to prepare for war if Prime Minister Konoye’s diplomatic efforts continued to fail. On the same day, in Washington, Japanese Ambassador Nomura made the following offer to the American government: (1) “that Japan will not make any military advancement from French Indochina against any of its adjoining areas”; (2) that Japan would not feel obliged to abide by the Tripartite alliance with Germany if the United States began a war with Germany; and (3) “that Japan would endeavor to bring about the rehabilitation of [a] general and normal relationship between Japan and China, upon the realization of which Japan is ready to withdraw its armed forces from China as soon as possible….”
U.S. Secretary of State Cordell Hull said the Japanese peace proposal was vague and not acceptable. After the rejection of Japan’s peace offer, Prime Minister Konoye was replaced by General Hideki Tojo on 16 October 1941. Negotiations with the United States would continue, but Japan’s diplomats were not hopeful. They now offered to withdraw from Indochina after negotiating peace with China if access to the oil was restored. On hearing of this offer, the Soviet spy at the Treasury Department, Harry Dexter White, went into action. He wrote another memorandum to the President, with Treasury Secretary Morgenthau’s signature affixed. Writing in Morgenthau’s name, White warned Roosevelt that “persons in our country’s government are hoping to betray the cause of the heroic Chinese people….” Unless something was done, “rivers of oil” would soon flow to the Japanese war machine. White ended his boss’s memorandum by warning Roosevelt against “plotters of a new Munich.” White then wrote a memorandum under his own name, suggesting proposals that might turn Japan into a friendly neighbor. He offered Roosevelt the prospect of a glorious diplomatic victory. White set down ten demands for Japan which were passed on to Secretary of State Hull.
On 26 November 1941 Hull presented America’s demands to Japan, partly based on Harry Dexter White’s memorandum. First and foremost, Japan was told to withdraw from Indochina and China immediately, with a clause signifying Japan’s abandonment of Manchuria (as a “regime in China” other than that of the National Government in Chungking). Hull was too stupid to see that his “note” to Japan was an ultimatum. The U.S. Government’s “Japan expert,” Stanley Hornbeck, who had adapted some of White’s ideas into the “Hull note,” stupidly proclaimed in the aftermath: “The Japanese government does not intend or expect to have forthwith armed conflict with the United States….”
When news of the “Hull note” reached Tokyo, the Japanese foreign minister attempted to resign. The emperor convened a meeting of Japan’s senior politicians. Even former prime ministers who had opposed Japanese imperial expansion said that America’s demands could not be met without risk of a violent revolution in Tokyo. Many of the Japanese statesmen were baffled by the American demands. When the Japanese cabinet met, the Emperor asked for a vote. The Japanese cabinet voted for war unanimously. Japan’s carrier battlegroup, the First Air Fleet, was ordered to attack Hawaii.
Were the negotiations with Japan intentionally bungled by Roosevelt’s team? Koster thinks Soviet agent Harry Dexter White played a decisive role. Admittedly, Roosevelt was not a man attentive to details. He left things to others, especially given his poor health. But a number of questions remain about Roosevelt’s behind-the-scenes role, especially regarding Acheson’s refusal to expedite export licenses for the Japanese oil tankers. Did Roosevelt use Acheson to acquire plausible deniability in the event someone blamed him for the war? Koster’s history takes the view that Roosevelt had been hapless and naïve – that he had not intended war. As we shall see, not everyone shares this interpretation of Roosevelt’s leadership.
What was Roosevelt and Marshall’s game?
In 1985 I met James Roosevelt, the President’s son, who had worked closely with his “father” in 1941. I asked the younger Roosevelt about Pearl Harbor and whether Japan was intentionally provoked by the President. James Roosevelt had no problem answering. “Yes,” he said, “we provoked Japan on purpose to get into the war.” I had never expected to hear such a forthright admission from the President’s son.
Koster’s attempt (in his book) to make Roosevelt into an “innocent” does not fit with James Roosevelt’s admission. In fact, James Roosevelt was not apologetic about provoking the war. He was rather self-congratulatory. He thought they had done a good thing. After all, Roosevelt saw Hitler overrunning Europe. Britain lacked the army to fight Hitler. The Soviet Union was being defeated and appeared on the verge of collapse. It is understandable, from a strategic standpoint, that some American strategists would seek for a way to intervene sooner rather than later.
We should also take account of the odd behavior of General George C. Marshall, who delayed sending a warning message to General Short in Hawaii when military intelligence officers discovered that the Japanese were planning to break off diplomatic relations. After seeing Parts 1-13 of a decoded Japanese diplomatic cable on the evening of 6 December, President Roosevelt said, “This means war.” The details follow: At 0238 Eastern Standard Time, on the morning of 7 December, Part 14 of a Japanese coded message was intercepted regarding Tokyo’s reply to the “Hull Note.” At 0730 the document had been translated and was viewed by military officers. It appeared the Japanese were breaking off negotiations. An officer in Admiral Stark’s office pointed to the virulency of Part 14’s language. Perhaps this signaled the start of hostilities. It was then suggested that an additional warning be sent to Pearl Harbor. But nothing was done. Meanwhile, Colonel Rufus Bratton, head of the Far Eastern Section of G-2, was reading his copy of Part 14 when an intercept came through of a much shorter message sent from Tokyo to the Japanese Ambassador: “Will the Ambassador please submit to the United States Government (if possible to the Secretary of State) our reply to the United States at 1:00 P.M. on the 7th, your time.”
Colonel Bratton was stunned by this detail. Why was the Japanese foreign ministry dictating an exact time of delivery on a Sunday? This was not a normal working day for diplomats. No previous Japanese diplomatic cable had specified an exact time for delivery of a note. Bratton thought that all Pacific commands should be immediately warned. Of course, like everyone else in Washington, he was not thinking of Pearl Harbor. As he said later, “Nobody in ONI, nobody in G-2, knew that any major element of the fleet was in Pearl Harbor on Sunday morning the 7th of December. We all thought they had gone to sea … because that was part of the war plan, and they had been given a war warning.”
Bratton went to find a superior officer who could take action. But it was a Sunday morning, and here was the main advantage of attacking on a Sunday. At around 0900 Bratton called General Marshall’s quarters. He was told that the general had gone horseback riding. Bratton told Marshall’s orderly to find the general immediately. It was, said Bratton, “vitally important that he communicate with me at the earliest practicable moment.” But the message was never delivered, because Marshall had not gone horseback riding (though he initially testified to Congress that he had gone horseback riding). In December 1945 General Marshall changed his testimony, claiming to have had a faulty memory. He was not horseback riding after all, but at home with his wife (where Colonel Bratton had originally tried to reach him). The mystery of Marshall’s whereabouts on that eventful Sunday morning was revealed in the biography of Soviet Ambassador Maxim Litvinoff. On page 473 of that biography it states, “On the morning of Sunday, December 7, Litvinoff’s plane arrived at Bolling Field, Washington, D.C. He was received by Brigadier General Philip R. Faymonville … General Marshall and Admiral King….”
According to Bratton’s testimony, General Marshall finally called him back at 1030 that morning. Marshall told Bratton to come to his office, which was a ten-minute drive from Marshall’s residence. Marshall did not arrive at his office until around 11:15. Marshall then sat down to read the first 13 parts of the decoded Japanese diplomatic instructions while Bratton tried to interrupt him with news of the 1 P.M. deadline message. Marshall would not allow Bratton to interrupt. At 11:45 Marshall apparently realized the significance of what Bratton was trying to tell him and wrote out a warning to America’s Pacific commanders. Marshall briefly spoke to Admiral Stark, who offered the use of the Navy’s powerful radio stations to broadcast a warning message. Marshall declined the offer. Marshall’s message to the Pacific commanders read as follows:
JAPANESE ARE PRESENTING AT ONE P.M. EASTERN STANDARD TIME TODAY WHAT AMOUNTS TO AN ULTIMATUM ALSO THEY ARE UNDER ORDERS TO DESTROY THEIR CODE MACHINE IMMEDIATELY STOP JUST WHAT SIGNIFICANCE THE HOUR SET MAY HAVE WE DO NOT KNOW BUT BE ON ALERT ACCORDINGLY STOP INFORM NAVAL AUTHORITIES OF THIS COMMUNICATION. /SS/ MARSHALL
It was now 11:52 A.M. in Washington and 6:22 A.M. in Hawaii. The Japanese attack was little more than an hour away. Marshall had yet to warn General Short and Admiral Kimmel. His warning message had been written out. But who did he send the message to first? To the Caribbean Defense Command in Panama, the least likely to be attacked. The next message was sent to General MacArthur in the Philippines. Next it was sent to the Western Defense Command in San Francisco. The minutes were ticking by, and General Marshall was overlooking Hawaii. Now it was 12:17 P.M. Eastern Standard Time; but for some unknown reason, the radio transmission to General Short failed to reach Fort Shafter in Hawaii. Marshall then sent the message “via the Western Union land lines between Washington and San Francisco, then by RCA radio to Honolulu,” wrote Stinnett. “The transmission delay has never been adequately explained.” Please note: the Western Union telegram from Marshall arrived while the attack was in progress.
Later, Marshall would claim he could not remember talking directly with Bratton on 7 December. Surely, a general should remember a worried colonel frantically trying to get hold of him on the first day of a war. According to Robert B. Stinnett, “Tracing the Army’s delivery of the identical set of [Japanese] … intercepts during the weekend [of 7 December] is labyrinthine. Evasive accounts from some of the Army’s top generals of World War II contribute to the complexity. The trail is obscured by charges of intimidation, perjured testimony, coercion of witnesses, and obstruction of justice. Two of the most famous and respected American generals of World War II – General George C. Marshall and Lieutenant General Walter Bedell Smith – are involved.” (Keep these two generals’ names uppermost in your mind as we cover the latter two years of the war; for they are at the heart of other “mysterious” events.)
Marshall’s handling of the Pearl Harbor aftermath reads more like an episode of the Sopranos than a “day in the life” of the Army’s Chief of Staff. Did General Marshall intentionally prevent a timely warning from reaching Hawaii? Yes! It seems he did exactly that! In fact, he disappeared, then he delayed, then he dithered and, finally, he made a pig’s breakfast of sending his warning message to Hawaii.
On 6 October 1944 the Army Pearl Harbor Board concluded a three-month investigation with a report that damaged General Marshall’s reputation. The report stated, “[Marshall failed] to get to General Short on the evening of December 6 and the early morning of December 7, the critical information indicating an almost immediate break with Japan, though there was ample time to have accomplished this.”
Why did Roosevelt fail to call an emergency meeting of his military commanders on the evening of 6 December when he realized that war was coming? Why did Marshall lie about meeting Soviet Ambassador Litvinoff’s plane on the morning of 7 December? Why has the truth been hidden all these years? Did the President of the United States sacrifice an American fleet to save Stalin from Hitler? Franklin Roosevelt is considered to be a great man. But was he, really? Many historians believe Roosevelt was primarily concerned with saving Great Britain. What if he wasn’t?
Robert Stinnett’s research suggests that Roosevelt and Marshall knew Pearl Harbor would be the target of Japan’s inevitable attack. Stinnett shows evidence that the United States had broken Japan’s naval codes before Pearl Harbor was attacked – something that has long been denied. After the publication of the initial hardcover version of Day of Deceit, Stinnett “unearthed over four thousand communications intelligence documents – all of them never before examined – that provide additional confirmation of America’s foreknowledge of Japan’s attack on Pearl Harbor….”
According to Freedom of Information documents obtained in May 2000, “by mid-November 1941, as Japanese naval forces headed for Hawaii, America’s cryptographers had solved the principal Japanese naval codes.” Subsequently, when Japan’s top admirals broadcast a series of radio messages disclosing that Pearl Harbor was the target of their raid, the Americans were reading those messages in real time.
American codebreakers translated four radio messages from 5 November to 2 December indicating that Pearl Harbor was the primary target of a Japanese attack. In his own messages, Admiral Chuichi Nagumo, Commander of the First Air Fleet, “violated every security rule” issuing “radio orders … that Japan would attack America, Great Britain, and the Netherlands in the first part of December (transmitted November 5, 1941).” On 26 November Admiral Yamamoto broadcast a message to Admiral Nagumo, instructing him to head out from Hitokappu Bay into the North Pacific and refuel north of Hawaii. On 2 December Admiral Nagano set the precise date for beginning hostilities.
On the evening of 7 December 1941, radio broadcaster Edward R. Murrow and his wife were invited to dinner at the White House. After dinner Murrow was invited to a special meeting with the President. Also in attendance at this meeting was William “Wild Bill” Donovan, then Roosevelt’s Coordinator of Information and future chief of the wartime Office of Strategic Services (OSS). (Donovan had been summoned to the meeting by none other than James Roosevelt, the President’s son). The meeting was held in Roosevelt’s study and lasted about 25 minutes. What we know about the meeting was confided by Donovan to his assistant, William J. vanden Heuvel, who wrote the details in his diary.
Roosevelt was concerned about public reaction to the Japanese attack. Roosevelt asked Murrow and Donovan if the attack would unite Americans behind a declaration of war against the Axis. They both agreed it would. As the conversation progressed Donovan sensed that Roosevelt welcomed the Japanese attack, and did not seem surprised by it. During the discussion, Roosevelt claimed that he sent an advanced warning to Pearl Harbor that a Japanese attack was imminent. Heuvel records the following words, allegedly spoken by Roosevelt: “They caught our ships like lame ducks! Lame ducks, Bill. We told them, at Pearl Harbor and everywhere else, to have the lookouts manned. But they still took us by surprise.”
Yet seeking reassurance from Murrow and Donovan, the President read a telegram from T. North Whitehead at the British Foreign Office stating that America was now united. But was it? Roosevelt was still unsure. Murrow and Donovan assured him that the country was united. In relation to this strange conversation, Edward R. Murrow publicly denied that Roosevelt had advanced knowledge of Japan’s attack. Yet, in the wake of this meeting Murrow could not sleep. At 1 A.M. on the morning of 8 December, Murrow told his wife, “It’s the biggest story of my life, but I don’t know if it’s my duty to tell it or forget it.” Whatever Murrow heard on the evening of 7 December 1941, he took it to his grave.
notes and links
Ernst Topitsch, Stalin’s War, p. 123.
John Koster, Operation Snow: How a Soviet Mole in FDR’s White House Triggered Pearl Harbor, p. 23.
Saburo Ienaga, The Pacific War, 1931-1945: A Critical Perspective on Japan’s Role in World War II (New York: Pantheon Books, 1978), p.3.
Koster, Chapter One.
Ibid., p. 123.
Ibid., p. 112.
The account of the Pavlov-White meeting was put together by John Koster from Lt. Gen. Vitalii Pavlov’s book, Operation Snow: Half a Century at KGB Foreign Intelligence, which has never been translated into English. The Pavlov-White meeting is also attested in Herbert Romerstein and Eric Breindel’s book, The Venona Secrets, pp. 29-44, which cites decrypts of Soviet coded message traffic referring to the Pavlov-White rendezvous.
Koster, Chapter 9, on the November Memorandum.
Gordon W. Prange, At Dawn We Slept: The Untold Story of Pearl Harbor (New York: Penguin, 1981), p. 486.
Arthur Upham Pope, Maxim Litvinoff (New York: L.B. Fischer, 1943), p. 473.
Robert B. Stinnett, Day of Deceit: The Truth About FDR and Pearl Harbor (New York: Simon & Schuster, 2000), p. 233.
Ibid., p. 235.
Ibid., p. 261.
Ibid., pp. 1-4.
Chapter 6: Jackson's Valley campaign
Before taking up the history of affairs before Richmond in June, 1862, with Lee at the head of the army, it is necessary to review events in the Valley of Virginia. This Valley constituted the only route by which a Confederate army could invade Maryland and threaten Washington City. Cool judgment at the head of affairs, after Washington had once been fortified against an attack by open assault, might have laughed at any idea of real danger from such an invasion. It should have been clear to all that no invasion could maintain itself long enough to carry on a siege, or to do more than to fight one great battle. The trouble was the lack of railroad transportation. Wagons alone would have to be relied upon to bring all supplies from Staunton, Va., a distance via the Valley roads of nearly 200 miles to Washington. But fear, approaching panic, took possession of Washington whenever a Confederate force appeared in the Valley, and every other operation would be suspended to concentrate all efforts upon driving it out.
This oversensitiveness of the Federals cut its greatest figure in 1862, and was, more than once, the only salvation of Richmond. For the Confederate generals understood it, and as the situation in front of Richmond became more threatening, they sought more earnestly to reenforce the Valley.
It happened that Stonewall Jackson had been assigned as the commander of the Valley District in Nov., ‘61, and the reader has already been told of the battle of Kernstown, which he fought there on Mar. 23, ‘62. After that battle he had fallen back with his division, about 8000 strong, to Swift Run Gap. Ewell, with about as many more, was at Gordonsville, and Edward Johnson, with about 3000, was near Staunton.
The Federals had made in West Virginia two separate departments. That of the Shenandoah, under Banks, included the Valley, in which Banks had, in April, about 19,000 men near Harrisonburg. About 40 miles west in the mountains was Fremont, commanding what was called the Mountain Department, in which he had about 15,000 men. About 3700 of these, under Milroy, were at McDowell, a point 25 miles west of Staunton.
On April 29, Jackson proposed to Lee that he, Jackson, should unite his own force and Johnson's and attack Milroy, and drive them back into the mountains. Then returning quickly, and being joined by Ewell, his whole force should fall upon Banks. Lee approved the project and committed its entire execution to Jackson. Ewell's division was brought up to Swift Run Gap to observe Banks, while Jackson concealed his object by marching his own division back across the Blue Ridge, and moving from a railroad station near Charlottesville by rail to Staunton. Here he united with Johnson and marched rapidly upon Milroy.
He had started on April 30, and, taking a country road, had been three days in moving his guns and trains through 12 miles of mud to reach a metalled road. He had intended to rest over Sunday, May 4, but news of Fremont's cavalry having advanced induced him reluctantly to put his infantry upon the cars and move to Staunton on that day. On May 7 he left Staunton, and on May 8 he confronted Milroy, who had been reenforced by Schenck. Jackson kept most of his force concealed, and about 2500 Federals were advanced against him in the afternoon. A sharp affair ensued with about 2800 of Jackson's force, holding the crest of a steep ridge more exposed to fire than was the enemy.

[Map: Jackson's Valley campaign, May and June, 1862]

The latter only lost about 250 killed and wounded, while the Confederates lost 498; but next morning the Federals retreated. Jackson pursued for two or three days, going nearly to Franklin, and then on May 12 turned back, damaging and obstructing all roads behind him, and thus practically neutralizing for a while Fremont's whole force.
He now marched to unite with Ewell and to strike at Banks. Friday, May 16, had been appointed by the Confederate President a day of fasting and prayer, and it was spent in camp at Lebanon Springs.
Meanwhile, during Jackson's absence, the situation in the Valley had changed. Shields's division, about 9000 men, had been taken from Banks and ordered to join McDowell at Fredericksburg, where the latter would await it before advancing to join McClellan. This reduced Banks's force to about 10,000, and he had been withdrawn down the Valley to Strasburg, which he was ordered to fortify and hold.
Jackson had now, with Ewell's division, about 16,000 men. On May 20 he arrived at New Market, whence there were two roads to Winchester. The western, the most direct and shortest, went by Strasburg; the eastern, crossing the Massanutten Mountains, followed the valley of the South Fork of the Shenandoah to Front Royal, about 12 miles east of Strasburg. Then, crossing the river, it united with the direct road at Newtown, within 12 miles of Winchester.
His march was by the eastern route and was conducted with such secrecy that the enemy had no idea that he was within 60 miles, when, at 1 P. M., May 23, his skirmishers attacked a Federal outpost at Front Royal held by Col. Kenly with about a thousand men and two guns. Kenly, seeing a much superior force, set fire to his camp, and, crossing the Shenandoah, also set fire to the bridge behind him; but Jackson's men rushed in and saved it, though so damaged as to make the use of it slow and difficult. Jackson, crossing at a ford with the 6th Va. Cav. under Col. Flournoy, charged the enemy, capturing the two guns and 600 prisoners, the enemy losing 154 killed and wounded, and the Confederates comparatively few.
Even a more brilliant success might have resulted here but for an unfortunate failure of our staff service, as follows:—

As he approached Front Royal from the south, about three and a half miles from the town, a rough country road diverged to the east and gave a second approach to the town by an obscure route of about eight miles over some steep hills. The more surely to avoid the enemy's pickets and to execute a surprise, Jackson had taken the head of his column by this road. But after striking the enemy's pickets near Front Royal, he sent back orders for the rear brigades to follow the short and nearly level highway to the town. As usual at that time in the Confederate armies, the courier service was performed by a small detachment of cavalry, temporarily detailed; not by specially selected men, as was later practised. In this case the courier selected to carry the order not only failed to deliver it, but took himself off, and was never heard of again. It resulted that Jackson waited in vain the whole afternoon for the coming up of most of his artillery and infantry. Part of it only arrived after dark, completely exhausted by its laborious march; and one of his brigades, tired out, encamped four miles short of Front Royal. The cream of the whole occasion was thus lost.
Banks did not appreciate the situation until next morning, and only toward 10 o'clock did he get off from Strasburg in retreat for Winchester. Jackson, too, was able to make only a late start, and, being delayed by forces sent out by Banks to protect his right flank, he missed, by two hours, intercepting Banks's infantry, though he captured and destroyed about 100 wagons, and took some prisoners. There was much delay, also, from poor discipline in both the Confederate infantry and cavalry, especially in the latter. It was not easy for either to resist the temptations offered by so many wagons loaded with articles of food and clothing, calculated to appeal strongly to Confederate wants. But if time was thus wasted, Jackson made it up by pushing his march for the greater part of the night. It was 3 A. M. when he finally allowed his exhausted men to lie down and sleep, and they were now near enough to Winchester to make it sure that Banks could not get away without a battle.
Early in the morning Jackson attacked. The enemy made a stubborn resistance, having a good position but an inferior force. He was finally, however, broken and driven from the town in great confusion. Jackson, in his official report, says of the occasion:—

‘Never have I seen an opportunity when it was in the power of cavalry to reap a richer harvest of the fruits of victory. Hoping that the cavalry would soon come up, the artillery, followed by infantry, was pressed forward for about two hours for the purpose of preventing by artillery fire a re-forming of the enemy; but as nothing was heard of the cavalry, and as but little or nothing could be accomplished without it in the exhausted condition of our infantry, between which and the enemy the distance was constantly increasing, I ordered a halt and issued orders for going into camp and refreshing the men.’
This had been the critical moment of Jackson's whole strategic movement. He had successfully concentrated a superior force upon his enemy, and routed him, and needed but his cavalry to reap the full fruits of a great success. He had three regiments of cavalry, — the 7th under Col. Turner Ashby, and the 2d and 6th, which, the day before, had been placed under the command of Gen. Geo. H. Steuart. Ashby's regiment was recruited in the Valley and was noted for every good quality except discipline. Being near their homes, the opportunity to loot the captured trains had been peculiarly seductive, and the regiment for some days was but little more than a company. With his small force remaining, Ashby, unfortunately, the night before, had ridden to Berryville, fearing the enemy might attempt to escape by Snicker's Gap. The 2d and 6th regiments under Steuart were with Ewell's troops on the right of the attack, Jackson being with the left. There was no reason, therefore, except our fatal facility of blundering, why these two regiments should not have been promptly at hand, and, for once, the spectacle be seen of a Confederate army reaping the fruits of victory.
The story is a curious one, and is told in Jackson's official report as follows:—

‘I had seen but some 50 of Ashby's cavalry since prior to the pillaging scenes of the previous evening, and none since an early hour of the past night. The 2d and 6th Va. regiments of cavalry were under the command of Brig.-Gen. Geo. H. Steuart of Ewell's command. After the pursuit had been continued for some distance beyond the town, and seeing nothing of the cavalry, I despatched my aide-de-camp, Lt. Pendleton, to Gen. Steuart with an order “to move as rapidly as possible and join me on the Martinsburg turnpike and carry on the pursuit of the enemy with vigor.” His reply was that he was under the command of Gen. Ewell and the order must come through him. Such conduct and consequent delay has induced me to require of Lt. (now Maj.) Pendleton a full statement of the case, which is forwarded herewith.’
Pendleton tells how Steuart, who was a graduate of West Point and an officer of the old army, had refused and failed to obey Jackson's order for immediate action, because not given through a division commander. Jackson then goes on to say:—

‘About an hour after the halt of the main body had been ordered, Brig.-Gen. Geo. H. Steuart, with his cavalry, came up, and renewing the pursuit pushed forward in a highly creditable manner and succeeded in capturing a number of prisoners; but the main body of Banks's army was now beyond the reach of successful pursuit, and effected its escape across the Potomac. Before reaching Bunker Hill Gen. Steuart was joined by Gen. Ashby with a small portion of his cavalry. Upon my inquiring of Gen. Ashby why he was not where I desired him at the close of the engagement, he stated that he had moved to the enemy's left for the purpose of cutting off a portion of his force. Gen. Steuart pushed on to Martinsburg, where he captured a large amount of army stores. There is good reason for believing that had the cavalry played its part in this pursuit as well as the four companies had done under Col. Flournoy two days before in the pursuit from Front Royal, but a small portion of Banks's army would have made its escape to the Potomac.’
This narrative shows how our efficiency was impaired by our deficiencies of discipline. Our strategy, marching, and fighting had all been excellent. Yet, owing to the failure of one courier, and a single mistake of narrow-mindedness in a general, Banks had escaped with but trifling loss of men or material. The campaign, however, had not been undertaken to capture men or material. Its great object was to break up McDowell's proposed march from Fredericksburg to reenforce McClellan. This, it will be seen, was fully accomplished by the help of the following chapter of accidents, and just at the critical moment.
McDowell had been ordered to march as soon as he was joined by Shields. It arrived on May 22. Only one day was needed to equip it for the march to Richmond, but the loss of three days followed. Its artillery ammunition had been condemned by an inspector, and a second day was lost waiting for ammunition which had been delayed by the grounding of a schooner near Alexandria. Everything, however, was ready by the night of the 24th, and McDowell was anxious to march on Sunday, the 25th. But a third day's delay now ensued from Mr. Lincoln's superstitious feeling that his chances of success might be improved by showing some special regard for the Sabbath. McDowell's official report says:—

‘I was now ready to march with over 40,000 men and over 100 pieces of artillery. Though I could have started, and would have started, Sunday, yet it was resolved not to march till Monday; this out of deference to the wishes of the President, who was with me at the time, having come down Friday night, and with the concurrence of the Secretary of War, on account of the day.’
When it is remembered that the distance to unite with McClellan could have been easily covered within three marches, one is impressed with the influence of small events upon great matters, especially when the small events involve the loss of time, even of hours. It has already been told how McDowell did actually start, but, having made only a part of a day's march, he was recalled and sent after Jackson. Had he made even a full day, it is very doubtful if he would have been recalled.
On the morning of Sunday, the 25th, everything in Washington seemed bright. Those best posted, and in highest authority, confidently expected the early fall of Richmond, and had good reason for their expectations. Indeed, the New York Herald that morning had had a leader headed, ‘Fall of Richmond.’ By noon the papers were issuing extras headed, ‘Defeat of Banks.’ A volcanic eruption could scarcely have startled the administration more. Telegrams were sent the governors of a dozen states calling for instant help to save the capital. Reenforcements were rushed to Williamsport and Harper's Ferry to assist Banks. McDowell's march, already begun before orders could reach it, was countermanded, and half his force, under Shields, was hurried to the Valley to attack Jackson from the east, while Fremont's 15,000 attacked from the west.
McDowell, who was a good soldier, appreciated that no force possible for Jackson to have collected could accomplish any serious results, and remonstrated, and begged in vain, to be allowed to carry out his projected march upon Richmond. When this was refused, he suggested that he be directed upon Gordonsville; but this too was overruled, and Shields and Ord were directed to march upon Strasburg, toward which point also Fremont was ordered.
Jackson, having gone into camp about noon on Sunday, the 25th, when his infantry and artillery could no longer pursue the enemy, felt moved, even as Lincoln had done, to recognize the Sabbath by making up for the services missed in the morning. His official report says:—

‘On the following day (the 26th), divine service was held for the purpose of rendering thanks to God for the success with which He had blessed our arms, and to implore His continued favor.’
During the next two or three days he made demonstrations toward the Potomac, advancing his troops to Charlestown and within two miles of Harper's Ferry; but these demonstrations were only for their moral effect at the North, and to occupy time while he filled his wagons with captured stores and prepared a convoy of a double line of wagons near seven miles long, and about 2300 prisoners. Only on the 30th did he put his columns in motion toward the rear.
Had his opponents acted boldly and swiftly, their positions would now have enabled them to cut off Jackson's retreat and to overwhelm him. But the moral effect of his reputation doubtless caused some hesitation, and Jackson's entire force and his whole convoy, with some skirmishing at Front Royal and at Wardensville, passed between his converging foes at Strasburg on the 31st, a portion of one of his brigades making in one day a march of 36 miles. Besides the prisoners and stores brought off, Jackson left about 700 Federal sick and wounded at Winchester, and burned many stores for which he had no transportation. Two guns and over 9000 muskets were saved.
After passing Strasburg on the 31st, the race was continued up the main Shenandoah Valley, with Jackson leading and Fremont following in his tracks, while Shields advanced up the Luray Valley on the east. At New Market the road from Luray enters the Valley through Massanutten Gap, but Jackson had sent cavalry ahead who burned the bridges by which Shields might have had access. At Conrad's store another bridge across the South Fork gave a road to Harrisonburg, and Shields rushed his cavalry ahead to gain possession of it, but again he was too late. Meanwhile, there had been a severe rain-storm on June 2, and though Shields could hear the guns of Jackson's rear-guard and Fremont's advance on the other side of the Massanutten Mountains, he was powerless to cross.
On Thursday, June 5, Jackson reached Harrisonburg, and here diverged east to cross the South Fork upon the bridge at Port Republic. On the 6th, in a severe cavalry affair of the rearguard, Gen. Turner Ashby was killed. Of the civilian soldiers whom the war produced, such as Forrest and others, scarcely one gave such early and marked indication of rare military genius as Ashby.
On the 7th Jackson's advance at night reached the vicinity of Port Republic. This village is situated in the angle between the North and South rivers, which here unite and form the South Fork of the Shenandoah. The North River is the larger of the two, and the road from Harrisonburg crosses it by a wooden bridge. The South River was fordable.
On the morning of Sunday, the 8th, Jackson had sent two companies of cavalry across the river to scout on the Luray road toward Shields. About 8 A. M. these companies were driven back in a rout and followed into the village by a body of Federal cavalry, who, with four guns and a brigade of infantry following, formed Shields's advance. Jackson himself was in the village and narrowly escaped capture, riding across the bridge over the North River. Three of his staff were captured, but afterward escaped. Three brigades of infantry, however, and three batteries were near at hand, and the Federals were soon brought under a fire that sent them back in confusion with a loss of about 40 men and two guns which had been brought across the South River. As their leading brigade, Carroll's, fell back, it met a second brigade of Shields's division, Tyler's, with artillery, and the two brigades, selecting a position about two miles north, decided to await the arrival of Shields with the rest of the division.
Jackson left two brigades to protect the bridge, and with the remainder of his force marched back about four miles to Cross Keys, where he had left Ewell's division holding a selected position against Fremont. Fremont was now in reach of Jackson, and, by all the maxims of war, should have exerted his utmost strength to crush him. He could afford to risk fighting his last reserves, and even to wreck his army, if he might thereby detain or cripple Jackson, for other armies were coming to his help and were near at hand.
His attack, however, was weak. He had about 10,000 infantry, 2000 cavalry, and 12 batteries. Ewell had at first but 6000 infantry and 500 cavalry. Fremont brought into play about all of his artillery, but he advanced only one brigade of infantry from his left flank. This was repulsed and followed, and the whole of Fremont's left wing driven back to the shelter of his line of guns. Elsewhere there was no more than skirmishing and artillery duelling, of which the Federals usually had the best with their superior metal and ammunition. It was Jackson's role to fight only a defensive battle until he had shaken off the superior force which beset him; so the battle lingered along all day, the casualties being:—
Federal: killed 114, wounded 443, missing 127; total 684
Confederate: killed 41, wounded 232, missing 15; total 288
During the night of the 8th, Jackson returned to Port Republic and improvised a foot-bridge to carry his infantry dry shod across the South River. Early next morning, leaving a rearguard of two brigades under Trimble to delay Fremont, the rest of his force was put in motion to find and attack Shields's two brigades, which had unwisely halted about two miles from Port Republic the day before. I say unwisely, because they were only about 4000 men and 16 guns, but they had a position so beautiful that they were excusable just for the chance of fighting from it. From the river on the right it extended straight across a mile of open plain, along a hollow road running between good banks, strongly fenced, to a considerable ravine in the wooded foot-hills of the Blue Ridge. The key of the position was a high retired shoulder on the Federal left, on which were posted seven guns, strongly supported by infantry sheltered in the near-by wood, and commanding every foot of the plain.
Jackson, this morning, proposed to himself a double victory, and he built the foot-bridge across the South River to enable him to win it. He intended, by making a very early start, to fall upon Shields's two brigades and crush them, and then, doubling back upon his track, to recross the rivers and meet Fremont, whom he would expect to find advancing toward Port Republic against the opposition which Trimble could interpose. It was a good plan and entirely feasible, but two things went wrong in its execution. The first was with the foot-bridge over the South River. This was rudely constructed of a plank footway, supported upon the running-gear of wagons standing in the stream, which was about breast deep. Such a bridge may be made quite serviceable, but this one was not strongly built, and before it had been in use long it became impassable except in single file. This made the passage of each brigade over twice as long as it should have been. The second trouble was Jackson's impatience, which defeated his own purpose.
Winder's brigade, leading the column, began to cross the bridge about 4.45 A. M., and Jackson was near the head of the column. When the enemy's position was discovered, it was plain that the key position above noted was its most assailable point. Time and blood would both have been saved by bringing up at once a force amply sufficient to overwhelm it. As he had five brigades at hand, and an abundance of artillery, there need have been no failure, and no more delay than the time needed to bring up his troops. Going into battle before enough troops were brought up was sure to result in more or less disaster. Winder's brigade, about 1500 strong, with two batteries, first attacked the Federal position. It was not only badly repulsed, but the enemy gave a counterstroke, pursuing the fugitives and capturing a gun which they succeeded in carrying off. Other troops were arriving to reenforce Winder, but they were arriving too slowly. The Federal commander saw a chance to defeat his adversary by taking him in detail, and was swift to take advantage of it. He brought forward two fresh regiments from his left to reenforce an advance from his centre. In vain Jackson himself rode among his own old brigade, exposing his life freely and endeavoring to rally them. Their thin lines had been for the time practically wrecked against superior numbers in a position almost impregnable.
Fortunately, at the critical moment, relief came suddenly. Jackson had recognized the key position held by the enemy's seven-gun battery early in the morning, and had directed Taylor's fine La. brigade to attack it, and later sent a second brigade to follow Taylor. Their approach was made through forest, and the enemy were unaware of it. Taylor urged his march to the utmost, and was admonished by the sounds of the battle in the open country on his left that his friends were in need of assistance. Without waiting for the brigade which followed him, he broke cover and charged boldly on the Federal battery at just the critical moment for Jackson on the left. The sudden bursting out of so severe a battle at this vital point at once relieved the pressure upon Winder. Taylor had a desperate fight, the battery being taken and retaken and taken again, before six of its guns and all of its caissons were finally held, and its fire opened upon the now retreating Federals. Taylor's brigade lost 288 men in this action, but accomplished its victory before the arrival of its support.
It was now about 10.30 A. M. About nine Jackson had realized that he would not be able to accomplish the double victory he had hoped for, and had sent word to Patton to come across the bridges at Port Republic and to burn them. They had not been followed closely by Fremont. He only showed up on the opposite bank at noon, having had but seven miles to come. He had a pontoon train, but made no effort to cross, and confined his activity to cannonading the Confederates from the north bank, wherever he could find an opportunity, during the whole afternoon. It accomplished little harm except to the Federal wounded, driving off the ambulances which were gathering them.
pressed the retreat of Tyler
's two brigades for about nine miles down the river, capturing about 500.
He then withdrew by roads which avoided Fremont
's guns on the west bank, and went into camp between midnight and dawn on the 10th in Brown's Gap on the Blue Ridge
, some of his regiments having marched over 20 miles.
The casualties in this action were as follows, the Federals
having but two brigades engaged and the Confederates
|Confederate:||killed 94,||wounded 703,||missing 36,||total 833|
|Federal:||killed 67,||wounded 393,||missing 558,||total 1018|
The entire casualties for the whole campaign sum up as follows for the two armies:—
|Confederate:||killed 266,||wounded 1580,||missing 36,||total 1903|
|Federal:||killed 269,||wounded 1306,||missing 2402,||total 3977|
When, in his retreat, Jackson
had gotten safely past Strasburg
, the Federal
War Department gave up all hope of capturing him, and began to take measures to renew McDowell
's advance upon Richmond
One of McDowell
's divisions, McCall
's, had been held at Fredericksburg
, and, about June 6, it had been sent by water to join McClellan
upon the Peninsula
On the 8th orders were sent for McDowell
himself with Shields
's and Ord
's divisions to march for Fredericksburg
; but before these orders could have any effect there came the news of Jackson
's sharp counterstrokes at Cross Keys
and Port Republic
, which had the purely moral effect of causing the order to be countermanded.
It remained countermanded, and McDowell
and his two divisions were kept in the valley about Front Royal
until June 20.
This delay took away his last possible chance to reenforce McClellan
took the offensive.
Indeed, the movement to Fredericksburg
, resumed about June 20, was stopped on June 26 by the formation of a new army to be commanded by Gen. John Pope
It comprised the entire forces of Fremont
, and McDowell
, and was charged with the duty of overcoming the forces under Jackson
So we may now leave him and his gallant but wearied foot cavalry to enjoy about five days of rest on the banks of the Shenandoah
, and take up the story of Lee | <urn:uuid:8a029497-c78e-4f0e-9550-4401f438fa37> | CC-MAIN-2021-21 | https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:2001.05.0130:chapter=6 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988724.75/warc/CC-MAIN-20210505234449-20210506024449-00413.warc.gz | en | 0.982612 | 6,406 | 2.90625 | 3 |
“Many people tell me that the lengthy recital of kinos on TishaB’Av does not inspire them because they do not understand thepassages. Is the mourning of Tisha B’Avintended to be simply a day of much tedium?”
“I once heard a ravgive a running commentary to the kinosof Tisha B’Av, and he mentionedthat the first kinah is acontinuation of the piyut recitedduring the repetition of the shemoneh esrei.But I never saw anyone recite piyutimduring the repetition of Tisha B’Avshemoneh esrei and do not evenknow where to look for them.”
“As a child I remember that all the shullen recited piyutimduring Maariv on Yomim Tovim, and many did during Kedushah on special Shabbosos. Now, I see piyutim recited only on Rosh Hashanah and Yom Kippur. What has changed?”
Although these questions seem unrelated, they all focus on a central subject:the additions of piyutim andother special passages in our davening,of which the kinos we recite on Tisha B’Av are one example. After anintroduction explaining the background to the piyutim,I will return to answer the first two questions.
What are Piyutim?
During the period of the Rishonim,the Geonim, and even earlier,great Torah scholars wrote prayers and other liturgical works that wereinserted into many different places in the davening,particularly during the birchos keriyas shema(between borchu and shemoneh esrei) and during the repetitionof the shemoneh esrei. Standard shul practice, particularly amongAshkenazic Jewry, was to recite these piyutimon all special occasions, including YomimTovim, mourning and fast days, and special Shabbosos (see Rama, Orach Chayim 68:1; 112:2). These piyutim express the mood and the theme ofthe day, often recall the history of the day, and sometimes even provide the halachic background for the day’sobservance. At times, they served as a means for teaching people the halachos germane to the day or season. Studyingthese piyutim not only gives ustremendous appreciation for these days, but sometimes provides us with certain aspectsof mystery, as I will explain.
There is also a humbling side to the study of piyutim. All the piyutim predate the printing press andbring us back to the era when all works had to be painstakingly handcopied.Most communities could not afford hand-written manuscripts of all the piyutim, and therefore part of the job ofevery chazzan was to commit all the piyutim to memory.
Some of the more common “piyutim”
We are all aware of the selichosrecited on Fast Days and during Elul and AseresYmei Teshuvah, which are a type of piyutim.Another famous part of daveningthat qualifies as piyut is Akdamus, recited prior to keriyas hatorah on Shavuos. Thisintroduction to the keriyas haTorah forShavuos was written by RabbeinuMeir ben Yitzchak of Worms, Germany, who was one of the great leaders of AshkenazicJewry, pre-Rashi. Other examples of piyutimthat are commonly recited include TefillasTal and Tefillas Geshem,the poem dvei haseir — authoredby Dunash ibn Labrat, an early poet-grammarian cited by Rashi in several places, which is recited before bensching when one will be reciting Sheva Berachos — and nodeh lishimcha, which takes the same slotat a bris milah.
Some piyutim are used intwo different contexts. For example, the song frequently chanted at a bris, ShirahChadashah, originatedas a piyut recited immediatelybefore the close of the brachahof Ga’al Yisrael in birchas keriyas shema on the Seventh Dayof Pesach. This piyut, written by Rabbi Yehudah HaLevi,refers both to the splitting of the Yam Sufand to bris milah and is thereforeappropriate on both occasions.
Teaching Torah through Piyutim
Many times, the rabbis used poetry as a means of teaching Torah. Forexample, a very extensive literature of piyutimlists and explains the 613 mitzvos.Most of these pieces date back to the times of the Geonim; indeed, the famous count of mitzvos by Rav Saadia Gaon is actually apoem.
Other examples include piyutimthat instruct us in different special observances throughout the Jewish calendar.Among the most famous is the Seder Avodahof Yom Kippur, which is alreadyreferred to in the Gemara,although the text they used is long lost. Dozens of different piyutim were written in the period of the Geonim and Rishonim describing the SederAvodah in detail. The Rishonimdevote much halachic discussion tothe technical accuracy of several of the versions they received from earliergenerations, often taking issue and making corrections to the text of the piyut.
Reciting the Seder Avodahalso fulfills the concept of “U’neshalmaParim Sefaseinu,” “And let our lips replace the(sacrificial) bulls” (Hoshea14:3). The Midrash teaches thatwhen we are unable to offer korbanos,Hashem accepts our recital of theprocedure as a substitute for the korbanos(Midrash Rabbah, Shir HaShirim 4:3). This implies that we canachieve kapparah (atonement) byreciting these piyutim with kavanah. Therefore, a person who recitesthe viduy of the Seder Avodah and truly regrets his sinscan accomplish atonement similar to that achieved through the viduy recited by the Kohen Gadol.
Other “Substitute” Prayers
The same idea of U’neshalmaParim Sefaseinu is followed when we recite piyutim that describe other korbanos, such as, for example, thekorban omer, the water libation (nisuch hamayim) of Sukkos, or the korban Pesach. We can achieve the drawing close to Hashem that korbanos achieve by discussing them and by longing for theirreturn. This broadens the rationale for reciting piyutim.
Educating in Observing Mitzvos
Some piyutim serve notonly to teach Torah, but also to educate people how to correctly observe mitzvos. For example, the piyut ElokeiHaRuchos, recited on ShabbosHagadol contains a lengthy halachicdescription of all the preparations for Pesach, including detailed instructions for koshering andpreparing the house. This halachic-liturgicalclassic was authored by Rav Yosef Tuv-Elem, the halachic leader of French Jewry prior to Rashi’s birth. Tosafos and other Rishonim devote much debate to the halachic positions taken by Rav YosefTuv-Elem in this poem, and later Rishonim,such as Rabbeinu Tam, edited Elokei HaRuchosto reflect their opinion of what is the correct halachah. Since this piyutserves to teach people the correct way to observe Pesach, the Rishonimfelt it vital that the text be halachicallyaccurate. It is obvious that this piyutwas meant to be read, studied, and understood.
Who Authored Them?
You might ask how we know who wrote the different piyutim, particularly when many are over athousand years old!
In general, most piyutimfollow an alef beis acrostic inorder to facilitate their memorization. (Remember that they were written with theassumption that the chazzan wouldrecite them for the community from memory.) In many, if not most, instances,the author completed the work by weaving his name into the acrostic pattern heused for the particular piyut.Thus, Elokei HaRuchos begins withstanzas following an alef beis pattern,but closes with stanzas that spell YosefHakatan bar Shmuel Chazak, which is the way Rav Yosef Tuv-Elem choseto “sign” this piyut.
An Old Controversy
Early controversy surrounded the practice of interrupting the berachos of keriyas shema or the repetition of the shemoneh esrei to recite the yotzros, the word frequently used as ageneric word for all piyutiminserted into the regular davening.(The word “yotzros”originally referred only to those piyutiminserted after borchu, shortlyafter the words “yotzeir ohr uvoreichoshech.” However, in standard use, the word yotzros refers to all piyutim inserted into the berachos of keriyas shema or the repetition of the shemoneh esrei). The Shulchan Aruch, reflecting accepted Sefardicpractice, rules: “There are places that interrupt the birchos keriyas shema to recite piyutim, but it is correct not to say them,for they constitute an interruption” (OrachChayim 68:1). On this point, the Rama,reflecting early Ashkenazic practice, adds: “Others say that this is notprohibited, and the practice in all places is to recite them.” Eachcountry and community had its own special customs concerning what was said andwhen; often, this was recorded in a community ledger.
To acknowledge that these piyutiminterrupt the regular repetition of the shemonehesrei, an introductory request, beginning with the words misod chachamim unevonim (Based on the tradition of the wise and understanding)is recited prior to beginning the piyutimof the repetition of the shemoneh esrei.These words mention that early great Torah leaders advised the introduction ofthese praises.
Why piyutim have recentlyfallen into disuse
The Vilna Gaon, in his commentary to ShulchanAruch (ibid.), explains both the position of those who encouragedthe recital of yotzros and thosewho discouraged them. For the most part, the Lithuanian Yeshivos followed the personal practice ofthe Gra not to recite piyutim during the birchos keriyas shema, and did not recite yotzros during the repetition of the shemoneh esrei (Maasei Rav #57). (The Yeshivosrecite yotzros duringthe repetition of the shemoneh esreion Rosh Hashanah and Yom Kippur.) With the tremendous spreadingof shullen that follow thepractices of the Yeshivos ratherthan what was previously followed by the Ashkenazic communities, it is increasinglydifficult to find a shul cateringto yeshivah alumni that recites the piyutim. This answers the question askedabove: “As a child I remember reciting piyutimduring Maariv on Yomim Tovim and during kedushah on special Shabbosos. Now I see piyutim recited only on Rosh Hashanah and Yom Kippur. What has changed?”
Unfortunately, due to this change in custom, this vast, treasured literature of the Jewish people isquickly becoming forgotten.
Who was the First Paytan?
The title of being the earliest paytanmay belong to Rabbi Elazar HaKalir, often refered to as the Rosh HaPaytanim, who authored the lion’sshare of the kinos we recite on Tisha B’Av as well as a huge amount of ourother piyutim, including Tefillas Tal and Tefillas Geshem, the Piyutimfor the four special Shabbosos (Shekalim, Zachor,Parah and HaChodesh),and many of the yotzros we reciteon Yomim Tovim. We know virtuallynothing about him personally — we cannot even date when he lived with anyaccuracy. Indeed, some Rishonimplace him in the era of the Tanna’im shortlyafter the destruction of the Beis Hamikdash,identifying him either as Rabbi Elazar ben Arach (Shu’t Rashba 1:469), a disciple of Rabbi Yochanan benZakai, or as Rabbi Shimon ben Yochai’s son Elazar, who hid in the cave with hisfather (Tosafos, Chagigah 13as.v. Veraglei; Rosh, Berachos 5:21); others date RavElazar HaKalir much later and even during the time of the Geonim.
We do not know for certain what the name “Kalir” means. Sincethere are several places where he used the acronym “Elazar berabi Kalir,” it seems thathis father’s name was Kalir. However, the Aruchexplains that “kalir” meansa type of cookie, and that he was called hakalirbecause he ate a cookie upon which had been written a specialformula that blessed him with tremendous erudition (Aruch, eirech Kalar III).
Rabbi Elazar Hakalir’s piyutimand kinos require studying ratherthan reading. They are often extremely difficult pieces to read, relying onallusions to midrashim and historicalevents. Many commentators elucidated his works, attempting to illuminate thedepths of his words. Also, he sometimes employs extremely complicated acrostics.This is sometimes cited as proof that he lived later, when such poetic writing becamestylish, but, of course, this does not prove his lack of antiquity.
It is universally assumed that Rav Elazar HaKalir lived in Eretz Yisrael, based on the fact that wehave no piyutim written by him forthe second day of Yom Tov (Tosafos, Chagigah 13a s.v. Veraglei; Rosh,Berachos 5:21. Tosafos[op. cit.] uses this evidence to prove that he lived at the time that the Beis Din determined Rosh Chodesh on the basis of visual evidence.).However, the yotzros recited immediatelyfollowing Borchu on the secondday of Sukkos clearly include hissignature and follow his style. This, of course, would imply that Rav ElazarHaKalir lived in a time and place that the second day of Yom Tov was observed. If that is true, whywould he have written special piyutim onlyfor the second day of Sukkos andnot for any other Yom Tov?
Perhaps Rav Elazar HaKalir indeed wrote this particular piyut for the first day of Sukkos, but subsequently Diaspora Jewsmoved the yotzros the he wrote forthe first day of Yom Tov to thesecond day! This approach creates another question, since the yotzros recited on the first day of Yom Tov were also written by him: Would hehave written two sets of yotzrosfor Shacharis on Sukkos? There are other indications that,indeed, he did sometimes write more than one set of piyutim for the same day.
Why is Es Tzemach David Ignored?
There is another mysterious practice in some of his writings. The piyutim he wrote for the weekday shemoneh esrei (such as for Purim) includea paragraph for every brachah of shemoneh esrei except one, the brachahEs tzemach Davidthat precedes Shema koleinu.
Why would Rav Kalir omit this brachah?Perhaps the answer to this mystery can help us understand more about when helived.
Answering the Mystery
Our use of the words shemoneh esreito identify the focal part of our daily prayer is actually a misnomer, datingback to when this brachah indeedincluded only eighteen berachos.In the times of the Mishnah, anineteenth brachah, Velamalshinim, was added, and the Talmud Bavli notes that this increases theberachos of the “shemoneh esrei” to nineteen.
However, there is evidence that even after Velamalshinim was added, not everyone recited nineteen berachos. A Tosefta implies that they still recited eighteen berachos in the shemoneh esrei, and that two berachos, UveneiYerushalim and Es tzemach David, were combined. This would explainwhy someone would not write a piyutfor the brachah Es tzemach David, since it was no longer anindependent brachah. Thus, if wecan identify a place and time when these two berachoswere combined, we might identify more precisely when Rav Elazar HaKalir lived. Itwould seem that this would be sometime between the introduction of the bracha Velamalshinim and the Talmid Bavli’s practice of a nineteen brachah “shemoneh esrei” became accepted.
The antiquity of Rabbi Elazar’s writing did not save him fromcontroversy. No less a gadol thanthe Ibn Ezra stridently opposes using Rav Kalir’s works, arguing that prayersand piyutim should be written veryclearly and be readily understood (Commentaryto Koheles 5:1). After all, the goal of prayer is to understand whatone is saying. Ibn Ezra recommends reciting piyutimwritten by Rav Saadia Geon, which are easy to understand, rather than those ofKalir.
None of these criticisms should be taken as casting aspersion on RavElazar HaKalir’s greatness. ShibboleiHaLeket records that he heard that when Rav Elazar wrote his piyutim, the angels surrounded him withfire (quoted by the Magen Avrahamat the beginning of Siman 68.)
Rav Kalir’s piyutim in general,and his kinos in particular, arewritten in an extremely difficult poetic Hebrew. Often his ideas are left as allusions,and the story or midrash to whichhe alludes is unclear or obscure. They certainly cannot be understood without carefulscrutiny and study. Someone who takes the trouble to do this will be awed bythe beauty of the thoughts and allusions. The Arizalrecited all of the Kalir’s piyutim,because he perceived their deep kabbalistic allusions (Magen Avraham at the beginning of Siman 68).
Having completed my general introduction to the role of piyutim in Judaism, I want to paraphrasethe first question mentioned above: Why are the kinos so difficult to comprehend?
Most of the kinos werecite on Tisha B’Av are authoredby Rav Elazar HaKalir, whose works are typically written in a very poetic anddifficult Hebrew. Many commentaries have been written on the kinos, and the only way to understand his kinos is either to study them in advance,to read them together with a translation, or to hear a shiur from someone who understands them.Furthermore, I recommend reading each kinahslowly so that one can understand what the author meant. This may entail someonereciting only a few kinos for theentire morning of Tisha B’Av, buthe will understand and experience the meaning of what he read.
Other parts of Kinos
Kinos includesseveral pieces that describe specific historical events, including the death inbattle of the great king Yoshiyahu, the personal tragedy of the children ofRabbi Yishemael Kohen Gadol, the burning of the twenty-fourwagons of manuscript Talmud and commentaries in Paris, and the destruction of Jewishcommunities during the Crusades. These kinosare not as difficult to understand as Rav Kalir’s are, but, still, they aredeeply appreciated by those who prepare them, or attend a kinos presented by someone who understandsthem.
We see that liturgical poems enhance our appreciation of the day andprovide a background for our mourning. This is borne out even more with theseveral kinos that begin with theword Tzion,which all bemoan our missing the sanctity of Eretz Yisrael and our desire for thereturn of Hashem’s Shechinah. Another kinah that stands out, Az Bahaloch describes how Yirmeyahu HaNavigathered the avos and the imahos to pray on the behalf of the exiledJewish people. This story is described greater in the moving midrash which quotes how our mother Rachelbeseeched Hashem on behalf of theJewish people. Rachel points out the extent of sacrifice that she underwent tosave her sister from humiliation. In the midrash,Rachel closes her prayer: “If I, who am flesh and blood, dust and ashes,felt no jealousy towards another woman married to my own husband, and went outof my way to guarantee that she would not be embarrassed, You, the living King,All-merciful… why were You zealous against idolatry, which has no basis,and why did You exile my children… allowing the enemies to do with themas they desired?” As a result, Hashemresponded with great mercy, replying: “For your sake, Rachel, I willreturn the Jews to their home” (EichahRabbah, end of Pesichtah 24). | <urn:uuid:fc07b158-812f-420b-ac5a-60582fcc024d> | CC-MAIN-2021-21 | https://rabbikaganoff.com/piyut-curiosities/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00415.warc.gz | en | 0.928399 | 4,883 | 2.6875 | 3 |
In his chapter on “Language” in Nature, Ralph Waldo Emerson writes that “We know more from nature than we can at will communicate” (23). It is a strange sentence. It suggests a nonverbal transference, an epistemologically freighted communication, that is yet not couched in the familiar terms of language, but is instead linked to the failures of language to communicate what it is exactly that we know from nature. What do we know from nature? There is the sense here that nature, despite not speaking, somehow communicates better than we do – better than words do – because it mysteriously and uncannily gives us something “more” – something that, perhaps because it is “more,” we cannot encapsulate exactly into words or language. There then arises a problematic gulf between our experience of nature and our ability to communicate this experience in words – it is as though there is an immensity to our experience of nature, an overabundant meaningfulness, a significance or fullness – what Susanne Langer calls “import” – that can be captured in language only by indirection, and that indeed calls attention to the limits or failures of language to adequately convey our total experience. Emerson continues to elaborate on this notion of inarticulacy two sentences later in the same essay, when he writes,
The poet, the orator, bred in the woods, whose senses have been nourished by their fair and appeasing changes, year after year, without design and without heed, – shall not lose their lesson altogether, in the roar of cities or the broil of politics (23).
“Shall not lose their lesson altogether” – this nonverbal communication from nature that makes verbal communication of that experience difficult if not impossible is characterized here as a “lesson.” Here the nonverbal, despite not speaking, teaches us something – we could almost say, mentors us in some way. But what is the nature of this nonverbal mentorship? What kind of nonverbal language does nature “speak”? Or, as David Jacobson puts it differently in Emerson’s Pragmatic Vision: The Dance of the Eye, “What nature do we raise to presence?”(11).
In this essay, I will interpret scenes in Whitman’s Leaves of Grass from the 1855 “Song of Myself” in which Whitman allows nature to “speak” through his speaker’s silence and wonder, although not through any dialogue. For while there has been literature written on Whitman and oratory, as well as recent important work on Whitman in the context of humility and nature, there has been little written on Whitman and the aesthetics of silence and wonder in relationship to nature. Indeed, we can even go so far as to say that Whitman is so imbued with a wonder for nature that he imagines his voice as a kind of nature, one that has silence built into it. For when Whitman’s speaker is absorbed in wonder from certain activities – listening to the stevedores, say, or noticing a blade of grass – he performs in his poetry a kind of silence. But what is the relationship between wonder and silence? As we discuss below, the experience of wonder entails silence because (1) we are somewhat passive conceptually during an experience of wonder (we silently absorb what is being experienced); (2) we cannot exactly articulate the import of wonder (and therefore silence is a way of gesturing towards wonder without exactly articulating it); and (3) the experience of wonder contains no traces of memory (and therefore that aspect of ourselves that involves memory is silenced).
Nature’s Valved Voice
It should not be surprising that Whitman joins in the 1855 “Song of Myself” a seemingly inexpressible wonder before nature with a silence that contrasts bizarrely with his oftentimes exultant and enthusiastic tone. Whitman is saying that nature at times compels him into silence, as if the very fact of nature convinces Whitman to behave likewise. If he cannot communicate what he knows from nature, what other recourse does he have other than a form of silence? (Of course, at other times nature compels Whitman into exultant praise of nature, as if nature has produced the opposite effect, a kind of linguistic ecstasy.) Yet “Song of Myself” is rife with passages where Whitman, in observing nature, either lapses into silence or deliberately chooses to listen and not talk. Our first indication of this desire to emulate the silence and presence of nature comes in the second stanza of “Song of Myself,” where Whitman writes, “I loafe and invite my soul, / I lean and loafe at my ease….observing a spear of summer grass.” It is interesting that Whitman is “observing a spear of summer grass,” but it is also interesting what comes after this observation. Whitman does not describe the blade of grass or offer any commentary. Discursiveness of the speaker, and therefore of the reader, is silenced. Instead, he allows the spear of grass to be seen, experienced, witnessed, noticed, observed, beheld, wondered at, by himself and the reader. He attends to the spear of grass, much as Theo Davis describes Whitman bestowing light and value upon what he notices. Then, as if in unspoken commentary on what has been seen, he says nothing more. His wonder is non-discursive; he allows the givenness of the image to speak for itself, like a proto-imagist or proto-phenomenologist. A few pages later, we return to this motif of the grass and a very fascinating form of silence, when Whitman writes,
Loafe with me on the grass….loose the stop from your throat,
Not words, not music or rhyme I want….not custom or lecture, not even the best,
Only the lull I like, the hum of your valved voice (30).
It appears as though Whitman is inviting the reader to loaf with him on the grass, but in actuality Whitman is inviting his own soul to do such a thing. In the previous stanza, Whitman writes, “I believe in you my soul…the other I am must not abase itself to you, / And you must not be abased to the other.” In this scene involving lying out lazily in the grass, Whitman invokes the muse of his soul, asking his soul, as if it were a kind of bizarre musical instrument, to “loose the stop from your throat, / Not words, not music or rhyme I want…not custom or lecture, not even the best, / Only the lull I like, the hum of your valved voice.” Mark Bauerlein writes about this passage,
Sound raised to the level of the “hum” of the “valved voice,” the “password primeval,” reintegrates “old and young,” “maternal as well as paternal,” “the wicked just the same as the righteous,” into a community of visionaries whose voices are lifted together in a “chant democratic” (3).
Here I wish to problematize the notion of “a community of visionaries whose voices are lifted together in a ‘chant democratic,’” for Whitman does not talk about voice in this passage in any manner that is conventional, and the very strangeness of his poetic formulation of his voice suggests something idiosyncratic, something not easily shared within a community. In a very strange way – and despite invoking his muse – Whitman does not want the voice of his soul to speak or express itself: he does not want words, music, or even rhyme. He avoids discursiveness, and he avoids ordinary conceptual thought regarding his voice. Rather, he desires a “lull…the hum of your valved voice” – something suggesting both a conceptual inactivity and yet a cognitive activity.
This is also a beguiling passage because of the different ways in which we can read the words “lull” and “valved.” Lull as a verb relates to a soothing, a calming voice that leads one into sleep. Yet as a noun it denotes an interval of quiet, a ceasing of activity, a form of interlude or pause. These different meanings of “lull” are direct opposites – in the first meaning, we are given to imagine a continuity, a ceaseless and unaware flowing between different states of mind. In the second meaning, we are presented with a discontinuity, an interruption, a hiatus or suspension. “Valved” can also connote two very different meanings. It can refer either to an opening or a closing, an allowing or an obstructing of a fluid – here, the fluid being Whitman’s voice. Therefore, when we read about the “lull” and “valved voice,” it is as though Whitman is attempting to articulate a very different and very paradoxical conception of the poetic voice, a voice that finds parallels with Whitman’s conceptions of nature as a form of overabundantly meaningful nonverbal communication.
Wonder and silence are built within the speaker’s notion of his voice, for these significantly attend Whitman’s attitude towards nature. Nature, like Whitman’s voice, is closed and open to us; nature, like Whitman’s voice, suggests both a continuity and a discontinuity, or as Christine Gerhardt has it, “an identification and a dissociation.” Whitman wants his voice to communicate in the manner in which nature communicates to him – without language, through silence, and through the wonder that nature evokes. Even the word “hum” is strange here. Whitman does not mean it musically, as he points out that he does not desire music at all. Rather, through the line “Only the lull I like, the hum of your valved voice,” Whitman is arguing for a voice-that-is-not-a-voice, a fluid or current that still manages to “hum,” something that is non-conceptual but cognitively felt. He wishes his voice to be a presence more than a voice, a language-less language.
Wonder and Silence
It might seem strange to suggest that the experience of wonder involves a passivity. And yet this seems to be an important aspect of the phenomenology of wonder, for while our cognitive faculties might be intensely engaged during the experience, to experience wonder in something requires as well a kind of conceptual passiveness or inactiveness, which allows the experiential content of what is being seen or listened to or read to penetrate or sink into the mind. This account of wonder is heavily indebted to Kant (the intensification of our cognitive faculties) and Schopenhauer (a passiveness, or, better yet, “the complete absence of ordinary conceptual thought,” as Janaway has it (70)). Indeed, one of the arguments of this essay is that Kant’s notion of the free play of imagination and understanding involved in aesthetic judgment, and Schopenhauer’s notion of “pure, will-less contemplation,” are most germane for describing the internal dynamics of Whitman’s depictions of wonder in the 1855 “Song of Myself.” In other words, Whitman, like Kant in his Critique of the Power of Judgment, is often interested in what David Bell describes as “subjective, non-discursive mental experience” (Bell, 240) and the way in which this entails a “subjective, non-conceptual, spontaneous significance” (238). As Bell writes, speaking of the paradoxical status of the Kantian imagination as something both spontaneous and objective:
The freedom of the productive imagination, according to Kant, ‘consists precisely in the fact that it schematizes without a concept.’ To schematize without a concept is to discover in the diversity of sensory experience a felt unity, coherence, or order, which is non-cognitive and non-conceptual, but which is a necessary condition of the possibility of all rule-governed thought and judgment. One intuitive and accessible analogy for schematizing without a concept, I have suggested, is the successful coming to terms with a work of abstract expressionism; but the fully articulated model is couched in terms of our ability to enjoy a spontaneous, criteria-less, disinterested, presumptively universal, non-cognitive, reflective feeling that certain diverse elements of experience as such belong together, that they comprise an intrinsically satisfying whole in virtue of their seeming to have a point (though without it being the case that there is some specific point which they are judged to have) (238-239).
It’s important here that Bell points out that in Kant’s model, aesthetic judgment leads to a sense of import, or significance, “though without it being the case that there is some specific point which they are judged to have.” This is suggestive of a silence, an inability to fully or exactly articulate the significance or import of the reflective feeling.
This same dynamic – in which a feeling of significance dawns or erupts or happens, though it is difficult to articulate it exactly – has been commented on by Lewis Hyde in the context of the reader’s experience of Whitman’s catalogs, (though the dynamic should be applied as well to Whitman’s speaker’s own representations of his experiences of wonder towards the people and events he presents, for these presentations are non-discursive and suggest that the speaker, too, cannot exactly articulate the import of his catalogs, and so the people and events must stand for themselves). Hyde writes,
One of the effects of reading Whitman’s famous catalogs is to induce his own equanimity in the reader. Each element of creation seems equally fascinating. The poet’s eye focuses with unqualified attention on such a wide range of creation that our sense of discrimination soon withdraws for lack of use, and that part of us which can sense the underlying coherence comes forward…Whitman puts hierarchy to sleep. He attends to life wherever it moves….The contending and reckoning under which most of us suffer most of the time – in which this thing or that thing is sufficient or insufficient, this lover, that lover, this wine, that movie, this pair of pants – is laid aside (212-213).
Hyde is describing, in different terms, “a spontaneous, criteria-less, disinterested, presumptively universal, non-cognitive, reflective feeling that certain diverse elements of experience as such belong together.” As he writes, “that part of us which can sense the underlying coherence comes forward.” This sense of the “underlying coherence” suggests a sense of wonder at what is coming forward, a sense of wonder that is conceptually passive but cognitively active, and that involves a sense of significance or import that is difficult to articulate exactly.
Whitman, in other words, is often interested in “watching and wondering,” and this watching and wondering involves an awareness of the noncommensurability of things (30). But this desire to watch and wonder also seems to preclude the faculty of memory. As Philip Fischer points out, “we wonder at an object when in its presence the novelty of its features does not remind us of anything else,” suggestive also of a kind of cognitive silence (46). Indeed, for Fischer, “For the full experience of wonder there must be no description beforehand that will lead us to compare what we actually experience with what we were told…The object must be unexpectedly, instantaneously seen for the first time” (17). Therefore, “for wonder there must be no element of memory in the experience” (18). While this last assertion is problematic, for Whitman was presumably recalling (and creating) images when he wrote his catalogs, it does seem significant that, in these catalogs (and throughout most of the 1855 “Song of Myself”) there is little mention of the past. This is also suggestive of a silence: we are often simply and instantaneously presented with the given, without traces of memory – a transaction that is abundant with meaning though absent of discursiveness.
Whitman’s Aversion to Talk
We can also find instantiations of Whitman’s interest in wonder and silence through his aversion to talk. “Come now I will not be tantalized…” Whitman writes, for example, in one of the unnumbered sections of the 1855 Leaves of Grass, “you conceive too much of articulation” (53). In passage after passage of what came to be called “Song of Myself,” Whitman emphasizes a continuity and discontinuity within nature, a continuity and discontinuity that is often characterized in terms of how and where language seems to fail. “Logic and sermons never convince, / The damp of the night drives deeper into my soul” (56) – here there is a discontinuity between human nature, but a continuity between Whitman and the natural world that suggests the aesthetics of silence and wonder, which we also find in the lines – “Oxen that rattle the yoke or halt in the shade, what is that you express in your eyes? / It seems to me more than all the print I have read in my life” (37). Or again, in the lines, “Do you guess I have some intricate purpose? / Well I have…for the April rain has, and the mica on the side of a rock has,” Whitman draws a line of continuity between himself, the April rain, and the mica, and yet the ellipses within the lines and the white space following the lines suggest a movement into silence and wonder, as if the articulation of the specific purpose of the rain and the mica could be gestured towards, though not fully articulated (45).
Indeed, for all the bombast, rhetoric and assertion of “Song of Myself,” the poem is haunted by strange elliptical caesuras, by the almost entire lack of dialogue, by the awareness of the way in which language appears to fail to communicate in the larger manner in which nature communicates to us. Put another way, Whitman’s fascination with, desire for, and experience of wonder in the poem precludes the possibility of dialogue, because the experience of wonder, as mentioned above, is primarily a conceptually inactive one on the part of the one experiencing it, and dialogue of course is not silent, but rather “talk” – what Louis Hyde equates in Whitman with “questioning and argument,” i.e. conceptual activities that come loaded with assumptions about the world (214). Hyde goes on to write, “I do not mean [Whitman] is silent – he affirms and celebrates – but his mouth is sealed before the sleepless, pestering questions of the dividing mind” (214-215). Hyde is right regarding Whitman’s aversion to “talk,” but he underestimates the possibility that one can affirm and celebrate through silence in language.
For these reasons, scholarship on Whitman and oratory can overestimate the role that speech plays in “Song of Myself.” There is no “Song of Myself” without language, and the scholarship on the role that oratory plays in Whitman’s poetry, for example, is incredibly persuasive. Still, if we look closer at this language, we find that it is often gesturing towards silence and questioning the utility of speech. Therefore, when Mark Bauerlein writes in “The Written Orator of “Song of Myself”: A Recent Trend in Whitman Criticism” that “Speech…becomes Whitman’s major tactical motif in “Song of Myself” that harmonizes and consolidates society into a unified ‘interpretive community,’” (2) one wonders about the gaps in this statement, the times in the poem when, as Bauerlein points out later in the same article, “a mystical silence overrules language,” suggesting harmony and consolidation, yes, but a silent harmony and consolidation, something stranger than conventional speech (5).
Examples of this silence abound, such as in the lines, “The little one sleeps in its cradle, / I lift the gauze and look a long time, and silently brush away flies with my hand,” or “The youngster and redfaced girl turn aside up the bushy hill, / I peeringly view them from the top” (33). Whitman looks “a long time” at the infant, or “peeringly view[s]” the youngster and the redfaced girl,” but he does not speak, call after the couple, or coo to the infant. We can feel his presence gazing at these people, a presence that is initiated by Whitman’s silent wonder at what he sees. He is reticent to speak; he is so absorbed by what he sees, and by the wonder it provokes, that he doesn’t even want to speak. For example, in a passage in which any sound would be expected, we read:
The big doors of the country-barn stand open and ready,
The dried grass of the harvest-time loads the slow-drawn wagon,
The clear light plays on the brown gray and green intertinged,
The armfuls are packed to the sagging mow:
I am there…I help…I came stretched atop of the load,
I felt its soft jolts…one leg reclined on the other,
I jump from the crossbeams, and seize the clover and timothy,
And roll head over heels, and tangle my hair full of wisps (34).
Even as Whitman “[rolls] head over heels, and [tangles his] hair full of wisps,” he does not shout out with joy, yelp in ecstasy, scream in delight or sound his barbaric yawp. The only approximation to sound we are presented with in this passage is layered over with the sense of touch and proprioception, in the “soft jolts” of the wagon, the seizing of the clover and timothy, and the tangling of the speaker’s hair “full of wisps.” Yet Whitman does not describe these sounds or these touches. The “soft” jolts of the wagon have less to do with volume and more to do with physical balance. In a passage so ostensibly exultant, full of a kind of enthusiastic labor, it is unsettling how silent the stanza is. One feels as though we are watching a poem on mute, expecting sound that does not emit. Christine Gerhardt has also taken up this passage, yet she focuses less on the absence of sound and more on the speaker’s downward movement and immersion into the hay and plants at the end of the stanza. She writes,
It is noteworthy here that the speaker, as he celebrates thick loads of hay and especially the “clover and timothy” that form the basis for this agricultural economy, moves downward from his elevated position, his superiority in difference, to immerse himself in “wisps” of hay and herbs. As such, he calls attention to the grass’s beauty and botanical diversity as much as its economic significance, filling the spaces imaginatively opened by the promise of section 6 to use “the produced babe of the vegetation” “tenderly” (69).
I agree with Gerhardt that Whitman does call attention to the dried grass’s beauty – especially in the line, “The clear light plays on the brown gray and green intertinged,” which evokes an impressionistic, painterly mode of seeing – as well as the grass’s botanical diversity and economic significance. But there is also an undeniable strangeness to the passage, not mentioned by Gerhardt, that seems to have to do with silence and wonder at the sheer fact of experience, that experience is possible at all. The wagon does not creak; there is no mention of the people that the speaker helps, nor of what they might say. This absence of overt human presence (besides the speaker) augments the felt presence of the natural scene, making it vivid, and inflecting the natural world that is portrayed with a quality of givenness. This givenness of the world parallels Whitman’s wonder at the givenness of experience. He relishes nature so much that, as Gerhardt points out, he immerses himself in it. The openness and silence of the “big doors of the country barn,” then, might be seen as a metaphor for Whitman’s approach in this passage, the way in which he invites us into his poem to immerse us in the details of the given, to absorb our experience in and within these qualities. And again, it is as if Whitman is afraid to speak more than he is speaking already – as if even the slightest hint of superfluous dialogue might taint the qualities of the scene as given – its own unique and idiosyncratic flavor, its form of language without language, its intense and radical wonder.
These silent transactions that involve wonder pertain in “Song of Myself” to people as well. These transactions happen often in “Song of Myself.” We only have to look as far as Whitman’s description of the “marriage of the trapper in the open air in the far west,” a marriage during which (at least in what we read) no one speaks, though the bride’s “father and his friends [sit] nearby crosslegged and dumbly smoking” (35). Yet one of the best and most important examples we find in “Song of Myself” of this representation of silence is in the bathers episode. Here, Whitman presents a remarkably intense continuity and discontinuity between the woman observing the bathers and the bathers themselves.
Although Whitman narrates the event in words, there are no words spoken during the event – nothing spoken by the woman, and nothing spoken by the bathers, (though we do hear of the laughter of “the twenty-ninth bather,” i.e. the woman herself, or a composite of herself and the speaker). Yet Whitman’s poetry is able to bring the nonverbal presence of this encounter to light. Indeed, Whitman describes a scene of intense and immense longing, absorption and wonder, in which certain dynamics of the scene correspond to our descriptions of Whitman and nature above: a transaction filled almost excruciatingly, abundantly with meaning; an intense absorption in what is being experienced on the part of Whitman, the lady, and the reader, suggestive of wonder, silence (and here erotic longing); a lack of commentary or discursiveness on what is being experienced; and a very remarkable imagistic vividness. The episode reads,
Twenty-eight young men bathe by the shore,
Twenty-eight young men, and all so friendly,
Twenty-eight years of womanly life, and all so lonesome.
She owns the fine house by the rise of the bank,
She hides handsome and richly drest aft the blinds of the window.Which of the young men does she like the best?
Ah the homeliest of them is beautiful to her.
Where are you off to, lady? for I see you,
You splash in the water there, yet stay stock still in your room.
Dancing and laughing along the beach came the twenty-ninth bather,
The rest did not see her, but she saw them and loved them.
The beards of the young men glistened with sweat, it ran from their long hair,
Little streams passed all over their bodies.
An unseen hand also passed over their bodies,
It descended tremblingly from their temples and ribs.
The young men float on their backs, their white bellies swell to the sun…they do not ask who seizes fast to them,
They do not know who puffs and declines with pendant and bending arch,
They do not think whom they souse with spray (36).
I would argue that the key and climactic line of the passage is “An unseen hand also passed over their bodies, / It descended tremblingly from their temples and ribs.” In this moment, the longing of the woman behind the blinds is intensely and imaginatively actuated; it reaches a loud though silent climax, and yet it is an “unseen hand.” The pathos of the unseen hand is augmented by the unheard voice of the woman. The presence of longing in the woman is conveyed by language that does not offer any commentary or dialogue, just as the bathers themselves are presented without dialogue. It is as if the conditions of meaningfulness that form her desire are contingent to a certain degree on being presented as opposed to being articulated. By going unarticulated, they become more powerful and poignant – their import becomes more significant. Moon writes about this passage,
In representing her wish to do so, the text releases this rich “lady” (Where are you off to, lady?”) from the constraints of gender and class which have hitherto relegated her to “twenty-eight years of womanly [which in this text, at least to begin with, is to say “lonesome”] life.” In the poem’s liminal space, she can have her “fine house” to “hide” in, but also fly out of it, “Dancing and laughing,” at the same time: “You splash in the water there, yet stay stock still in your room” (858).
As Moon points out, the woman is released “from the constraints of gender and class,” but she is also released from the constraints of language, of actually speaking to these men. It is as if her longing and wonder need a certain space to spread itself out, and this need is based to a certain extent on it going unarticulated, although it is presented to us, or presenced for and to us, by Whitman’s speaker. In the same way in which Whitman has been reticent to ruin his observed scenes with too much verbiage, the bathers scene also suggests that the woman is almost happier longing for the men than speaking to them. Therefore, when Moon writes that the passage “makes seen what is unseen (hidden or proscribed desire) through the substitution for it of language and writing,” he neglects to point out that this language and writing contains no dialogue. It is another nonverbal transaction, another scene of silent wonder in which nature is made to “speak.”
This paper is an attempt at illustrating how Whitman performs the difficult notion that “we know more from nature than we can at will communicate.” Whitman’s answer to Emerson’s line is silence and wonder, for these can then gesture at this phenomenon without articulating it. For this reason, we might say that Whitman, despite his lack of conviction in talk, believes we can communicate what nature communicates to us, but only indirectly and non-conventionally, through a form of presentation. Whitman is often therefore akin to a proto-phenomenologist, “bracketing” what he sees and presenting it to us. This bracketing and presentation produces in Whitman and the reader a feeling of wonder, and attendant upon this wonder is silence, because we cannot exactly articulate the import of the wonder, we are conceptually passive during an experience of wonder, and the experience contains no traces of memory. This may seem like an odd argument, especially coming after poststructuralism and its emphasis on language at the expense of what Michael Clune calls in Writing Against Time “an extra-textual reality.” As Clune writes, following an excerpt from Georges Poulet’s “Phenomenology of Reading,”
The rise of poststructuralism, with its twin commitments to the death of the author and the indeterminacy of the text, led to an eclipse of Poulet’s analysis of reading as the recovery of another form of life. For critics influenced by deconstruction, the figural and rhetorical properties of texts block the transmission of an extra-textual reality such as the author’s perceptual experience (29).
This essay is interested, however, in what Clune calls “the transmission of an extra-textual reality,” namely the sense of fullness, of wonder and delight, that Whitman transmits through his work. Whitman immerses us within his own attentional acts, even as he immerses himself into the scenes of his presentation, but he does not offer commentary on these acts. It is interesting, therefore, that Emerson writes in “The Poet” that criticism, “infested with a cant of materialism,” overlooks “the fact, that some men, namely poets, are natural sayers, sent into the world to the end of expression” (449). While Emerson presumably intends the primary meaning of “end of expression” to suggest the means or purposes of expression, he could not have been unaware of the other, secondary meaning of “end of expression” as suggesting something quite different, something that is interested more in the “sensuous fact” and less in commentary on this fact (447). Poets are sent into the world, we might read Emerson’s sentence, for and towards the “end of expression” – for moving towards aspects of expression that incorporate silence and wonder. These aspects are some of the important but neglected reasons readers continue reading Whitman centuries after the publication of the 1855 Leaves of Grass.
Bauerlein, Mark. “The Written Orator of “Song of Myself”: A Recent Trend in Whitman Criticism.” Walt Whitman Quarterly Review 3.3 (1986): 1-14. Print.
Bell, Daniel. “The Art of Judgment.” Mind, 96: 221-244.
Clune, Michael. Writing Against Time. Stanford: Stanford University Press, 2013. Print.
Davis, Theo. Ornamental Aesthetics: The Poetry of Attending in Thoreau, Dickinson, and Whitman. New York: Oxford University Press, 2016.
Emerson, Ralph Waldo. Essays and Lectures. New York: The Library of America, 1983. Print.
Fischer, Philip. Wonder, the Rainbow, and the Aesthetics of Rare Experiences. Cambridge: Harvard University Press, 2003. Print.
Gerhardt, Christine. A Place for Humility: Whitman, Dickinson, and the Natural World. Iowa City: University of Iowa Press, 2014. Print.
Hyde, Lewis. The Gift: Creativity and the Artist in the Modern World. 2nd ed. New York: Vintage Books, 2007. Print.
Jacobson, David. Emerson’s Pragmatic Vision: The Dance of the Eye. University Park: The Pennsylvania State University Press, 1993. Print.
Janaway, Christopher. “Kant’s Aesthetics and the ‘Empty Cognitive Stock.’” Kant’s Critique of the Power of Judgment: Critical Essays, edited by Paul Guyer, Rowman & Littlefield Publishers, Inc., 67-86.
Mack, Stephen John. The Pragmatic Whitman: Reimagining American Democracy. Iowa City: University of Iowa Press, 2002. Print.
Moon, Michael. “The Twenty-Ninth Bather: Identity, Fluidity, Gender, and Sexuality in Section 11 of “Song of Myself.” Leaves of Grass and Other Writings. Ed. Michael Moon. New York City: W.W. Norton & Company, 2002. 855-863. Print.
Whitman, Walt. Complete Poetry and Collected Prose. New York: The Library of America, | <urn:uuid:58d641e4-741a-4e22-8551-60978c217e20> | CC-MAIN-2021-21 | https://www.emptymirrorbooks.com/literature/only-the-lull-i-like-walt-whitmans-image-of-silence | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00254.warc.gz | en | 0.953917 | 7,648 | 2.578125 | 3 |
The Stem Cell Revolution
Today's Christian Doctor - Winter 2001
"Stem cells have the ability to differentiate into a variety of tissues. This means, through careful engineering, stem cells could be used to repair a damaged brain or heart, rebuild a knee, restore injured nervous system connections, treat diabetes and much more. That's the potential power of stem cells, and the reason the University of Minnesota is investing greatly in its Stem Cell Institute - the first of its kind. The Institute today will change medicine as we know it tomorrow."
Medical Bulletin - University of Minnesota, 2001
There is a revolution going on in medicine that will likely have as great an impact, if not greater, than the dawn of the antibiotic era 60 years ago. As one writer commented, it is "as though they had stumbled upon a packet of magic seeds that, depending on where they were planted, could grow carrots, broccoli, corn or cabbage."1 These magic seeds are stem cells that theoretically can give rise to any of the 210 different types of tissues in the human body and can divide and multiply for an indefinite period of time. This opens a new frontier of possibilities. Could stem cells be guided to produce islet cells for transplant into diabetics, programmed to replace heart muscle damaged by an infarction or even be used to reconnect a damaged spinal cord in a paraplegic? Maybe, but not tomorrow or next year, as many might believe after listening to the media.
As a Christian dentist or physician, your patients, friends and church look to you for guidance on complex issues that involve science and morality. My goal is to give you the scientific, biblical and ethical information you need to speak authoritatively in this case. I also want to help you think through the complex ethical questions surrounding this issue. Though the Christian Medical & Dental Associations (CMDA) does not yet have an official statement on stem cells, many of our ethical statements address the issues raised.
To begin with, some definitions will be helpful. Stem cells are thought to be totipotent, pluripotent or multipotent.
- "totipotent" stem cells, such as a fertilized human egg, can become an entire human being;
- "pluripotent" stem cells, such as those found in a seven-day embryo (a blastocyst), can develop into any body cell type but can't become an entire human being;
- "multipotent" stem cells can only differentiate into the same tissue type. For example, a bone marrow stem cell can differentiate into a monocyte, WBC or lymphocyte but not into kidney, heart muscle or brain.
Have you got that? If you do, you are only three years out of date, because it is not nearly that simple. Recent studies have demonstrated that bone marrow stem cells can differentiate into other tissue types, and there is evidence that pluripotent stem cells from a seven-day blastocyst could develop into an embryo.2,3,4
Sources of Stem Cells
There are five, and perhaps soon six, sources of stem cells.
- Embryonic Stem Cells: harvested from the inner cell mass of the blastocyst seven to ten days after fertilization and early cell differentiation. The embryo at this stage may be up to 200 cells in size.5
- Fetal Stem Cells: often taken from the germline tissues that will make up the ovaries or testes of aborted fetuses.6
- Umbilical Cord Stem Cells: taken from umbilical cord blood, which contains stem cells similar to those found in the bone marrow of newborns.7
- Placenta-Derived Stem Cells: Anthrogenesis Corporation recently announced the development of a commercial process that can extract ten times as many stem cells from a placenta as from cord blood.8
- Adult Stem Cells: tissues like bone marrow, lung, pancreas, brain, breast, fat, skin and even tooth pulp (we can thus draw you dentists into this ethical issue!) contain stem cells that have been isolated.9 In the public debate, umbilical cord and placenta stem cells are included in the term "adult stem cells," though they are not adult at all.
- De-differentiation of Somatic Cells: PPL Limited, the Scottish biotech company that developed "Dolly," is trying to create stem cells by the "de-differentiation" of somatic cells. Using skin or other cells, they hope to cause a cell to revert back to its stem cell ancestor.
How Stem Cells are Used
Presently, there are five proposed stem cell applications.
- Functional Genomics: scientists will use them to try to understand the complex events of cell development.
- Drug Testing: stem cells could allow scientists to test new drugs using human cell lines, which could hasten new drug development.
- Cell Therapy: if cells could be guided to differentiate into specific cell populations, they could be used to treat diseases characterized by cell death, such as diabetes, multiple sclerosis, myocardial infarctions or strokes.
- Gene Therapy: these cells' ability to integrate and generate new cells within an organ makes stem cells prime candidates to deliver gene therapy to replace genetically defective cells.
- Organ Generation: stem cells could become the seeds of an unlimited source of lab-grown organs for transplantation.10
There are already 15,000 adult stem cell therapies carried out in this country each year. Bone marrow derived stem cells are used in cancer and autoimmune treatment protocols to replace/repair patients' hemopoietic systems after high dose chemotherapy or radiation. These treatment protocols are used to treat brain tumors, retinoblastoma, ovarian cancer, sarcomas, multiple myeloma, leukemia, breast cancer, neuroblastoma, renal cell carcinoma and juvenile rheumatoid arthritis, as well as other diseases.11 Scientists thus have broad experience in many aspects of adult stem cell therapy.
The Current Debate
The debate that has raged in our country focuses around whether federal funding should be used to fund research that requires the destruction of human embryos. CMDA has maintained that federal funds should not be allocated in this manner because such funding is illegal, immoral and unnecessary.
It is illegal because the "Dickey Amendment," which has been attached yearly to National Institutes of Health budgets, states, "None of the funds made available in this Act may be used for research in which a human embryo or embryos are destroyed, discarded or knowingly subjected to risk of injury...."12 Unfortunately, under the Clinton administration, an end run was done around this prohibition through guidelines that stated that the NIH could fund embryonic stem cell research as long as federal funds weren't used to actually destroy the embryos. These guidelines circumvented the intent of the Act and created a powerful incentive for destroying human embryos.
Destructive embryonic research is immoral as well. At its core, this debate is not about science but about human rights. Proponents of stem cell research know it is impossible to take away the inalienable right to life from human beings, so they are taking away the embryos' humanity by referring to them as "clumps of cells" or "primordial masses of tissue." In interview after interview, prominent scientists have testified that embryos are not human beings at all. Yet a cursory consultation of the dictionary reveals that a human being is: "A member of the genus Homo and especially of the species H. sapiens."13 If human embryos are not human beings, then what are they? Are they monkeys, pigs or cows? Of course not; they are human beings.
Though there are few atrocities whose scope compares to slavery, there are some disturbing parallels between the justifications made for slavery 150 years ago and what is being said today to justify the destruction of embryos in the process of stem cell research. Slavery was justified in two ways. It was alleged that African-Americans were subhuman because they didn't look, think or speak like Caucasian Americans. Once humanity is taken away from individuals, they then can be treated like property. They can be enslaved and abused on a plantation or confined to a lab till they die of old age or are dissected under a microscope.
Slavery was also justified by predictions of economic ruin and damage to the well-being of the general population if the slaves were freed. It was claimed that the United States would never take its rightful place among the world's great powers without a cheap source of manual labor. President Lincoln didn't buy this utilitarian argument. Despite enormous political pressures and a civil war, he issued the Emancipation Proclamation. Though it ultimately cost him his life, he showed moral statesmanship. He is now seen as the greatest U.S. President of all time. Of course, American ingenuity triumphed and today we have the greatest economy in world history.
In the stem cell debate, it is also argued that without federal funding, the United States will lose its lead in science. We're told that research will move offshore and the best scientists will move to other countries. These are gross overstatements, since only around half a dozen biotech companies out of close to a thousand are involved in embryonic stem cell research and twice as much investment money is pouring into companies doing adult stem cell research.
Biblical, Ethical and Moral Considerations
What does the Bible say about this issue? Most importantly, the Bible says that man is made in God's image (Genesis 1:26-27, 9:5-7). God's image is not based on human capacity such as the ability to reason or have relationships. The image of God is something humans possess as part of their nature or essence. The Scriptures describe a continuity of human personhood from before birth (Psalm 51:5, 139:13-16). Man is not seen as just another animal. God gave humankind dominion over animals (Genesis 1:26). The Bible also teaches that we are not to unjustly take human life (Deuteronomy 5:17). What drives these points home is the fact that Christ's incarnation began with a miraculous fertilization (Luke 1:26-38, 43). Our Savior was once a one-cell embryo.
There are many ethical principles that argue against destroying embryos. The ethical principle of autonomy states that no one may act in a way that will affect another person without his or her informed consent. There is no greater violation of autonomy than to take a person's life. Our Declaration of Independence states that "All men are created equal and endowed by their Creator with certain inalienable rights, among which are the right to life, liberty and the pursuit of happiness." The right to life is "inherent" in the sense that it is bound to the human essence of a person. It cannot be bestowed or taken away by another person, legislative body or court unless a person has forfeited his or her right to life by killing someone with intent and forethought.
Sacrificing embryos for their stem cells also crosses the continental divide of medical ethics. The foundational ethical principle of medicine is to "do no harm." Thus medicine has prohibited harmful research on humans. The Nuremberg Code, adopted after World War II atrocities involving the elite physicians and medical institutions of Germany, states, "No experiment should be conducted where there is a prior reason to believe that death or disabling injury will occur."14 The NIH's own "Guidelines for the Conduct of Research Involving Human Subjects" states, "The voluntary consent of the human subject is absolutely essential." It then excerpts the Nuremberg Code and states, "No experiment should be conducted where there is a priori reason to believe that death or disabling injury will occur."15 Though the American Medical Association (AMA) sanctions embryonic research, its standards for investigation state, "It is fundamental social policy that the advancement of scientific knowledge must always be secondary to the primary concern for the individual."16 The Council of Europe's Convention on Human Rights of 1997, the only international code of bioethics, prohibits destroying embryos for research and regulates experimentation that can take place on them.17
It is argued that: "These frozen embryos will die anyway, so what difference does it make? Shouldn't some good come out of their existence?" There is a significant moral difference between individuals dying natural deaths versus having their lives taken by another. If a patient is dying of cancer, that is a tragedy, but it doesn't give a doctor the right to harvest their heart for transplant. In the first instance the patient dies of natural causes; in the second, the doctor takes the patient's life. The doctor's action would not be justifiable, even if the patient's heart were used to help someone else. It should also be noted that it is impossible for parents to give true "informed consent" to have their child killed. They could no more do this on the basis of the "best interests" of their embryo than they could authorize it for their five-year-old.
What about the estimated 100,000 frozen embryos in U.S. IVF clinics, many of whom are not abandoned or unwanted? They wait in suspended animation for the decision of their parents. A more humane alternative to the destruction of these embryos through research is for parents to decide to put them up for adoption. This would provide a wonderful alternative for the two million infertile couples in this country. It can be argued that there will be much better maternal-child bonding if the adoptive mother carries the child through gestation and delivery. The "Snowflakes" adoption program of the Nightlight Adoption agency in California already has done this successfully, despite a paucity of laws governing embryo adoptions. John and Marlene Strege gave testimony before Congress in July 2001 while holding their child, Hannah, who had been adopted as an embryo.
What is a person?
Are these young embryonic human beings persons? Adult human beings are the result of continuous growth that begins at fertilization. There is no morally relevant break in their development. Personhood does not depend on having abilities such as the power to reason, self-awareness, a certain level of intellect or consciousness. These capacities may be latent due to certain conditions, but the internal essence of the human being is unchanged.18 Developmental markers proposed by some for personhood are arbitrary and capricious. These markers include:
- Implantation: The essential nature of a person is not dependent on hormonal signals, the survival rate of embryos or whether twinning can occur, or has occurred.
- Brain Development: Brain wave activity is not a marker of life/personhood like it is a marker for death. Death of the brain is irreversible. The embryo has the capacity to develop full brain activity if it is allowed to do so.
- Pain Sensation: This "marker" confuses the sensation of harm with the reality of harm. A person is harmed, whether or not they feel their leg being needlessly cut off.
- Quickening: Personhood is not dependent on a mother's ability to feel her baby moving.
- Birth: Birth is just a change of location and degree of dependency. A baby is more dependent on the efforts of another after birth than it is before.19
Peter Singer, a professor at Princeton, has arbitrarily chosen one year after birth as the time to confer personhood. He states that a mature monkey has more moral value than a newborn baby does because the monkey can reason, is self-conscious and has a higher level of intellect.
Legally, what is a person? At present, 38 states recognize that life begins at conception20 and 25 states already regulate embryo/fetal research. Ten states ban harmful embryonic research altogether.21 Louisiana designates IVF-derived embryos as judicial persons.22 Maine, Michigan and Massachusetts impose up to five years of imprisonment for harmful research on live embryos or fetuses.23 Five states restrict the sale of embryos; five more restrict sale for research, and eight others prohibit sale for any reason.24
Adult Stem Cell Research is More Promising
The good news is that there is an ethical alternative to embryonic stem cell research that has gotten only token recognition in the media and has been downplayed by prominent scientists. Adult stem cell research holds as much promise as embryonic stem cell research, if not more, and we are likely to get to our therapeutic goals more quickly if the federal government puts its funding into this area.25,26,27,28
There have been no successful therapies utilizing embryonic stem cells in humans. Embryonic stem cells show signs of being genetically unstable and they are difficult to culture. It is hard to control their differentiation and it is difficult to get a pure cell culture of one cell type.29 The great advantage of embryonic stem cells is that they can differentiate into 210 different types of tissue. This is also their greatest weakness. How does a scientist direct development down just one path? Geron researchers, at the December 2000 meeting of the Society of Neuroscience, reported that they had attempted to transplant human embryonic stem cells into the brains of rats. The embryonic stem cells did not differentiate into brain cells. They stayed in disorganized clusters and brain cells near them began to die. Many reports in the lay press of embryonic stem cell success in animal models misleadingly omit the fact that these studies were done with fetal stem cells that had already differentiated into neural or other tissue stem cells.
Most people do not realize that using a few "left over" embryos from in-vitro fertilization will only allow scientists to do research. To do embryonic stem cell therapy, either tens of thousands of embryonic stem cell lines will have to be developed or the recipient patient will need to be cloned to assure histocompatibility. The clone, the twin of the patient, will be grown to the blastocyst stage in culture and then cannibalized for serviceable parts. The scientist would then have to manipulate the stem cells into the right tissue and transplant them back into the patient.30
By contrast, adult stem cells have distinct advantages. They can differentiate into many types of tissue regardless of their origin. Research with mice showed that adult stem cells can grow into heart, lung, intestine, kidney, liver, nervous tissue, muscle and other tissues.31 These cells are much more "plastic" than once thought. Bone marrow cells can become heart, brain, bone and kidney. Neural stem cells can become blood or retina. Pancreatic duct stem cells differentiated into islet cells and totally reversed diabetes in mice in a study at the University of Florida,32 while a much-vaunted study using embryonic stem cells showed that the islet-like structures produced only 1/50th of the insulin needed by a diabetic mouse; all the diabetic mice died.33 One adult marrow stem cell was able to completely repopulate the bone marrow of an immune deficient mouse.34 This points to an unlimited life span for adult stem cells. These cells are also easier to culture.35 Because the cells in question can be the patient's own cells, there is no transplant rejection or risk of genetic or viral disease transfer. Most importantly, adult stem cells seem to convert into the type of cells needed by the environment in which they are placed. If a marrow stem cell is put into a damaged kidney, it converts to a kidney type stem cell and begins to repair the damage. These cells also seem to migrate to damaged areas due to some unknown chemical signal.36 In some cases, little or no lab manipulation seemed to be needed.
As Richard Doerflinger, spokesperson for the Conference of Bishops of the Catholic Church, testified before the Senate on July 18, 2001, adult stem cells "have repaired damaged corneas, restoring sight to people who were legally blind; they have healed broken bones and torn cartilage in clinical trials; they are being used to help regenerate heart tissue damaged by a cardiac arrest."37 Since stem cells have been found in fat, patients soon may have the side benefit of a liposuction to reduce their weight as stem cells are harvested to repair cardiac muscle destroyed by their infarction.
Conflicts of Interest and Current Events
All these data leave one question unanswered. Why have scientists from Harvard, Stanford, the NIH and other prestigious institutions said unequivocally that embryonic stem cells are the answer to our therapeutic dilemmas? An investigative report by Neil Munro in the National Journal explains that it may be "the pecuniary interests of the physicians and scientists" that leads them to make these pronouncements. Three scientists from the above institutions were quoted 216 times in the national press. In only 17 instances was it mentioned that they were shareholders, founders or board members in private biotech companies that would benefit directly or indirectly from federal funding. With such conflicts of interest, it is impossible for them to be unbiased. Johns Hopkins' John Gearhart was co-discoverer of embryonic stem cells while working for Geron Corporation, a leading biotech firm. Geron has a profit sharing agreement with Hopkins, as does the University of Wisconsin, where James Thomson, the other co-discoverer, works. All of these scientists were special contributors to the NIH report on stem cells delivered to the President. A media that usually investigates and reports any conflict of interest has ignored this commingling of science and business. And it is not just scientists. Former Senator Connie Mack, R-Fla., who vigorously promotes federally funded embryonic stem cell research, is on the board of two biotech firms.38
Just as this article was being written, President Bush made his television address on stem cells. A number of prominent Christian leaders praised his moral statesmanship. President Bush's decision established that federal funding would only be used for experimentation with 60-plus stem cell lines already created. It wouldn't fund further destruction of embryos or incentivize scientists to destroy them. President Bush also put significant funding into adult stem cell research and appointed Leon Kass, a conservative bioethicist, to chair the President's Council on Bioethics. CMDA was encouraged by President Bush's courage in the face of enormous political pressure, but we are also concerned. For the first time in the history of American medicine, the federal government is now funding research on human tissue obtained in an immoral manner. Though the bioethical dam didn't break, the President's decision has put a leaking crack in its face. Pressure will increase that will further erode and perhaps rupture this major ethical barrier. Sixty cell lines will not last indefinitely and though they will facilitate research, they will not allow therapy. While a definitive decision has been delayed, there will be enormous pressure to legalize the mass destruction of embryos if there is any breakthrough in research with embryonic stem cells. The President's decision says nothing about the status of embryos; therefore, the private destruction of young human beings will continue with private funds. The very scientists, universities and biotech corporations that killed embryos to start their cell lines will now be financially rewarded for doing so.
Private embryo commodification will continue and expand. Geron is already buying and selling embryos from IVF clinics. The Jones Institute in Virginia is paying donors for eggs and sperm, with their explicit permission to create embryos who will be destroyed. Advanced Cell Technology is trying to clone embryos using somatic cells and cow ova to create embryo farms for commercial application. England looks on with amusement. Their government has already sanctioned therapeutic cloning and destructive embryonic research. They hope to be the first to bank the proceeds of trafficking in human life.
What Can We, Then, Do?
How should Christian doctors respond? First of all we should try to correct the disinformation campaign that is going on in our own areas of influence. The debate has been framed into a false dichotomy. The public believes it has two choices. Either they must accept embryonic stem cell research or forgo lifesaving breakthroughs for themselves or their loved ones. As physicians, we are especially well equipped to educate our patients, our friends and our churches.
You can find a PowerPoint presentation to aid you in doing this at CMDA's Web site: http://www.cmda.org. It contains the resources used to create this article, and much more.
We also can write op-ed pieces, volunteer to be interviewed on local TV and radio stations and contact our elected representatives. CMDA has been providing media training to members and scheduling them for media opportunities. In the first seven months of 2001, CMDA had over 540 media "hits," so there is plenty to do. We are also working to place members on the President's Council on Bioethics and other commissions and councils of the Department of Health and Human Services.
Our nation is making decisions that will affect the future of science in this country and its ethical foundations for decades to come. What sort of society are we creating for our children and grandchildren? We are in a stem cell revolution. Unfortunately, a lot of human beings, people's children, are being killed and many more are at risk. It is our job to do what we can to save them. We must honor life at every stage of development.
1 Verfaille, Catherine. "Seeds of Hope." The Stem Cell Revolution. Winter 2001: 4.
2 Human embryonic stem cells differentiated into three primary germ layers and trophoblast. Thomson, J.A. et al. "Embryonic stem cell lines derived from human blastocysts." Science 282 (6 Nov. 1998):1145-47.
3 Human embryonic stem cells differentiated in culture to extraembryonic (trophoblast) and somatic stem cell lines. Reubinoff, B.E. et al. "Embryonic stem cell lines from human blastocysts: somatic differentiation in vitro." Nature Biotechnology 18 (April 2000):399-404.
4 ES cells from a monkey differentiated into three primary germ layers and trophoblast. Thomson, J.A. et al. "Isolation of a primate embryonic stem cell line." Proc. Natl. Acad. Sci. 92 (Aug. 1995): 7844-7848.
5 Thomson, James. "Embryonic stem cells derived from human blastocysts." Science 282 (6 Nov. 1998): 1145-47.
6 Shamblott, Michael, et al. "Derivation of pluripotent stem cells from cultured human primordial germ cells." PNAS 95 (Nov. 1998): 13726-31.
7 Amos, Jonathan. "Umbilical cords to repair brain damage." BBC News 19 Feb. 2001. http://news.bbc.co.uk/hi/english/in_depth/sci_tech/2001/san_francisco/newsid_11.../1177766.st.
8 "Placenta May Be Life-Affirming Alternative Source for Stem Cells." Associated Press 12 April 2001. Http://www.prolifeinfo.org.
9 For articles documenting these sources of stem cells go to <adultstemcells.org> or download my PowerPoint presentation "Stem Cells: Potential and Problems" at http://www.cmdahome.org.
10 "Stem cells: A Primer." National Institute of Health. May 2000. Http://www.nih.gov/news/stem%20cell/primer.htm.
11 See http://www.stemcellresearch.org.
12 Section 511, Public Health Services Act.
13 The American Heritage(r) Dictionary of the English Language, Third Edition (c) 1996 by Houghton Mifflin Company.
14 From Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2, pp. 181-182. Washington, D.C.: U.S. Government Printing Office, 1949.
15 Consult the NIH's Office of Human Subjects Research Web site: http://ohsr.od.nih.gov/.
16 See http://www.amaassn.org/apps/pf_online/pf_online?f_n=browse&doc=policyfiles/CEJA/E2.07.HTM&&s_t=&st_p=&nth=1&prev_pol=policyfiles/CEJA/E-1.02.HTM&nxt_pol=policyfiles/CEJA/E-2.01.HTM&
17 Council of Europe, Convention on Human Rights and Biomedicine, Chapter I, Article 2: "Primacy of the human being" and Chapter V, Article 18(2): "Research on embryos in vitro" (1997).
18 For philosophical arguments for personhood see Rae, Scott B. and Paul M. Cox. Bioethics: A Christian Approach in a Pluralistic Age Grand Rapids: Eerdmans, 1999. 159-165. A summary of their arguments is found in the Power Point presentation "Stem Cells: Potential and Problems" available for download at http://cmda.org.
20 Casey, Samuel B. "The Unchosen and Frozen: An Essay on the Need for Legislative Guidance Most Protective of Human Life in Deciding the Fate of Frozen Human Embryos in a Cold and Hard World." in Kilner, John, et al., ed. The Reproduction Revolution: A Christian Appraisal of Reproductive Technologies, Sexuality, and the Family. Grand Rapids: Eerdmans, 2000.
21 Andrews, Lori. "State Regulation of Embryo Stem Cell Research [draft]" (unpublished manuscript, commissioned by the National Bioethics Advisory Commission), p. 3.
22 Casey, op. cit., note 7.
23 Andrews, op. cit., p. 4, n. 16.
24 Andrews, op. cit., p. 12, citing Minn. Stat. Ann. § 145.422(3).
25 "The emerging truth in the lab is that pluripotent stem cells are hard to rein in. The potential that they would explode into a cancerous mass after stem cell transplant might turn out to be the Pandora?s box of stem cell research." Jonietz, Erika. "Innovation: Scouring Stem Cells" Technology Review. 2 March, 2001 http://209.58.177220/articles/jan01/innovation_jonietz
26 "Cell therapies using autologous (adult) donor cells hold tremendous promise for the treatment of both acquired and inherited diseases involving tissue degeneration and cellular dysfunction." Kaji, E.H. and Leiden, J.M. "Gene and Stem Cell Therapies." JAMA 285-5 (7 Feb. 2001): 548.
27 "The potential of tissue engineering using undifferentiated stem cells to replace organ function is even more profound. For example, it may be feasible to use pancreatic stem cells to replace islet function. Neural stem cells from adult animals have been stimulated to form tissues from all 3 germ layers ...." Niklason, L.E. and Robert Langer. "Prospects for Organ and Tissue Replacement." JAMA 285-5 (7 Feb 2001):574 -575.
28 "Easily accessible cells from bone marrow might someday be used to treat a wide range of neurological diseases - without raising the ethical concerns that accompany the use of embryonic cells." Stem Cells: New Excitement, Persistent Questions." Science 290 (1 Dec. 2000): 572.
29 Vogel, G. "Stem cells: New excitement, persistent questions." Science 290 (1 Dec 2000): 1672-1674.
30 "Now, a promising solution, which could potentially revolutionize transplantation medicine, would be to combine this embryonic stem cell technology with nuclear transfer technology, or cloning technology... making cells that would e fully compatible with the human patient." Dr. Michael West, President Advanced Cell Technology, in testimony before the Senate Subcommittee on Labor, Health and Human Services and Education on Dec. 2 1998.
31 For hundreds of references confirming adult stem cells' potential, go to http://www.stemcellresearch.org
32 V. K. Ramiya, et al, "Reversal of insulin-dependent diabetes using islets generated in vitro from pancreatic stem cells," Nature Medicine 6, 278-282, March 2000.
33 N. Lumelsky et al., "Differentiation of embryonic stem cells to insulin-secreting structures similar to pancreatic islets," Science Express. See http://www.sciencexpress.org. Published online 26 April 2001; doi:10.1126/science.1058866.
34 Bhatia, M. et al. "Purification of primitive human hematopoietic cells capable of repopulating immune-deficient mice." Proc. Natl. Acad. Sci. USA 94 (May 1997): 5320-25.
35 Cho, R. H. and C.E. Muller-Sieburg. "High frequency of long-term culture-initiating cells retain in vivo repopulation and self-renewal capacities." Exp. Hematol. 28 (1 Sept. 2000): 1080-86.
36 Eglitis, M.A. et al. "Targeting of marrow-derived astrocytes to the ischemic brain." Neuroreport 10 (26 April 1999): 1289.
37 Testimony of Richard M. Doerflinger on behalf of the Committee for Pro-Life Activities United States Conference of Catholic Bishops before the Subcommittee on Labor, Health and Human Services, and Education, Senate Appropriations Committee Hearing on Stem Cell Research, July 18, 2001, p. 5.
38 Munro, Neil. "Mixing Business with Stem Cells." National Journal. (17 July 2001). 2348-2349. | <urn:uuid:ba9c03b8-56d0-4edc-90a3-f21bfb602d8d> | CC-MAIN-2021-21 | https://cmda.org/article/the-stem-cell-revolution/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00136.warc.gz | en | 0.936544 | 7,048 | 2.765625 | 3 |
Life as a Surgeon
Surgical careers begin long before one is known as a surgeon. Medicine in general, and surgery in particular, is competitive from the start. As the competition begins, in college or earlier, students are confronted with choices of doing what interests them and what they may truly enjoy vs. doing what is required to get to the next step. It is easy to get caught up in the routine of what is required and to lose track of why one wanted to become a doctor, much less a surgeon, in the first place. The professions of medicine and surgery are vocations that require extensive knowledge and skill. They also require a high level of discretion and trustworthiness. The social contract between the medical profession and the public holds professionals to very high standards of competence and moral responsibility. Tom Krizek explains that a profession is a declaration of a way of life "in which expert knowledge is used not primarily for personal gain, but for the benefit of those who need that knowledge."
For physicians, part of professionalism requires that when confronted with a choice between what is good for the physician and what is good for the patient, they choose the latter. This occurs and is expected sometimes to the detriment of personal good and that of physicians’ families. Tom Krizek even goes so far as to question whether surgery is an ‘‘impairing profession.’’ This forces one to consider the anticipated lifestyle. In sorting this out, it is neither an ethical breach nor a sign of weakness to allocate high priority to families and to personal well-being. When trying to explain why surgery may be an impairing profession, Krizek expands with a cynical description of the selection process. Medical schools seek applicants with high intelligence; responsible behavior; a studious, hard-working nature; a logical and scientific approach to life and academics; and concern for living creatures. He goes further to explain that in addition to these characteristics, medical schools also look for intensity and drive, but are often unable to make distinctions among those who are too intense, have too much drive, or are too ingratiating.
There are many ethical challenges confronting medical students. As they start, medical students often have altruistic intentions, and at the same time are concerned with financial security. The cost of medical education is significant. This can encourage graduates to choose specialty training according to what will provide them the most expedient means of repaying their debt. This can have a significant, and deleterious, impact on the health care system in that the majority of medical graduates choose to pursue specialty training, leaving a gap in the availability of primary care providers. As medical students move into their clinical training, they begin interacting with patients. One concern during this time is how medical students should respond and carry on once they believe that a mistake on their part has resulted in the injury or death of another human being. In addition, the demands of studying for tests, giving presentations, writing notes, and seeing patients can be overwhelming. The humanistic and altruistic values that medical students have when they enter medical school can be lost as they take on so much responsibility. They can start to see patient interactions as obstacles that get in the way of their other work requirements. During their clinical years, medical students decide what field they will ultimately pursue. For students to make an informed decision about a career in surgery, they need to know what surgeons do, why they do it, and how surgery differs from other branches of medicine. It is important for them to be aware of what the life of a surgeon entails and whether it is possible for them to balance a surgical career with a rewarding family life.
Beginning residents are confronted with a seemingly unbearable workload, and they experience exhaustion to the point where the patient may seem like "the enemy." At the same time, they must learn how to establish strong trusting relationships with patients. For the first time, they face the challenge of accepting morbidity and death that may have resulted directly from their own actions. It is important that residents learn ways to communicate their experience to friends and family, who may not understand the details of a surgeon's work but can provide valuable support.

The mid-level resident confronts the ethical management of ascending levels of responsibility and risks, along with increasing emphasis on technical knowledge and skills. It is at this level that the surgical education process is challenged to deal with the resident who does not display the ability to gain the skills required to complete training as a surgeon. Residents at this level also must deal with the increasing level of responsibility to the more junior residents and medical students who are dependent on them as teacher, organizer, and role model. All of this increasing responsibility comes at a time when the resident must read extensively, maintain a family life, and begin to put long-range plans into practice in preparation for the last rotation into the chosen final career path.

The senior surgical resident should have acquired the basics of surgical technique and patient management, accepting nearly independent responsibility for patient care. The resident at this level must efficiently and fairly coordinate the functioning team, engage in teaching activities, and work closely with all complements of the staff. As far as ethics education is concerned, residents at this stage should be able to teach leadership, teamwork, and decision-making. They should be prepared to take on the value judgments that guide the financial and political aspects of the medical and surgical practice.
The Complete Surgeon
The trained surgeon must be aware of the need to differentiate between the business incentives of medical care and doing what is right for a sick individual. As financial and professional pressures become more intense, the challenge increases to appropriately prioritize and balance the demands of patient care, family, education, teaching, and research. For example, how does the surgeon deal with the choice between attending a child's graduation or operating on an old patient who requests him rather than an extremely well-trained associate who is on call? How many times do surgeons make poor choices with respect to the balance of family vs. work commitments? Someone else can competently care for patients, but only parents can be uniquely present in the lives of their children. Time flies, and surgeons must often remind themselves that their lives and the lives of their family members are not just a dress rehearsal.
Knowing When to Quit
A 65-year-old surgeon who maintains a full operating and office schedule, is active in community and medical organizations, and has trained most of her surgical colleagues is considering where to go next with her career. Recently, her hospital acquired the equipment to allow robotic dissection in the area where she does her most complicated procedures. She has just signed up to learn this new technology, but is beginning to reflect on the advisability of doing this. How long should she continue at this pace, and how does she know when to slow down and eventually quit operating and taking the responsibility of caring for patients? Murray Brennan summarizes the dilemma of the senior surgeon well. The senior surgeon is old enough and experienced enough to do what he does well. He yearns for the less complicated days when he worked and was rewarded for his endeavors. He becomes frustrated by restrictive legislation, the tyranny of compliance, and the loss of autonomy. Now regulated, restricted, and burdened with compliance, with every medical decision questioned by an algorithm or guideline, he watches his autonomy of care be ever eroded. Frustrated at not being able to provide the care, the education, and the role model for his juniors, he abandons the challenge.
Finishing with Grace
Each surgeon should continuously map a career pathway that integrates personal and professional goals with the outcome of maintaining value, balance, and personal satisfaction throughout his or her professional career. He or she should cultivate habits of personal renewal, emotional self-awareness, and connection with colleagues and support systems, and must find genuine meaning in work to combat the many challenges. Surgeons also need to set an example of good health for their patients. Maintaining these values and healthy habits is the work of a lifetime. Rothenberger describes the master surgeon as a person who not only knows when rules apply, recognizes patterns, and has the experience to know what to do, but also knows when rules do not apply, when they must be altered to fit the specifics of an individual case, and when inaction is the best course of action. Every occasion is used to learn more, to gain perspective and nuance. In surgery, this is the rare individual who puts it all together, combining the cognitive abilities, the technical skills, and the individualized decision-making needed to tailor care to a specific patient’s illness, needs, and preferences despite incomplete and conflicting data. The master surgeon has an intuitive grasp of clinical situations and recognizes potential difficulties before they become major problems. He prioritizes and focuses on real problems. He possesses insight and finds creative ways to manage unusual and complex situations. He is realistic, self-critical, and humble. He understands his limitations and is willing to seek help without hesitation. He adjusts his plans to fit the specifics of the situation. He worries about his decisions, but is emotionally stable.
Cystic disorders of the bile ducts, although rare, are well-defined malformations of the intrahepatic and/or extrahepatic biliary tree. These lesions are commonly referred to as choledochal cysts, which is a misnomer, as these cysts often extend beyond the common bile duct (choledochus).
Cystic disorders of the bile ducts account for approximately 1% of all benign biliary disease. Biliary cysts are four times more common in females than in males. The majority of patients (60%) with bile duct cysts are diagnosed in the first decade of life, and approximately 20% are diagnosed in adulthood.
Cystic dilatation of the bile ducts occurs in various shapes (fusiform, cystic, saccular, and so on) and in different locations throughout the biliary tree. The most commonly used classification is the Todani modification of the Alonso-Lej classification.
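The Todani scheme mentioned above can be summarized as a simple lookup. The sketch below encodes the commonly cited five-type scheme; the type descriptions are standard textbook summaries, not drawn from this text, and the function name is illustrative.

```python
# Hedged sketch: the Todani modification of the Alonso-Lej
# classification as a lookup table. The descriptions are standard
# textbook summaries (assumed, not taken from the passage above).
TODANI_TYPES = {
    "I":   "Cystic or fusiform dilatation of the extrahepatic bile duct",
    "II":  "Diverticulum of the extrahepatic bile duct",
    "III": "Choledochocele (dilatation of the intraduodenal bile duct)",
    "IVa": "Multiple cysts of both intra- and extrahepatic ducts",
    "IVb": "Multiple cysts of the extrahepatic ducts only",
    "V":   "Intrahepatic cysts only (Caroli disease)",
}

def describe(todani_type: str) -> str:
    """Return the textbook description for a given Todani type."""
    return TODANI_TYPES[todani_type.strip()]

print(describe("V"))
```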
The exact etiology of biliary cysts is unknown.
The initial clinical presentation varies significantly between children and adults. In children, the most common symptoms are intermittent abdominal pain, nausea and vomiting, mild jaundice, and an abdominal mass. The classical triad of abdominal pain, jaundice, and a palpable abdominal mass associated with choledochal cyst is observed in only 10% to 15% of children, and it is rarely seen in adults. Symptoms in adults often mimic those seen in patients with biliary tract disease or pancreatitis.
The definitive treatment of bile duct cysts usually includes surgical excision of the abnormal extrahepatic bile duct with biliary-enteric reconstruction. This approach relieves biliary obstruction, preventing future episodes of cholangitis, stone formation, or biliary cirrhosis and thus interrupting the inflammatory liver injury cycle. It also stops pancreatic juice reflux and, more importantly, removes tissue at risk of malignant transformation.
Cancer of the rectum is the fifth most common form of cancer in adults worldwide. In 2012, an estimated 40,300 new rectal cancers were expected to be diagnosed in the US, with a median age at diagnosis of 69 years. Five-year survival rates for rectal cancer are high for early-stage disease (90% for Stage I disease) but drop significantly with worsening stage (7% for metastatic Stage IV disease). Recently, advances in neoadjuvant and adjuvant therapy have decreased the rate of local recurrence and improved long-term survival for some patients. Although the treatment for rectal cancer has become increasingly multimodal, surgical excision of the primary tumor remains essential for eradication of disease.
For a long time there has been a debate about the best surgical approach to early stage rectal cancer, whether treatment should involve radical excision (excision of the rectum) or local excision (tumor alone). Proponents of radical surgery argue that excision of the rectum with its surrounding lymphatic drainage offers the best chance for cure. On the other hand, advocates of local excision feel that a less-aggressive approach can avoid the potential ramifications of major pelvic surgery such as sepsis, poor anorectal function, sexual dysfunction, and difficulty with urination and can eliminate the potential need for a permanent stoma. Although the debate has gone back and forth on the adequacy of local excision, there is a growing body of scientific data that suggests that local excision can be sufficient in patients with early rectal cancer of the mid and distal rectum with good histologic features and preoperative imaging (computed tomography, magnetic resonance imaging, and endorectal ultrasound) that shows no evidence of lymph node involvement.
Traditionally, transanal excision has been performed with the conventional technique using traditional equipment. Although this conventional technique can give surgeons operative access to most distal rectal lesions, it can be difficult to perform on mid-rectal tumors or in large patients with a deep buttock cleft. The technical difficulties experienced under such circumstances can lead to poor visualization, inadequate margins, or specimen fragmentation. In response to the technical limitations of conventional transanal excision, in the 1980s Professor Gerhard Buess from Tübingen, Germany, began to develop the technique of transanal endoscopic microsurgery (TEM).
In collaboration with the Richard Wolf Company in Germany, Dr Buess developed the specialized instruments necessary to perform endoscopic surgery transanally. TEM was introduced into clinical practice in 1983, and was gradually implemented in several European countries and eventually introduced in North America and Asia. The last decade has witnessed international growth in the application of TEM yielding a significant amount of scientific data to support its clinical merits and advantages and also shedding some light on its limitations.
It is the type of cancer most associated with alcohol consumption and smoking, but it can also occur in people with acid reflux from the stomach into the esophagus (hiatal hernia and/or gastroesophageal reflux disease). Like all cancers, it is diagnosed late, since it causes neither pain nor discomfort in its earliest phases; for this reason, we ask patients to undergo regular screening examinations (upper gastrointestinal endoscopy). Treatment is a combination of radiotherapy, chemotherapy, and surgery, with variations according to the tumor's location along the esophagus (which measures between 26 and 30 centimeters) and the stage of the disease. The esophagus is a long, thin musculomembranous tube that connects the throat to the stomach. It carries swallowed food or liquid into the digestive system through muscular contractions. The most frequent esophageal cancer is squamous cell carcinoma, responsible for 96% of cases. Another type of esophageal cancer, adenocarcinoma, has been increasing significantly, mainly in individuals with Barrett's esophagus, in which abnormal columnar-type cells grow into the esophagus.
Esophageal cancer has a high incidence in countries such as China, Japan, Singapore, and Puerto Rico. In Brazil, it ranks among the ten most common cancers, according to data from existing population-based registries, and in 2000 it was the sixth most lethal, with 5,307 deaths. According to the Estimate of Cancer Incidence in Brazil for 2006, about 10,580 new cases of this cancer (7,970 in men and 2,610 in women) were expected to occur that year.
Risk Factors/Prevention
Esophageal cancer is associated with heavy consumption of alcoholic beverages and tobacco products (smoking). Other conditions that may predispose to a higher incidence of this tumor are tylosis (thickening of the palms of the hands and soles of the feet), achalasia, Barrett's esophagus, caustic injuries to the esophagus, Plummer-Vinson syndrome (iron deficiency), infectious agents (human papillomavirus, HPV), and a personal history of head and neck or lung cancer. To prevent esophageal cancer, it is important to adopt a diet rich in fruits and vegetables and to avoid frequent consumption of hot drinks, smoked foods, alcoholic beverages, and tobacco products. Early detection of esophageal cancer is very difficult, because the disease has no specific symptoms. Individuals who suffer from achalasia, tylosis, gastroesophageal reflux, Plummer-Vinson syndrome, or Barrett's esophagus are more likely to develop the tumor and should therefore see a doctor regularly for examinations.
In its initial phase, esophageal cancer causes no symptoms. Some symptoms are characteristic, however, such as difficulty or pain on swallowing, retrosternal pain, chest pain, a sensation of obstruction to the passage of food, nausea, vomiting, and loss of appetite. In most cases, difficulty swallowing (dysphagia) already indicates advanced disease. Dysphagia generally progresses from solid foods to soft foods and liquids. Weight loss can reach up to 10% of body weight.
The diagnosis is made by digestive endoscopy, cytological studies, and special staining methods (toluidine blue and Lugol's iodine), which make early diagnosis possible and can raise the chance of cure to 98%. In the presence of dysphagia for solid foods, a contrast radiographic study is necessary, as well as endoscopy with biopsy or cytology for confirmation. The extent of the disease is very important for the prognosis, since this tumor is biologically aggressive: because the esophagus has no serosa, there is local infiltration of adjacent structures and lymphatic dissemination, causing hematogenous metastases with great frequency.
Patients may be treated with surgery, radiotherapy, chemotherapy, or a combination of these three modalities. For early tumors, endoscopic resection may be indicated, although this type of treatment is quite rare. In most cases, surgery is the treatment used. Depending on the extent of the disease, treatment may become purely palliative, through chemotherapy or radiotherapy. In palliative care, endoscopic dilatation, placement of self-expanding stents, and brachytherapy are also available.
Boring Lava Field facts for kids
Quick facts for kids: Boring Lava Field
- Pictured: Mount Sylvania, one of the major volcanoes in the Boring Lava Field in Portland, Oregon
- Location: Oregon and Washington, U.S.
- Highest point: 4,061 feet (1,238 m)
- Last eruption: ≈57,000 years ago
The Boring Lava Field (also known as the Boring Volcanic Field) is a Plio-Pleistocene volcanic field with cinder cones, small shield volcanoes, and lava flows in the northern Willamette Valley of the U.S. state of Oregon and adjacent southwest Washington state. The field got its name from the town of Boring, Oregon, located 12 miles (19 km) southeast of downtown Portland. Boring lies just southeast of the most dense cluster of lava vents. The zone became active about 2.7 million years ago, with long periods of activity interspersed with quiescence. Its last eruptions took place about 57,000 years ago at the Beacon Rock cinder cone volcano; the individual volcanic vents of the field are considered extinct, but the field itself is not.
The volcanic field covers an area of about 1,500 square miles (3,900 km2), and it has a total volume of 2.4 cubic miles (10 km3). This region sustains diverse flora and fauna within its habitat areas, which are subject to Portland's moderate climate with wide temperature variations and mild precipitation. The highest elevation of the field is at Larch Mountain, which reaches a height of 4,055 feet (1,236 m).
The Portland metropolitan area, including suburbs, is one of the few places in the continental United States to have extinct volcanoes within a city's limits, and the Boring Lava Field plays an important role in local affairs, including the development of the Robertson Tunnel, recreation, and nature parks. Because of the field's proximity to densely populated areas, eruptive activity would be a threat to human life and property, but the probability for future eruptions in the Portland–Vancouver metropolitan area is very low. Boring Lava may also influence future earthquakes in the area, as intrusive rock from its historic eruptions may affect ground movement.
The Boring Lava deposits received their name based on their proximity to the town of Boring, which lies 12 miles (19 km) southeast of downtown Portland. The term "Boring Lava" is often used to refer to the local deposits erupted by vents in the field, which are located in the western portion of the U.S. state of Oregon. The deposits were given this name by R. Treasher in 1942. In 2002, as geochemical and geochronological information on the Boring deposits accumulated, they were designated part of the larger Boring Lava Field. This grouping is somewhat informal and is based on similarities in age and lithology.
The Boring Lava deposits lie west of the town of Boring. The Global Volcanism Program lists its highest elevation as 4,055 feet (1,236 m), at Larch Mountain, with most vents reaching an elevation of 660 to 980 feet (200 to 300 m). Located in the Portland Basin, the field consists of monogenetic volcanic cones that appear as hills throughout the area, reaching heights of 650 feet (200 m) above their surroundings. The collection includes more than 80 small volcanic edifices and lava flows in the Portland–Vancouver metropolitan area, with the possibility of more volcanic deposits buried under sedimentary rock layers. The borders of the Boring Lava Field group are clear, except on the eastern side, where the distinction between Boring deposits and those from the main Cascade arc is less clear; many geologists have arbitrarily placed the eastern border at a longitude of 122 degrees west. In total, the Boring Lava Field covers an area of about 1,500 square miles (4,000 km2), and it has a total volume of 2.4 cubic miles (10 km3).
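As a back-of-envelope check on the figures above, spreading the field's total erupted volume evenly over its footprint yields an average thickness of under ten feet, consistent with the lava being concentrated in discrete cones and flows rather than blanketing the whole region. This is illustrative arithmetic only, not a calculation from the source:

```python
# Average thickness if the field's 2.4 cubic miles of material
# were spread evenly over its 1,500-square-mile footprint.
AREA_SQ_MI = 1_500      # field area quoted in the text
VOLUME_CU_MI = 2.4      # total volume quoted in the text
FT_PER_MI = 5_280       # feet per mile

avg_thickness_ft = VOLUME_CU_MI / AREA_SQ_MI * FT_PER_MI
print(f"Average thickness if spread evenly: {avg_thickness_ft:.1f} ft")
```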
With a variable topography, the Portland area ranges from river valley floors to terraces reaching elevations of 400 feet (120 m). The Willamette Valley is marked by hills reaching heights more than 1,000 feet (300 m), and it is also physically separated from the lower Columbia River valley. The Columbia River flows west from the eastern Portland region, merging with the Willamette near Portland before moving north. Tributaries for the Willamette include the Pudding, Molalla, Tualatin, Abernethy, and Clackamas Rivers, while the Washougal and Sandy Rivers mark notable tributaries for the Columbia River. The Columbia River has significantly shaped the geology of the area.
Multnomah Creek drains from Larch Mountain, one of the volcanic cones in Boring Lava Field. Local streams near the community of Boring receive seepage from the local aquifer. This unit, part of the greater Troutdale sandstone aquifer, is also made of sandstone and conglomerate and bears water well. It also supplies water to domestic wells in the Mount Norway area. Boring Lava is known to have formed intrusions into local sedimentary rock, and thus it may guide flow of groundwater locally.
Portland's climate is moderate, with long growing seasons, moderate rainfall, mild winters, and warm, dry summer seasons. The area typically does not experience frost, with more than 200 frost-free days annually. Temperature can vary widely, reaching a historic maximum of 107 °F (42 °C), though the usual July maximum is below 80 °F (27 °C), and the average minimum for January is above 32 °F (0 °C). Yearly, precipitation averages between 35 and 45 inches (89 to 114 cm) in most river valleys, with a mean of 42.04 inches (106.8 cm) from 1871 through 1952. It shows variability, however, with a historic low of 26.11 inches (66.3 cm) at Portland in 1929 and a maximum of 67.24 inches (170.8 cm) in 1882. More than 75 percent of this precipitation occurs between October and March; July and August mark the driest months with means below 1 inch (2.5 cm), while November, December, and January represent the wettest with averages greater than 6 inches (15 cm). Prevailing winds originate from the south during winter and from the northwest during the summer season, with the exception of prevailing winds at the mouth of the Columbia River Gorge, where winds predominantly move to the east. The southern winds have the highest velocities of the three, only rarely occurring with potentially destructive force.
The Portland area has a moderate climate, and precipitation is not typically very heavy, allowing for vegetation, which can hamper fieldwork in the area. Many forests that covered the area were partly cleared for agriculture, timber, or cemetery applications in the early 20th century. These cleared and burned land plots sustain rich stands of secondary forest, featuring gorse, huckleberry, nettles, poison oak, salal, and blackberry. Myriad species of fern, as well as rapid-growth deciduous trees like alder and vine maple are also frequent. Forests support stands of Douglas fir, western hemlock, western redcedar, Pacific dogwood, bigleaf maple, Oregon ash, red alder, cascara buckthorn, Pacific madrone, and Oregon white oak; within swamps and moist areas in creeks, the shrub Devil's club can be observed. Other trees that sometimes dominate forest areas include black cottonwood and red alder. Forest communities have many additional shrubs including Indian plum, western hazel, and snowberry. Ground layer plants include the herbaceous sword fern and stinging nettle.
In contemporary times, clearing of forests for housing development has left about half of the Boring Lava region still forested. As a result, water quality has decreased due to higher sedimentation and turbidity, and flooding has worsened over time. Streams within the area are of either first or second order, with moderate to low flows and average gradients between 10 and 12 percent. Cool and clear, many sustain macroinvertebrates, and a smaller number support amphibians and fish. The riparian zones in the Lava Field area host diverse species, and they are influenced by uplands that serve as migration connections for birds, mammals, reptiles, and some amphibians.
The United States Fish and Wildlife Service provided a list of potentially threatened or endangered species in the Boring Lava area, calling them "sensitive" species. Among plant species, they determined the following species to be sensitive: white top aster, golden Indian paintbrush, tall bugbane, pale larkspur, peacock larkspur, Willamette daisy, water howellia, Bradshaw's lomatium, Kincaid's lupine, Howell's montia, Nelson's checkermallow, and Oregon sullivantia. For animal and marine life, northwestern pond turtles, Willow flycatchers, long-eared myotises, fringed myotises, long-legged myotises, Yuma myotises, Pacific western big-eared bats, and northern red-legged frogs have been identified as species of concern; pileated woodpeckers, bald eagles, cutthroat trout, and coho salmon are also considered sensitive.
Settler Colonial history
The nearby Portland area has historically been a center for trade since it was founded in 1845. With time, commerce has diversified. Iron mining and smelting were common between 1867 and 1894, with paper mills becoming established as an industry in 1885. Cement plants, aluminum-reduction plants, and shipyards can be found in the region. Industrial chemical production represents an important industry in Portland. Most of these industries rely on resources outsourced from other areas, except for the paper industry; business is driven by low power costs and the local industrial mineral market. Other important manufacturing industries in the nearby region include food processing and logging.
In 1893 the Kelly Butte Natural Area was formed by a petition from the Portland City Council. The park, a tract of public land 6 miles (9.7 km) to the southeast of downtown Portland named after a pioneer family, covers an area of 22.63 acres (0.0916 km2), including part of the Boring Lava Field. Historically, it sustained a quarry, prompting the creation of the Kelly Butte Jail, which used prisoner labor (under guard supervision) to gather crushed rocks for building roads in Portland until the 1950s. In general, rocks from the Boring Lava Field have been used for masonry projects including retaining walls, garden walls, and rock gardens, especially oxidized and scoriaceous rocks. Despite the prevalence of quarrying activity in historical times, there is no ongoing mineral or aggregate resource mining near the Boring Lava Field.
In 1952, after a local vote, the Kelly Butte Civil Defense Center was built between 1955 and 1956, costing about $670,000. The center was constructed to host local government agents should a nuclear attack on Portland occur; it had an area of 18,820 square feet (1,748 m2), intended to host 250 people in case an emergency government became necessary. It was known throughout the United States as a model facility for local governments, and in 1957, the docudrama A Day Called X included footage of the Defense Center. The center was left obsolete after a 1963 Portland City Council vote to abolish it passed; in 1968, just one permanent employee remained. Eventually the building was converted into an emergency services dispatch center from 1974 through 1994, when it was abandoned due to rising costs for renovation and space limitations. That same year the building was vacated, and then it was sealed off in 2006. A sixty-bed isolation hospital operated at Kelly Butte from September 1920 until 1960, supporting patients with communicable disease. A 10 million gallon water tank stood in the area from 1968 through 2010, when it was replaced with a 25 million gallon underground reservoir that cost $100 million, despite opposition from local environmental groups like the Friends of the Reservoirs. Historically, the park has also housed a police firing range, and Kelly Butte remains a recreational space today, administered jointly by Portland Parks and Recreation and the Portland Water Bureau.
In 1981, the Portland city government built a reservoir at the north end of Powell Butte (part of the Boring Lava Field), which still serves the city. In 1987, Portland government created Powell Butte Nature Park, covering 600 acres (2.4 km2) of meadows and forest within the city. Planning started in 1995 for a second water reservoir in the area, which was built between 2011 and 2014. The new reservoir is underground, buried under topsoil and native plants, and it has a volume of 50,000,000 US gallons (190,000,000 l). With the new reservoir came improvements to the Powell Butte park, including resurfaced and realigned trails, reduced environmental impacts, better accessibility measures, and reduction of steep grades. The government also built a visitor center, caretaker's house, public restrooms, maintenance yard, and a permeable parking area that permitted filtration of rainwater through asphalt to an underground stone bed, where it could be absorbed by the soil and then into the nearest aquifer.
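The reservoir volume above is quoted in both US gallons and liters; the two figures can be cross-checked with a one-line conversion. A minimal sketch, where the conversion constant is the standard definition of the US gallon rather than a figure from the text:

```python
# Cross-check of the Powell Butte reservoir volume: 50,000,000 US
# gallons should round to roughly the 190,000,000 L figure quoted.
LITERS_PER_US_GALLON = 3.785411784  # standard definition of the US gallon

gallons = 50_000_000
liters = gallons * LITERS_PER_US_GALLON
print(f"{gallons:,} US gal = {liters:,.0f} L")
```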
Built between 1993 and 1998, the Robertson Tunnel runs for 3 miles (4.8 km) through the Tualatin Mountains. Its underground station, located 260 feet (80 m) below the surface, is the deepest train station in North America. The tunnel displays a core that exhibits Boring Lava deposits. For the first 3,900 feet (1,200 m) of the tunnel, the core shows Boring lava flows with cinder, breccia, and loess dated from 1.47 million to 120,000 years ago, which have been deformed by the Sylvan fault. With the Oatfield fault, the Sylvan fault trends to the northwest, extending 9.3 miles (15 km) northwest and 16 miles (25 km) southeast of the tunnel. It is of Quaternary age and lacks surface expression, possibly as a result of its extensive burial by loess along its length.
In 2000, the nonprofit Friends of Mt. Tabor Park was formed to help maintain the Mount Tabor Park area, located 3.5 miles (5.6 km) east of downtown Portland. They have an organizational website and publish a bi-annual newsletter called the Tabor Times. Membership requires dues, and they also rely on donations and a gift shop for financial support.
In September 2017, the Hogan Butte Nature Park opened in the city of Gresham, encompassing an area of 46 acres (0.19 km2) that includes the extinct Boring Lava Field volcano Hogan Butte. This park opened after more than 25 years of planning, supported by a 1990 bond from the city and two regional Metro bonds. Collaborators for opening the park include the U.S. Forest Service, local citizens, Metro, The Trust for Public Land, and the Buttes Conservancy organization. Gresham marks one of just a few places in the United States with volcanoes contained in its city limits. Mount Sylvania and Mount Scott lie within the limits of Portland, in the southwestern and southeastern parts of the city, respectively.
There are 90 volcanic centers within a 20-mile (32 km) radius of Troutdale and more than 32 vents within a 13-mile (21 km) radius of Kelly Butte. Mostly small cinder cone vents, these volcanoes also include some larger lava domes from shield volcanoes at Mount Sylvania, Highland Butte, and Larch Mountain. The Boring Lava Field marks the densest volcanic vicinity in this group, encompassing an area of 36 square miles (93 km2). It includes more than 80 known small vents and associated lava flows, with more volcanic deposits likely present under sedimentary rock deposits from the Missoula Floods (also known as the Bretz or Ice Age Floods), which took place between 21,000 and 15,000 years ago and probably destroyed small cinder cones (including those made from tuff) and maar craters, burying them under up to 98 feet (30 m) of silt from slack water. The Global Volcanism Program reports that the field includes somewhere between 32 and 50 shield volcanoes and cinder cones, with many vents concentrated northwest of the town of Boring.
Considered an outlier of the Cascade Range, the Boring Lava Field lies about 62 miles (100 km) to the west of the major Cascade crest. It marks one of five volcanic fields along the Quaternary Cascade arc, along with Indian Heaven, Tumalo in Oregon, the Mount Bachelor chain, and Caribou in California. Like the Cascade Range, the Boring field was also generated by the subduction of the oceanic Juan de Fuca tectonic plate under the North American tectonic plate, but it has a different tectonic position, with its eruptive activity more likely related to tectonic rifting throughout the region. The Boring Lava Field has erupted material derived from hot mantle magma, and the subducting Juan de Fuca plate may be as shallow as 50 miles (80 km) in depth at their location.
The High Cascades, a segment of the Cascade volcanic arc that includes the Boring Lava Field, is characterized by basaltic lava flows with andesite, tuff breccia, and volcanic ash. The High Cascades may lie over a graben (a depressed block of the Earth's crust bordered by parallel faults), and activity at the Boring field and throughout the Portland area may be associated with deformation of the block. Portland lies within the Portland Basin, part of the forearc (the region between an oceanic trench and the associated volcanic arc) between the Cascades major arc and the Pacific Coast Ranges, which consist of Eocene to Miocene marine sedimentary rock deposits and Eocene intrusions and extrusions of basalt that were emplaced on the Siletz terrane. The eastern boundary for the Portland Basin is the Cascades, while the Tualatin Mountains lie to the west, along an anticline formation that has been changing since the Miocene. The Boring Lava Field sits on the floor of the Portland Basin, residing in the forearc setting between tectonic extension to the south and compression to the north. The uneven distribution of vents within this forearc suggests a local zone of crustal expansion, indicative of northward movement and clockwise rotation of a tectonic microplate that leads to gradual northwest-trending propagation for the field over time. The migration rate for volcanism within the field is an average of 0.37 inches (9.3 mm) ± 0.063 inches (1.6 mm) per year relative to the motion of crustal blocks in the region, using the last 2.7 million years as a starting reference point. 
The Boring Lava Field represents the youngest episode of volcanism within the Cascade forearc. There is no evidence that the field was associated with a slab window (a gap that forms in a subducted oceanic plate when a mid-ocean ridge meets a subduction zone and continued divergence at the ridge and convergence at the subduction zone cause the ridge itself to be subducted), but its magmas likely interacted with the regional mantle wedge.
The Boring Lava Field shows a similar composition to the High Cascades that run through Oregon and southern Washington state, with Pliocene to Pleistocene basalt lava flows and breccias. It was active during the late Tertiary into the early Quaternary. Within the field, lava shows a diverse composition overall, varying from low-K, tholeiitic to high-K, calc-alkaline eruptive products. Some of the low-K tholeiite deposits likely originated from vents closer to the High Cascades, and they are overlain by Boring Lava materials. J. M. Shempert proposed that mantle sources for the two different lava types may be different and that the calc-alkaline sources are more refractory.
Like the surrounding High Cascades, Boring Lava Field erupted lava made of olivine basalt and basaltic andesite; these sub-alkaline basalts and basaltic andesite predominate among Boring Lava deposits. The olivine basalt deposits have fine to medium textures, and the basaltic andesite lava flow deposits have relatively little pyroclastic rock in them, suggesting that explosive eruptions were uncommon within the field. Dark gray to light gray in color, Boring Lava produces columnar and platy joints, which can be seen in Oregon east of Portland and in Clark County in Washington state. It is usually phyric, though one sample from Rocky Butte consists of labradorite with olivine phenocrysts that have been transformed to iddingsite. The Boring Lava reaches thicknesses of more than 400 feet (120 m). Boring Lava has a more mafic (rich in magnesium and iron) composition than the nearby volcano Mount Hood, but they have similar ages. There is a small amount of andesite in the lavas from the field, mostly erupted from monogenetic vents or Larch Mountain. Sometimes, Boring Lava overlaps with volcaniclastic conglomerate from other Cascade eruptions in Multnomah County and the northern part of Clackamas County. The Boring Lava also contains tuff, cinder, and scoria; it is characterized by plagioclase laths that show a pilotaxitic texture with spaces between them that show a diktytaxitic texture. The Boring Lava exposures show aeromagnetic anomalies with short wavelengths and high amplitudes suggestive of their relatively young geological ages.
At points where the Boring Lava sits over Troutdale Formation deposits, landslides are frequent, producing steep head scarps with heights of 66 feet (20 m). These scarps tend to have grabens at their bases and Boring Lava blocks at their tops, and they show variable slide surfaces from hummocky to flat. A number of these exposures show dips up to 35 degrees, as well as minor faults. The landslides range in thickness from 20 to 79 feet (6 to 24 m). Portland's wet climate leads to weathering, which at the Boring Lava Field has reached depths of up to 25 feet (7.6 m), altering the upper 5 to 15 feet (1.5 to 4.6 m) of soil to a red, clay-like material. At the cinder cone in Mount Tabor Park, an outcrop of quartzite-pebble xenoliths (rock fragments enveloped in a larger rock during the latter's development and solidification) can be observed among local cinder specimens, dating from Miocene to Pliocene Troutdale deposits. While the volcanic rock of Boring Lava was being emplaced over rock from the Troutdale formation, there was deformation that uplifted and dropped fault blocks to the southeast of Portland. Along the Washougal River, a large landslide occurred as a result of failure due to the Boring Lava pushing down on rock from the Troutdale formation. Intrusions of Boring Lava formed outcrops at Highland Butte, La Butte, and potentially in the subsurface regions near Aurora and Curtis, and these intrusions have been associated with normal faulting at Parrett and Petes Mountain, Aurora, Curtis, and Swan Island (along the Molalla River). Faults together with igneous intrusions are usually accompanied by stretching and doming as a result of magma influxes or collapses from the evacuation of the magma flows. Similarly, faults north of Oregon City might have resulted from subsidence after magma chambers emptied or lava was extruded as a result of Boring Lava eruptions. Some of the Boring Lava vents are known to cut off hydrogeologic units in the surrounding area.
Eruptive vents on the western edge of the field formed along a fault line that trended to the northeast, located north of present-day Carver. Boring Lava was erupted by vents in the volcanic field, and it has been exposed at elevated topographic levels in intact volcanic cones and dissected lava plains. There is likely more lava deposited under Quaternary sedimentary mantle throughout the region, though activity was confined to a relatively concentrated area.
D. E. Trimble (1963) argued that the Boring Lava Field was produced by eruptive activity at 30 volcanic centers. These include shield and cinder cone volcanoes. J. E. Allen reported 95 vents in 1975, dividing them into four clusters in 1975: 17 vents north of the Columbia River, 14 vents west of the Willamette River, 19 vents east of the Willamette River and north of Powell Valley Road, and 45 vents east of the Willamette River and south of Powell Valley Road (Highway 26). Of these, 42 were unnamed, and several volcanoes contained multiple vents. Generally, all lava flows in the field can be traced to specific vents in the field, but documented source vents have been confirmed through chemical analysis or petrographic comparisons, with a few exceptions.
In the eastern part of the Boring cluster, volcanic vents have average diameters less than 1.6 miles (2.6 km), with average heights less than 1,090 feet (330 m) from base to summit. The lava flows from Highland Butte and Larch Mountain, both shield volcanoes, encompass a wide area, with Boring Lava deposits averaging thicknesses of 100 to 200 feet (30 to 61 m) not considering areas next to volcanic cones in the field. Most of the summit craters have been destroyed, though there are partial craters at Bobs Hill (located 20.5 metres (0.0205 km) northeast of Portland) and Battleground Lake (located 20.5 miles (33.0 km) north of Portland); Mount Scott also has an intact summit crater. However, many of the Boring cones retain the shape of a volcanic cone, with loess extending above an elevation of 400 feet (120 m). The Rocky Butte plug, which reaches a height of 330 feet (100 m) above its surroundings, was dated to 125,000 ± 40,000 years old by R. Evarts and B. Fleck from the United States Geological Survey (USGS). Mount Tabor is also prominent in the area, dated by the USGS to 203,000 ± 5,000 years old, as are Kelly Butte, Powell Butte, and Mount Scott. Scott has been dated to 1.6 million years ago.
A series of lava tubes were documented near the Catlin Gabel School along the western slope of the Portland Hills. These formations, created by lava flow cooling at the surface while its hot interior keeps draining, were first identified by R. J. Deacon in 1968 and then L. R. Squier in 1970, and studied in detail by J. E. Allen and his team in 1974. The Catlin Gabel tubes lie among cinder cones and lava flows from the Pliocene to Pleistocene, and they are the oldest known lava tubes in Oregon, the only described older than the Holocene. The tubes were produced by a small vent at the southern end of the northern segment of the field, extending 2.5 miles (4.0 km) from its base to the south and then the west. They originated from the uppermost lava flow from a series of eruptions that ran into a valley on the western slope of the Portland Hills. The Catlin Gabel tubes have a width of 2,500 feet (760 m), with slopes averaging 150 feet (46 m) per mile for an average grade of 3 percent. On average, these tubes have a thickness of 235 feet (72 m) near their center, with an upper lava unit thickness of 90 feet (27 m) that has since been modified by erosion and the deposition of up to 30 feet (9.1 m) of Portland Hills silt. The Catlin Gabel tubes also sit atop 434 feet (132 m) of silt from the Troutdale Formation. Running along the tube's arc are five depressions, which were created through the collapsing roofs of the lava tubes within a subsegment that is 6,000 feet (1,800 m) in length. The characteristics of the tube system are not well documented, since only the collapsed segments are accessible; some of the channels have been reduced to rubble, and study has revealed that they trended northwest, had widths up to 40 feet (12 m) and depths no more than 60 feet (18 m), and required special engineering procedures to permit the construction of a 15-story building above them.
The following vents are in Oregon:
|Chamberlain Hill||890 feet (271 m)|
|Cook's Butte||718 feet (219 m)|
|Highland Butte||1,594 feet (486 m)|
|Kelly Butte||400 feet (122 m)|
|Larch Mountain||4,061 feet (1,238 m)|
|Powell Butte||614 feet (187 m)|
|Rocky Butte||612 feet (187 m)|
|Ross Mountain||1,380 feet (421 m)|
|Swede Hill||995 feet (303 m)|
|Mount Scott||1,093 feet (333 m)||Named for Harvey W. Scott|
|Mount Sylvania||978 feet (298 m)|
|Mount Tabor||630 feet (192 m)|
|Mount Talbert||715 feet (218 m)|
|TV Hill||1,275 feet (389 m)|
|Walker Peak||2,450 feet (747 m)|
The following vents are in Washington:
|Battle Ground Lake||509 feet (155 m)|
|Bob's Mountain||2,110 feet (643 m)|
|Bob's Mountain (N)||1,775 feet (541 m)|
|Bob's Mountain (S)||1,690 feet (515 m)|
|Brunner Hill||680 feet (207 m)||2 vents|
|Green Mountain||804 feet (245 m)|
|Mount Norway||1,111 feet (339 m)|
|Mount Pleasant||1,010 feet (308 m)|
|Mount Zion||1,465 feet (447 m)|
|Nichol's Hill||1,113 feet (339 m)|
|Pohl's Hill||1,395 feet (425 m)|
|Prune Hill (E)||610 feet (186 m)|
|Prune Hill (W)||555 feet (169 m)|
|Tum-Tum Mountain||1,400 feet (427 m)|
Eruptions at Boring Lava Field occur in a concentrated manner, often in clusters of three to six vents, as at Bobs Mountain and Portland Hills. These types of vents typically produced similar types of magma in relatively short periods of time, and they also frequently show alignment. Vents in the field have generally produced basalt and basaltic andesite, with some andesitic eruptions, including those that produced the large Larch Mountain shield volcano.
Prior to the 1990s, there was little potassium-argon dating data available for the lava field, and despite the field's proximity to an urban area, little was known about its composition until recent years. Weathering, fine grain size, and glassy content mean that there are limitations to argon–argon dating for the field as well. Recent research suggests that eruptive activity at the Boring Lava Field began between 2.6 and 2.4 million years ago, yielding far-reaching basalt lava flows, the Highland Butte shield volcano, a number of monogenetic vents, and one andesitic lava flow. These took place near the southern Portland Basin, and were followed by about 750,000 years of quiescence. About 1.6 million years ago, eruptive activity resumed to the north of the previously active area, with alkalic basalt lava flows generating the Mount Scott shield volcano. As eruptions shifted to the east over time, the Larch Mountain volcano was produced by eruptions in the foothills of the Cascade Range. Activity spread out over the area, extending to its current expansive state about 1 million years ago. In addition to spreading out geographically, the lava composition in the field's vents became more diverse. This period continued until about 500,000 years ago, with no activity until about 350,000 years ago, after which activity continued through roughly 60,000 to 50,000 years ago according to several sources, or about 120,000 years ago according to I. P. Madin (2009). R. Evarts and Fleck originally reported that lava flows at the Barnes Road deposit of the field represented the youngest eruptive products in the Boring area, with a radiometric dating age of 105,000 ± 6,000 years. These eruptions followed a relatively even age distribution over time; geographically, younger vents and associated deposits lie in the northern portion of the field, while older deposits are confined to the south.
The products of the Boring Lava Field were erupted discontinuously over an erosion surface. Activity took place during the late Tertiary and early Quaternary, in what is now the Portland area as well as the surrounding area, with a particularly concentrated pocket of activity to the east. Nearly all of these eruptions were confined to single vents or small vent complexes, with the exception of a lava plain southeast of present-day Oregon City. Boring Lava generally consists of flowing lava; only one eruptive deposit contains tuff, ash, and tuff breccia, and one vent to the northeast of the Carver area displayed evidence of explosive eruptions that later became effusive.
Recent activity and current threats
According to the USGS, sometime less than 100,000 years ago, magma at Battle Ground Lake in Washington state interacted with water to form the eponymous maar volcano, destroying a lava flow dated to 100,000 years ago. The last volcanic center to form in the field was Beacon Rock, a cinder cone produced by eruptions about 57,000 years ago, which was eroded by the Missoula Floods to leave only its central volcanic plug. While the known volcanic vents in the Boring Lava Field are extinct, the field itself is not considered extinct. Nonetheless, according to the USGS, the probability for future eruptions in the Portland–Vancouver metropolitan area is "very low". It is rare that more than 50,000 years pass without an eruption in the region; given the past eruptive history of the field, an eruption is predicted to occur once every 15,000 years on average.
About half of the Boring Lava Field eruptions took place in what are today densely populated areas of the Portland–Vancouver metropolitan area. Though the formation of a small cinder cone vent might not extend far beyond its surroundings, depending on location, similar eruptions could lead to deposition of volcanic ash that could lead to serious infrastructural consequences, covering large areas. A larger eruption, like the ones that built Larch Mountain or Mount Sylvania, could extend for years to decades. It is unclear where exactly a future eruption might take place, but it would probably occur in the northern portion of the field.
Many seismic faults in the northeastern section of the northern Willamette Valley formed as a result of intrusions of Boring Lava, as supported by their orientation, lengths, displacements, age, and proximity to Boring Lava intrusions. Though intrusions from any future eruptions at the Boring field are "probably minimal", Boring Lava might play a role in determining the intensity of ground shaking during future earthquakes in the area.
Trails in the city of Gresham travel over parts of the Boring Lava Field and its cones. Mount Tabor and Powell Butte are better known for their recreational uses than other cones; Powell Butte Nature Park offers 9 miles (14 km) of trails. The Mt. Tabor Park is open to bicyclists and pedestrians from 5 a.m. through midnight and to motorized vehicles from 5 a.m. through 10 p.m. each day, except for Wednesdays when the park roads are not open to automobiles. The Hogan Butte Nature Park offers views of Mount Adams, Mount Hood, Mount Rainier, and Mount St. Helens, as well as running trails and sites for picnicking. Gresham's mayor at the time, Shane Bemis, predicted that the park would "quickly become Gresham's crown jewel."
In addition to the nature park on Hogan Butte, a number of smaller cinder cones are also publicly accessible. The Gresham Saddle Trail traverses Gresham Butte and Gabbert Butte, running for 3.3 to 3.7 miles (5.3 to 6.0 km). The trail is considered of moderate difficulty, and it offers no amenities. It includes the Gabbert Loop Trail, which extends for 1 mile (1.6 km) through forests of maples, alders, ferns, and firs.
Boring Lava Field Facts for Kids. Kiddle Encyclopedia. | <urn:uuid:c1535e38-8d80-4084-a67b-76a7cd77d669> | CC-MAIN-2021-21 | https://kids.kiddle.co/Boring_Lava_Field | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988796.88/warc/CC-MAIN-20210507150814-20210507180814-00096.warc.gz | en | 0.948123 | 7,821 | 3.609375 | 4 |
(CNN) — For almost a century, King Tutankhamun has been the poster boy for Ancient Egypt. His death mask was sublimely, breathtakingly crafted over 3,300 years ago from 24 pounds of beaten gold, with eyeliner of lapis lazuli and eyes of quartz and obsidian.
It's probably the most recognizable artifact we have from antiquity.
Once entombed in Egypt's Valley of the Kings, the mask has toured the world, entrancing audiences with its aura of opulence and millennia-old regal mystery.
"You know, if you ask a child from the age of eight and you tell him Egypt, and he will tell you King Tut," Zahi Hawass, an Egyptian archaeologist and former antiquities minister, tells CNN. "I made a Skype last week to a school in the States. All the children ask about one thing, Tutankhamun."
While ancient Egyptians crafted great monuments to their dead, wonders of granite and limestone including the Pyramids at Giza, modern Egyptians have been building a new home for Tutankhamun and his ancestors just over a mile away.
As big as an airport: The new Grand Egyptian Museum
KHALED DESOUKI/AFP via Getty Images
Something monumental of their own in glass and concrete.
Construction has taken eight years so far, the opening delayed multiple times, but the Grand Egyptian Museum isn't called "Grand" for nothing.
At almost half a million square meters, it's the size of a major airport terminal, with a price tag to match. Most of the huge cost has been met with loans from Japan.
The Egyptians badly want tourists back in the large numbers not seen since before the country was gripped by political upheaval in the wake of the 2011 Arab Spring.
This museum amounts to a calculated gamble -- a $1 billion bet on Tutankhamun. His treasures will be the star attraction when all finally arrive here next year.
This should be their last resting place.
Child of incest?
A golden statue of Tutankhamun is seen carrying a harpoon.
MAHMOUD KHALED/AFP via Getty Images
A selection of Tutankhamun's treasures has been on the road -- off and on -- since the 1960s. Among them, the Boy King as "Guardian Statue," a proud figure, face and body painted black to symbolize the fertile silt of the Nile.
Another gold-covered statue shows Tutankhamun carrying a harpoon and wearing one of his many crowns. On the back of a golden throne, the young pharaoh appears in a tender marital portrait, king and queen bathed in the rays of the sun. And on a golden fan, he's seen hunting in his chariot.
Tutankhamun's portrait was always idealized, as it was on his famous death mask.
The reality may well have been rather different.
Tutankhamun was dead at 18 or 19.
BEN CURTIS/AFP via Getty Images
Images of his unimposing mummy, first revealed in 1925, show that Tutankhamun was probably a child of incest, standing about 1.65 meters (5 ft 5 in). Scientists think that he may well have had a clubfoot and buck teeth.
He was dead at 18 or 19 and Egyptologists still speculate about what killed him.
"When you go really deeply into the collections of the king, into the history of the king, you discover that he was a really important king," says Tayeb Abbas, head of archaeology at the Grand Egyptian Museum.
"What is also important about the king is that his life and death are still a mystery. And that's why people all over the world are still fascinated by King Tutankhamun."
Work began on the museum back in 2003.
Courtesy Stephanie Vermillion
The creation of the new museum has taken almost as long as Tutankhamun's lifespan. A winning design was chosen in 2003, with a facade or wall of semi-translucent stone, one kilometer long, that can be backlit at night.
The original architects were a small Dublin-based practice, Heneghan Peng, led by an American-Chinese architect, Shih-Fu Peng.
The Egyptian Revolution in 2011 delayed things and construction only began in earnest the following year.
CNN first caught up with the project six years later, in 2018, as the museum's pyramid-shaped entrance took skeletal shape.
"It's a new landmark that is being added to the complete view of the city of greater Cairo... for the first time the pyramids and the fantastic treasures of Tutankhamun will be eye to eye," Tarek Tawfik, the former director general of the Grand Egyptian Museum Project, said at the time.
Workers spray the new museum's interior with disinfectant to combat Covid-19.
KHALED DESOUKI/AFP via Getty Images
A visit to the museum in May 2020 revealed everything looking pretty much landscaped and ready.
But behind the scenes, it's still a construction site -- and Covid-19 hasn't helped.
Going in, everyone had to have their temperature checked, including CNN's team.
Like extras from a remake of "Ghostbusters," gangs of workers wearing tanks of disinfectant on their backs were out spraying.
"We are working hard, despite Covid-19," says Major General Atef Moftah, the army engineer who is the museum's general supervisor. "We are taking precautions, sterilizing everything and everyone."
Pharaohs and gods
The museum was built around the gigantic statue of Ramses II.
On the ground, it's easy to get a sense of how gargantuan this project really is.
The statue of Ramses the Great -- the largest of all the museum artifacts -- arrived in 2018 so they could build the atrium around the 13th century BCE pharaoh.
More than 20 meters high, crafted from 83 tons of red granite, he's simply magnificent -- even with his nose royally chipped and his stubby toes lightly coated in dust.
The new museum is a far more dignified place to hang out than the polluted spot outside Cairo's main railway station where Ramses used to stand.
His companions -- on a grand staircase behind him -- are mostly still under wraps.
There'll be 87 statues of pharaohs and Egyptian gods on the steps. As visitors ascend, they'll get a sweeping history of Ancient Egypt. Some 5,000 years of it.
Or at least they will when the museum finally opens.
"The project is scheduled to be finished by the end of this year," says Moftah. "Then at the beginning of next year, we will work on the antiquities side of the project for four to six months. Hopefully by then, Covid-19 will be over and have left the world in peace."
What was clear from CNN's brief visit, as work to tidy up the vast spaces goes on, is that tourists may need to set aside two days to get around it.
Museum with a view
Grand Museum, grand view
KHALED DESOUKI/AFP via Getty Images
Wandering through the museum, visitors may spot a familiar recurring motif.
Pyramids are everywhere. Upright and sometimes inverted, huge triangular designs built into the museum's monumental structures and mosaic surfaces. And then there are the real pyramids -- visible through gigantic windows. This is a museum with a view.
That Great Pyramid less than a mile away is thought to have taken roughly 20 years to build, after Pharaoh Khnum Khufu set out to create a burial place for himself back in the 26th century BCE using more than two million limestone blocks -- each weighing over two and a half tons.
From architectural competition to planned opening in 2021, the Grand Egyptian Museum has also taken almost 20 years.
The museum is manifestly a matter of huge prestige for Egypt. In tandem with the building, an extraordinary program is underway to conserve every single one of Tutankhamun's treasures.
The intention is to exhibit all of them together -- for the very first time.
'Panoply of death'
Conservators need a good eye and steady hand.
Khaled DESOUKI / AFP
The Conservation Center for the Grand Egyptian Museum is the largest of its kind in the Middle East. It's a seemingly endless corridor leading to no fewer than 10 different laboratories, all devoted to the art of conservation.
The labs themselves are enveloped in an almost monastic silence. The experts working within them need intense concentration, a good eye and a steady hand.
There are more than 5,000 artifacts to conserve from Tutankhamun alone. "His magnificent panoply of death," as Howard Carter, the man credited with finding his tomb, once said.
Many items are being freshly conserved so that they can be shown for the very first time when the new museum opens.
Funding for some of this work comes from Tutankhamun's golden legacy -- income from exhibiting his treasures overseas.
"When I sent that Tutankhamun exhibit in 2005 to the States, Australia, Japan and London, I brought to Egypt $120 million to build the conservation labs. I never thought to see young Egyptians -- geniuses with golden hands -- returning every piece back," says Hawass, the former minister of antiquities. "That was the first thing that captured my heart."
Brought back to life
The cow goddess, Mehet-Weret, is represented in a pair of bovine figures.
Khaled DESOUKI / AFP
Here in the labs can be found the lion goddess, Menhit, with nose and tears of blue glass, eyes of painted crystal. There's the deity Ammut, part hippo, part crocodile, part lion -- with teeth and red tongue of ivory.
The cow goddess, Mehet-Weret, is here too, represented in a pair of bovine figures, solar discs wedged between their horns.
There are also ritual couches that were apparently intended to speed Tutankhamun on his journey to the afterlife. After conservation, they remain in astonishingly good condition.
It's a privilege to witness all this material before it goes under glass in the new museum.
And not everything was golden.
Tutankhamun was buried with some 90 pairs of his sandals. Some of rush and papyrus, others of leather and calf-skin.
Tutankhamun was buried with about 90 pairs of sandals.
MOHAMED EL-SHAHED/AFP via Getty Images
Before conservation, one pair had partially rotted away, but even these were still somehow salvageable.
"We create a new technique by using some special adhesive," says Mohamed Yousri, one of the conservators. "Its condition was very bad, and I think it comes alive again."
One pair -- almost brand new, it seemed -- is decorated with captured warriors, one Nubian, the other Asiatic. In these sandals, Tutankhamun could symbolically crush his enemies under foot every day.
"What we are doing here is re-discovering the collections of the king," says Tayeb Abbas, the museum's head of archaeology. "So we are doing the job which is really as important as it was done by Carter."
British archaeologist Howard Carter, right, at Tutankhamen's tomb
Hulton Archive/Getty Images
Howard Carter, an Englishman, was 48 years old when he made the discovery of his life in the Valley of the Kings, a pharaonic burial complex on the western banks of the Nile near the city of Luxor.
He would spend a decade -- from 1922 to 1932 -- recording the treasures and methodically clearing the tomb.
Without his doggedness, Tutankhamun might never have been found. And without Tutankhamun, we probably wouldn't have a Grand Egyptian Museum.
"This king was unique," says Hawass. "I think Howard Carter was so lucky to discover his tomb. And this is my opinion. This is the most important discovery, still, in archaeology."
Carter's archive is kept at the Griffith Institute, an Egyptology center at the UK's Oxford University.
A meticulous, demanding man, Egyptology will forever owe him an immense debt. Carter's clearance of the tomb was, for the time, exemplary.
Carter's meticulous methods helped preserve many of the finds from Tutankhamun's tomb.
Hulton Archive/Getty Images
Artifacts like the black and gold Guardian Statue were sprayed with protective paraffin wax. But now, almost a century later, the wax is being taken off.
In the labs, a ceremonial chariot was having every little bit of wax winkled and teased out during CNN's visit. Its old sheen was restored.
Tutankhamun's outer coffin, meanwhile, has been fumigated for insects. Just conserving this one artifact has taken some eight months.
This is the first time the coffin has ever left the tomb in the Valley of the Kings. And it won't be going back there, a fact that not everyone's happy about.
"To be honest, most of the people on the west bank were angry because of this," says Abbas. "But when we took it out of the tomb, the people saw the bad condition of how the coffin was, people started to encourage us to get it back to how it looked like before."
Egypt's former antiquities chief Zahi Hawass, wearing a hat, supervises the removal of King Tutankhamun from his tomb in 2007.
BEN CURTIS/AFP via Getty Images
The local residents did win one campaign. They're going to keep Tutankhamun's mummy, even though, like the outer coffin, the authorities had coveted it for the new museum.
"The people of Luxor think that their grandfather should stay there," says Hawass, the former minister of antiquities. "And I really do respect this. When we decided to move it a few months ago, all the people of Luxor disagreed with that. And this really actually made me happy. The mummy will stay there."
Fifteen years ago, Hawass did manage to extract the mummy for a CT scan -- but only for a day, barely enough time to incur the "mummy's curse" -- the supposed deadly consequence of moving King Tut's remains, a legend that is said to have claimed the life of Carter's financial backer Lord Carnarvon.
"When I went to scan the mummy and I took the mummy out of the coffin, I looked at his face," Hawass says. "That is the most beautiful moment in my life. The discovery -- November 4, 1922, 5,398 objects were found, excavated by Carter for 10 years. The curse. Lord Carnarvon died. All of that made the magic of King Tut!"
Everything, it seems, always comes back irresistibly to Tutankhamun. A century ago, only a few Egyptologists even knew his name. Now everyone does.
'Smell the history'
The original: Egyptian Museum of Antiquities
MARWAN NAAMANI/AFP via Getty Images
While the new Grand Museum will help preserve his status and many more of Egypt's ancient artifacts, it's reassuring to note that a piece of the country's more recent history will not be overlooked.
Head into central Cairo and the salmon pink sandstone edifice of one of its most distinct landmarks is unmissable.
The Egyptian Museum of Antiquities -- so beloved by Egyptologists -- opened in 1902.
Inside is hall after hall of statuary and an ever-expanding collection. It's here you can meet some of the Boy King's relatives.
There's Akhenaten, the so-called "heretic" pharaoh -- Tutankhamun's father. And an unfinished bust of his stepmother, the serene Nefertiti. His grandparents are here too -- Yuya and Tjuyu were once a power couple.
"You know, Cairo Museum, you cannot close it even if you have the Grand Egyptian Museum," says Hawass. "If you enter this museum, you smell the history. You smell the past and that's why we are keeping it as it is."
Of course, tourists always swiftly take the stairs to the first floor of the Cairo Museum. That's where -- for the time being -- you can still find the death mask and the rest of Tutankhamun's treasures.
Tutankhamun's golden sarcophagus
KHALED DESOUKI/AFP/Getty Images
The display here has always seemed a bit dull. Beautiful but old-fashioned glass cases and drab lighting.
But then gold is still pretty stunning in any light.
There's the solid 22-carat gold likeness of Tutankhamun that formed his innermost coffin. It measures just over six feet long, and weighs a hefty 108 kilograms (240 pounds) -- about the same weight as Anthony Joshua, the heavyweight boxing champion.
More endearing perhaps, is a painted wooden sculpture of Tutankhamun, probably executed when he was in his early teens. This may have been a mannequin for his clothes.
Back in the labs, they have, in fact, been carrying out the first ever scientific study of Tutankhamun's textiles, including a scarf or shawl several meters long that has somehow endured over the 3,300 years since his death.
One of the golden chariots previously on display in the Cairo museum
The research is part of a joint Egyptian-Japanese project.
"Among the many objects from the Tutankhamun's tomb, textiles are the most deteriorated materials," says ancient textiles expert Mia Ishii, an associate professor at Saga University in Japan. "Therefore it was a request from the Egyptian government to work especially on the textiles from the beginning."
More than 100 textile examples were recovered from the tomb and were evidently used in life. Among them is a tunic of some kind. Clearly visible is the design of a lotus flower -- for ancient Egyptians, the symbol of eternal life.
Japanese experts have also been advising on the big move -- from the Cairo museum to the new Conservation Center's labs 10 miles away.
Back in 2018, CNN watched a ritual couch being bandaged up like a patient with sunburn. Traditional Japanese washi or tissue paper was applied to fragile areas of gold leaf.
A hunting chariot needed a bespoke crate and a lot of manhandling -- almost enough men for a football team.
Tutankhamun's chariots, they've discovered, were all made from a hardwood -- elm -- a tree not native to Egypt. The wood probably came from somewhere in the Eastern Mediterranean more than 500 miles away.
Some artifacts are still revealing their secrets. A set of rods covered with gold leaf were a bit of a puzzle, but it's now believed they were part of a sunshade for a ceremonial chariot -- the oldest sunshade ever found.
One discovery, a dagger found with Tutankhamun's mummy, was made with iron from a meteorite. Anything that fell from the heavens seems to have had special meaning for ancient Egyptians.
Another treasure is a fabulous pendant, covered with scarab beetles -- the ancient Egyptian symbol for immortality.
Flesh of the gods
Tutankhamun's death mask
Hannes Magerstaedt/Getty Images Europe/Getty Images
Back in the 1920s, the pendant was photographed around the neck of a boy who was employed to supply Carter's archaeological dig workers with water. This, so the story goes, is the child who found the tomb.
Clearing a space for his large water jar, he chanced on the first step -- the first of a flight of sixteen down to the tomb.
His story is expected to feature in the Grand Egyptian Museum.
Back at the new museum, the Tutankhamun exhibition space isn't yet open for viewing. What is known is that it'll be massive, some 7,000 square meters.
The Egyptians expect two to three million visitors in the first year of opening, and up to seven or eight million in the longer term. That would make the new museum one of the three most visited in the world.
We know that Tutankhamun died young and suddenly. And in death, the Golden Boy became a god -- ancient Egyptians regarded gold as the flesh of the gods.
And of course, it's that gold that dazzles us still -- and will surely tempt some of us to visit Tutankhamun in his new home. | <urn:uuid:90ece1ce-ee09-486d-9048-18b370db353e> | CC-MAIN-2021-21 | https://edition.cnn.com/travel/article/tutankhamun-grand-egyptian-museum/index.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00531.warc.gz | en | 0.964679 | 4,369 | 2.671875 | 3 |
By Andy May
This is the seventh and last post in my series on the hazards of climate change. In this post we examine the effects of climate change on glaciers and sea level rise. The first six examined the effect of humans on the environment, the effect of the growing human population, climate change and the food supply, the cost of global warming, the effect of man and climate change on extinctions, climate (or weather) related deaths, and extreme weather and climate change.
Source: Mike Lester
The IPCC AR5 report has the following to say about the risks of sea-level rise:
“Risks increase disproportionately as temperature increases between 1°–2°C additional warming and become high above 3°C, due to the potential for a large and irreversible sea level rise from ice sheet loss. For sustained warming greater than some threshold [Current estimates indicate that this threshold is greater than about 1°C (low confidence) but less than about 4°C (medium confidence) sustained global mean warming above preindustrial levels.], near-complete loss of the Greenland ice sheet would occur over a millennium or more, contributing up to 7 m of global mean sea level rise.” AR5, WG2, page 61.
OK, if temperatures increase enough, we could go to a rate of sea level rise of as much as 7 mm/year (possibly less). We have no idea how much of a temperature increase it would take, but with low to medium confidence it is between 1°C and 4°C.
“Due to sea level rise projected throughout the 21st century and beyond, coastal systems and low-lying areas will increasingly experience adverse impacts such as submergence, coastal flooding, and coastal erosion (very high confidence). The population and assets projected to be exposed to coastal risks as well as human pressures on coastal ecosystems will increase significantly in the coming decades due to population growth, economic development, and urbanization (high confidence). The relative costs of coastal adaptation vary strongly among and within regions and countries for the 21st century. Some low-lying developing countries and small island states are expected to face very high impacts that, in some cases, could have associated damage and adaptation costs of several percentage points of GDP.” AR5, WG2, page 68.
A very reasonable statement, as more people build and live on the coast, they are more vulnerable to sea level rise. Costs to protect these developments vary a lot, depending upon where they are.
Kip Hansen reports that The New York Times rather breathlessly tells us in 2017:
“A rapid disintegration of Antarctica might, in the worst case, cause the sea to rise so fast that tens of millions of coastal refugees would have to flee inland, potentially straining societies to the breaking point. Climate scientists used to regard that scenario as fit only for Hollywood disaster scripts. But these days, they cannot rule it out with any great confidence. The risk is clear: Antarctica’s collapse has the potential to inundate coastal cities across the globe. … If that ice sheet were to disintegrate, it could raise the level of the sea by more than 160 feet — a potential apocalypse, depending on exactly how fast it happened.” — The NY Times, Looming Floods, Threatened Cities, a three part series by Justin Gillis
So, sustained warming over some unknown threshold, perhaps between 1° to 4°C, will cause the Greenland ice sheet to melt in over 1,000 years. Antarctica has ten times as much ice as Greenland and it will rapidly disintegrate? This paragraph, from the once great New York Times, is laughably speculative and dishonest.
This is particularly true because NASA has recently shown that Antarctica is getting colder and gaining in ice. This is based on studies by Jay Zwally and colleagues in 2015 and 2011. Further, Antarctic sea ice extent set records in 2012 and 2014, as discussed by NASA here. Finally, the record cold temperature in Antarctica of -135.8°F (-93.2°C) was set in 2010 and nearly the same temperature was reached in 2013.
Notice I name my sources and link to peer-reviewed articles, as opposed to the New York Times article which sites anonymous “climate scientists” and “recent computer forecasts.” They do allude to Columbia University’s Dr. Nicholas Frearson in the previous paragraph, but do not attribute the idea to him. They note the computer forecasts are described as “crude” and “rough” by Robert M. DeConte, University of Massachusetts at Amherst.
The one paper they do cite is DeConte and Pollard, 2016, who use a computer model and the RCP8.5 emissions scenario, to attempt to show it is possible for Antarctica to contribute a meter of sea level rise by 2100 (12 mm per year) and 13 meters by 2500. Dr. Roger Pielke Jr. and Ritchie and Dowlatabadi, 2017 have called the RCP8.5 scenario implausible. All-in-all a very shoddy piece of journalism, but the journalist (Justin Gillis) got a free trip to Antarctica out of it.
The current rate of sea level rise
The IPCC reports in AR5 WG1 (page 1139):
“Proxy and instrumental sea level data indicate a transition in the late 19th century to the early 20th century from relatively low mean rates of rise over the previous two millennia to higher rates of rise (high confidence). It is likely that the rate of global mean sea level rise has continued to increase since the early 20th century, with estimates that range from 0.000[–0.002 to 0.002] mm yr–2 to 0.013 [0.007 to 0.019] mm yr-2. It is very likely that the global mean rate was 1.7 [1.5 to 1.9] mm yr-1 between 1901 and 2010 for a total sea level rise of 0.19 [0.17 to 0.21] m. Between 1993 and 2010, the rate was very likely higher at 3.2 [2.8 to 3.6] mm yr-1; similarly, high rates likely occurred between 1920 and 1950.
Three credible estimates of sea level change from the IPCC AR5 WG1 (page 1147) are shown in figure 1.
All three estimates of sea level rise from 1880 show a steep rise from about 1930 to 1960, followed by a slowing or decline in sea level rise from 1960 to 1967 and then a steep rise to 1983, another pause for a few years and a rise from 1985 to 2013, followed by another pause until today. Figure 2 shows the components, with the AMO (Atlantic Multidecadal Oscillation) index overlain in green. The AMO is a normalized index of North Atlantic sea-surface temperatures. When it is positive, the North Atlantic is warm and when it is negative the North Atlantic is cool. The longer term Bray cycle is in a warming phase as we come out of the Little Ice Age, this provides a background trend of ocean warming and thus, sea level rise due to thermal expansion of the ocean water. The sea level rise component due to thermal expansion is currently about 1 mm/year (0.8 to 1.4) according to the IPCC WG1 AR5 report, page 1151. The slopes of the equations shown in figure 2 are the rate of increase in sea level for that segment in mm/year. The slope is the coefficient of “x.”
In figure 2 the AMO index is shown as is, but perhaps should be lagged a few years. The periods of more rapid sea level rise occur when the index is increasing (more thermal energy being taken up by the ocean) or very high. Periods of less rapid sea level increase are associated with a low or decreasing AMO index (thermal energy being expelled by the ocean).
In the CSIRO record, there is an overall increase in the rate of sea level rise, from about 2 mm/year (1930-1960) to 3 mm/year (1985-2013). This is a small change and well within the margin of error, the standard deviation of the estimated error (global mean sea level uncertainty) in the Church and White sea level data from 1930 to 2013 is 1.9 mm. The acceleration, if real, could be an increasing rate of recovery from the Bray cycle low in the Little Ice Age or due to human greenhouse emissions, or some other ocean process, I don’t know of any data that can tell us which it is. The possible acceleration is modest, and we only have decent data for two AMO lows and recoveries, it is very hard to draw any conclusions with only two values. In another 20 years we will have completed a second AMO high and will know more. The AMO is important, but it is only one of many long-term ocean climate cycles, to read more I recommend Marcia Wyatt’s web site here.
Table 1 shows the components of recent sea level rise. The values are from the IPCC WG1 AR5 report, NSIDC and NASA.
We only have a short record of ocean temperature and it is only to a depth of 2,000 meters. The average depth of the oceans is 3,688 meters. However, making a few reasonable assumptions, we can produce figure 3 from the JAMSTEC ocean temperature grid, it shows the oceans warming at a rate of 0.003°C per year.
Figure 3, Data source JAMSTEC MOAA GPV grid.
The graph shows the ocean temperature change from the surface to 3,688 meters. Zero to 2,000 meters are measured with Argo floats and gridded by JAMSTEC. The ocean temperature at 3,688 meters is assumed to be zero and an interpolation from 2,000 meters (where the temperature is very close to 2.4°C all the time) is made. Due to the nearly constant temperature at 2,000 meters, the interpolation should not affect the trend. The AMO index is overlain on the plot in orange. It explains some of the variability in the whole ocean temperature as we would expect.
Church and White (2011) write:
“For 1993–2009 and after correcting for glacial isostatic adjustment, the estimated rate of rise is 3.2 ± 0.4 mm year-1 from the satellite data and 2.8 ± 0.8 mm year-1 from the in-situ data. The global average sea-level rise from 1880 to 2009 is about 210 mm. The linear trend from 1900 to 2009 is 1.7 ± 0.2 mm year-1 and since 1961 is 1.9 ± 0.4 mm year-1.”
For the most part, all the better-known estimates of recent sea level rise fall in the range of 3 mm/year +-1 mm/year or so. However, this is a very small number and the error is large. Nils-Axel Mörner has pointed out that that there is a considerable amount of evidence that the rate of sea level rise is much smaller than reported by the IPCC. He finds evidence from tidal gauges, vegetation and satellite data, that sea level has barely risen at all in the last 25 years. In his publication Sea Level is not Rising, he lays out some convincing evidence. Since the satellite altimeter data do not agree with the worldwide tidal gauges, our measurements of millimeters of change in sea-level rate are buried in uncertainty.
Kip Hansen, in a series of very well documented and well written posts, has explained the complexities involved in measuring sea level and its rise and fall. In his recent post (part 3 of 3) he summaries his conclusions as follows (I have paraphrased them):
- Sea level rise is a threat to coastal cities and very low elevation populated areas (part 1).
- Sea level is not a threat to anything else (part 1).
- Because land also rises and falls depending upon where you are, local tidal gauges are the most important source of information for communities on the coast (part 2).
- Local changes in sea level, due to tectonics and tides are much larger than changes in global sea level change and much faster occurring. Eustatic sea level change is not irrelevant, but it is small and very slow moving (part 2).
- The tools we use to measure changes in global sea level (satellites and “corrected” tidal gauge records) are only accurate to several centimeters, in practice, and we are trying to use them to measure the change in a dynamic ocean surface. The surface change over an entire year is less than 3 mm, about a tenth of the accuracy of the instruments (part 3). As Hansen points out, we cannot even be sure sea level is rising at all.
Land-based sea level measurements are accurate to +-20 mm and affected by land subsidence or uplift as well. The very best and most modern satellites have a measurement accuracy of +-3 mm, under perfect conditions. They can be affected by weather patterns and problems with orbital decay. Further sea-level change around the world is not uniform, the global distribution of changes are affected by the ocean cycles mentioned earlier. Beyond these comments, I will encourage the interested reader to read Kip Hansen’s posts on how sea-level change is measured and the accuracy of the measurements.
Due to the variation in sea level rise from place to place, which is mostly due to variations in land movement and tidal ranges over time, local communities should evaluate their own risks, based on local measurements. They need to prepare their community’s infrastructure for the specific threats they face. Global sea level changes at a small rate that is swamped by error, the focus should not be on it, but the local threat to your community.
Glaciers are retreating
Glaciers have been advancing for most of the past 6,000 years according to Mayewski, et al. 2004 as the world has cooled from the Holocene Thermal Optimum. Figure 4, from Mayewski’s paper, shows some of the evidence. The global cooling from 6,000 years ago is apparent from the worldwide glacial advances plotted in figure 4c, present day is to the left in this plot. In Switzerland (figure 4d), except for a brief glacial retreat during the Medieval Warm Period 1,000 years ago, glaciers were generally smaller than today before 2,200 years ago.
Figure 4, from Mayewski, et al. 2004
The dates along the top of figure 4 are B2K (years before 2000). The green vertical bars in figure 4 are periods of rapid climate change (RCC). We are currently coming out of the latest RCC, the sixth major rapid climate change in the Holocene. Figure 4a is a proxy for Icelandic low-pressure events, these events correlate well with northern hemisphere ice sheet growth (Mayewski, et al., 1997). Figure 4b is a proxy for the Siberian high-pressure event, which also correlates with ice-sheet growth in the northern hemisphere (Mayewski, et al., 1997). Figure 4f shows the winter insolation values for the northern hemisphere (black) and the southern hemisphere (blue). Figure 4g shows the summer insolation for both hemispheres, the summer insolation has decreased in the Holocene in the Northern Hemisphere and increased in the Southern Hemisphere.
Javier created a figure (figure 5) with some of the same data.
Figure 5, source Javier, here.
Figure 5 shows the Marcott, et al., 2013 global temperature reconstruction modified to reflect the known temperature difference between the Little Ice Age and the Holocene Thermal Optimum of 1.2°C. See the appendix (here) for more on Javier’s adjustment to the curve. For an alternative global temperature reconstruction, with some problematic proxies removed that shows a 1.2°C difference between the Little Ice Age and the Holocene Thermal Optimum, without adjustment, see here. The key point is that the Little Ice Age was the coldest period in the Holocene and this is largely due to changes in the Earth’s orbital tilt, or obliquity, which is plotted in figure 5 with a purple line.
Lomborg reports in Cool It:
“… most glaciers in the Northern Hemisphere were small or absent from nine thousand to six thousand years ago. While glaciers since the last ice age have waxed and waned, they overall seem to have been growing bigger and bigger each time until reaching their absolute maximum at the end of the Little Ice Age. It is estimated that glaciers around 1750 were more widespread on Earth than at any time since the ice ages twelve thousand years ago. So, it is not surprising that as we’re leaving the Little Ice Age we are seeing glaciers dwindling. We are comparing them with their absolute maximum over the past ten millennia.”
“… with glacial melting, rivers actually increase their water content, especially in the summer, providing more water to many of the poorest people in the world. Glaciers in the Himalayas have been declining significantly since the end of the Little Ice Age and have caused increasing water availability throughout the last centuries, possibly contributing to higher agricultural productivity. But with continuous melting, the glaciers will run dry toward the end of the century. Thus, global warming of glaciers means that a large part of the world can use more water for more than fifty years before they have to invest in extra water storage. These fifty-plus years can give the societies breathing space to tackle many of their more immediate concerns and grow their economies so that they will be better able to afford to build water-storage facilities.” Lomborg, Bjorn. Cool It (Kindle Locations 884-930).
Global warming will cause excessive sea level rise
In a previous post we discussed the warming of the oceans. We only have significant ocean temperature data since 2004, it is plotted in figure 3. It shows the temperature in the oceans is rising at about 0.003°C per year currently.
One-third to one-half of the 18 cm rise in sea level that we have seen over the past century (1914-2014, from the CSIRO record plotted in figure 2) is due to the oceans warming as we come out of the Little Ice Age. Thus, at most, only 12 cm (about 5 inches) is due to melting glaciers and ice sheets.
According to Lomborg in Cool It:
“…when water gets warmer, like everything else it expands. Second, runoff from land-based glaciers adds to the ocean water volume. Over the past forty years, glaciers have contributed about 60 percent and water expansion 40 percent of the rise in sea levels. In its 2007 report, the UN estimates that sea level will rise about a foot over the rest of the century. While this is not a trivial amount, it is also important to realize that it is certainly not outside historical experience. Since 1860, we have experienced a sea-level rise of about a foot, yet this has clearly not caused major disruptions.
The IPCC cites the total cost for U.S. national protection and property abandonment for more than a three-foot sea-level rise (more than triple what is expected in 2100) at about $5 billion to $6 billion over the century. Considering that the adequate protection costs for Miami would be just a tiny fraction of this cost spread over the century, that the property value for Miami Beach in 2006 was close to $23 billion, and that the Art Deco National Historic District is the second-largest tourist magnet in Florida after Disney World, contributing more than $11 billion annually to the economy, five inches will simply not leave Miami Beach hotels waterlogged and abandoned.” Lomborg, Bjorn. Cool It (Kindle Locations 956-977).
Sea level rose six inches during the 20th century (see figure 6) according to the Church and White dataset. If the IPCC projection for the 21st century is correct, and it rises another 12 to 16 inches, this should not be a problem. We adapted to six inches of sea level rise with more primitive 20th century technology, another ten inches will not be a problem for 21st century technology. Sea walls, barriers like the Thames Barrier and dikes and levees will be built. Or people may choose to move to higher ground. The key factor is sea level is rising very slowly and there is plenty of time to adapt.
The IPCC expects the average person in the standard future to make $72,700 in the 2080s. If the world decides to mitigate additional CO2, rather than adapt to additional CO2, the average person’s earnings would decrease to $50,600 according to Lomborg, a reduction of 30%. It is possible the environmental world would see more people flooded than the richer, less environmental world, because people would be poorer and less able to adapt. In the last century we have lost very little land to higher sea level, simply because the land was valuable enough to protect it with technology.
Figure 6, Church and White Sea Level, 20th century
Figure 7 shows the famous Thames Barrier that protects London from high tides and North Sea storm surges.
Figure 7, the Thames Barrier
The largest U.S. death toll, due to a hurricane, was in 1900. The Great Galveston Hurricane was a category 4 storm that made landfall on September 8, 1900 and killed 6,000 to 12,000 people. It is by far the deadliest hurricane in U.S. history. During much of the 19th century, Galveston was the largest city in Texas. By the time of the great storm, however, it was the fourth largest city after Houston, Dallas, and San Antonio. The 15-foot storm surge swept over the entire island and destroyed the city. The survivors mostly moved to Houston and elsewhere in Texas. The island, at the time, only had an eight-foot elevation, and 3,600 homes were washed away.
After the storm the remaining population raised the island’s elevation to 17 feet behind a concrete seawall, which was completed in 1911. The seawall has protected Galveston from most subsequent hurricanes, even the monster Hurricane Carla in 1961, which is strongest hurricane to ever hit the United States according to the Hurricane Severity Index.
Figure 8 shows the sea wall not long after it was completed.
Figure 8, The Galveston sea wall in the early 20th century, an undated public domain image.
Figure 9 is a 1905 photo of the wall under construction:
Figure 9, Galveston sea wall under construction in 1905.
Figure 10 shows what the sea wall looked like after Hurricane Ike in 2008, the first hurricane to top the wall.
Figure 10, the Galveston sea wall in 2008, shortly after Hurricane Ike (photo credit: Aurelia May)
The Army Corps of engineers estimate that $100 million of damage was averted in 1983 from Hurricane Alicia by the sea wall. Hurricane Ike (2008) overtopped the sea wall and caused a great deal of damage. Since then additional storm surge protections have been proposed, like the so-called “Ike Dike.” But, these are still in the planning stages.
It is true that glaciers are melting today as we warm up after the Little Ice Age. But, the melting glaciers provide much needed water in dry areas. Glaciers reached their maximum Holocene extent during the Little Ice Age and the glaciers today are still more extensive than they were 6,000 years ago. The melting glaciers, outside of Greenland and Antarctica, contribute 30%-50% of the expected one foot rise in sea level over the next 100 years, with thermal expansion of the ocean water contributing most of the remainder. Greenland and Antarctic are not major contributors to sea level rise, especially if Antarctica is gaining ice as claimed by Zwally, et al. and NASA.
Worldwide sea level rise over the next 100 years is not expected to be a problem. Local sea level rise is a problem for low lying communities and they should monitor it locally and build local infrastructure to ameliorate its effects. Land can be protected from sea level rise and severe maritime storms at an affordable cost, or people can move to higher ground.
When one considers that Galveston, Texas was able to protect itself and rebuild after the devastating 1900 Great Galveston Hurricane in only 11 years, imagine what we can do today, 117 years later. Imagine what we will be able to do 100 years from today. Most analyses suggest that adapting to climate change is better than trying to prevent it. The measures that have been proposed to mitigate climate change, mainly Kyoto and Paris, do very little and are very expensive. Further, we are not even sure that global warming is a problem, why fix something that may not even matter?
From an economic perspective, the “time value of money” principle tells us it is foolish to invest a serious amount of money today to fix something that may or may not be a problem over 100 years from now. The best investments will be those that benefit us now and that means adaptation. We may need that money to adapt, whether climate change is natural or man-made.
The important take-aways from this series, in my opinion, are:
- The data comes first, before models and predictions, especially predictions from unvalidated models.
- If you don’t see the problem in the data, it’s not a problem.
- Global warming will not destroy the planet or humans, even in the worst projections (part 1).
- The oceans, the Sun and the Earth’s orbit are the major controls on the climate. Human’s may have some effect, but it must be small (part 1 and here).
- The time value of money is critical. Spending a lot of money today to fix a possible problem in 100 years is foolish. From the standpoint of technology development, 100 years might as well be forever (post 3).
- Human prosperity leads to a better environment, a healthier population, more adaptability, and lower population growth (part 1).
- Poverty leads to a poorer environment, poorer health, and higher population growth (part 1).
- Cheap, widely available, and reliable energy leads to prosperity (part 3, here and here).
- Cold is worse than hot. Cold weather leads to more deaths and disease, warm weather leads to fewer deaths and less disease (part 5).
- Humans are adaptable, today we live in hot areas, cold areas, dry and wet areas, high in the mountains and in rainforests, we have already adapted, somewhere, to anything foreseen by the climate alarmists (part 5).
- Our food supply is growing rapidly, with no sign of slowing down, prices are stable. Population growth, on the other hand is slowing down (part 2).
- The rate of extinctions today is very low, we are not in a “great extinction” nor are we even close (part 4).
- The extreme weather trend is flat or declining (part 6).
- The Gulf Stream is not shutting down (part 4).
- Our measurements of the rate of sea level rise are so inaccurate we cannot be sure that sea level is rising at all, although it probably is at a very slow rate (Kip Hansen here).
- Sea level rise is not alarming, except locally, and should be dealt with as a local problem (part 7, this post). | <urn:uuid:ce5ba0ee-496f-4027-b225-c9152669a502> | CC-MAIN-2021-21 | https://andymaypetrophysicist.com/2017/12/28/glaciers-and-sea-level-rise/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00255.warc.gz | en | 0.944057 | 5,748 | 3.328125 | 3 |
There have traditionally been four basic arguments used to prove God’s existence. They are called the cosmological, teleological, axiological, and ontological arguments. But since these are technical terms, let’s just call them the arguments from Creation (cosmos means world or universe), design (telos means end or purpose), moral law (axios means value), and being (ontos means being).
Argument from Creation
The basic idea of this argument is that, since there is a universe, it must have been caused by something beyond itself. It is based on the law of causality, which says that every limited thing is caused by something other than itself. There are two different forms of this argument, so we will show them to you separately. The first form says that the universe needed a cause at its beginning; the second form argues that it needs a cause right now to continue existing.
History of the Argument from Creation
Paul said that all men know about God “for God has made it evident to them. For since the Creation of the world His invisible attributes, His eternal power and divine nature, have been clearly seen, being understood through what has been made” (Rom. 1:19–20). Plato is the first thinker known to have developed an argument based on causation. Aristotle followed. Muslim philosophers Al-Farabi and Avicenna also used this type of reasoning, as did the Jewish thinker Moses Maimonides. In Christian thought, Augustine, Aquinas, Anselm, Descartes, Leibniz, and others to the present day have found it valuable, making it the most widely noted argument for God’s existence.
The universe was caused at the beginning
This argument says that the universe is limited in that it had a beginning and that its beginning was caused by something beyond the universe. It can be stated this way:
1. The universe had a beginning.
2. Anything that has a beginning must have been caused by something else.
3. Therefore, the universe was caused by something else, and this cause was God.
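For readers who like to check validity formally, the logical shape of this syllogism can be sketched in Lean 4. The names `cosmos`, `Begins`, and `Causes` are illustrative placeholders, not part of the original text, and the sketch verifies only that the conclusion follows from the premises — not that the premises themselves are true:

```lean
-- Formal shape of the argument: if everything that begins to exist
-- has a cause, and the universe began to exist, then the universe
-- has a cause.
axiom Entity : Type
axiom cosmos : Entity                    -- the universe
axiom Begins : Entity → Prop             -- "x had a beginning"
axiom Causes : Entity → Entity → Prop    -- "c caused x"

axiom premise1 : Begins cosmos
axiom premise2 : ∀ x : Entity, Begins x → ∃ c : Entity, Causes c x

-- The conclusion follows by instantiating premise2 at the universe.
theorem conclusion : ∃ c : Entity, Causes c cosmos :=
  premise2 cosmos premise1
```

The philosophical work, of course, lies entirely in defending the two premises; the deduction itself is elementary.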
In order to avoid this conclusion, some people say that the universe is eternal; it never had a beginning—it just always existed. Carl Sagan said, “The cosmos is all that is, or ever was, or ever will be.” But we have two ways to answer this objection. First, the scientific evidence strongly supports the idea that the universe had a beginning. The view usually held by those who claim that the universe is eternal, called the steady state theory, requires that the universe constantly produce hydrogen atoms from nothing in order to maintain itself. It would be simpler to believe that God created the universe from nothing. Also, the consensus of scientists studying the origin of the universe is that it came into being in a sudden and cataclysmic way. This is called the Big Bang theory. The main evidence that the universe had a beginning is the second law of thermodynamics, which says the universe is running out of usable energy. But if it is running down, then it cannot be eternal. What is winding down must have been wound up. Other evidence for the Big Bang is that we can still detect the radiation from it and observe the expansion that it caused (see chap. 10 for details). Robert Jastrow, founder-director of NASA’s Goddard Institute for Space Studies, has said, “A sound explanation may exist for the explosive birth of our Universe; but if it does, science cannot find out what the explanation is. The scientist’s pursuit of the past ends in the moment of creation.”
But beyond the scientific evidence that shows the universe began, there is a philosophical reason to believe that the world had a starting point. This argument shows that time cannot go back into the past forever. You see, it is impossible to pass through an infinite series of moments. You might be able to imagine passing through an infinite number of dimensionless points on a line by moving your finger from one end to the other, but time is not dimensionless or imaginary. It is real, and each moment that passes uses up real time that we can’t go back to. It is more like moving your finger across an endless number of books in a library. You would never get to the last book. Even if you thought you had found the last book, there could always be one more added, then another and another. You can never finish an infinite series of real things. If the past is infinite (which is another way of saying, “If the universe had always existed without a beginning”), then we could never have passed through time to get to today. If the past is an infinite series of moments, and right now is where that series stops, then we would have passed through an infinite series, and that is impossible. If the world never had a beginning, then we could not have reached today. But we have reached today; so time must have begun at a particular point in the past, and today has come at a definite time since then. Therefore, the world is a finite event after all, and it needs a cause for its beginning.
Two Kinds of Infinite Series
There are two kinds of infinite series, one is abstract and the other is concrete. An abstract infinite series is a mathematical infinite. For example, as any mathematician knows, there are an infinite number of points on a line between point A and point B, no matter how short (or long) the line may be. Let’s say points A and B are marked by two bookends about three feet apart. Now, as we all know, while there are an infinite number of abstract mathematical points between the two bookends, nevertheless, we cannot get an infinite number of actual books between them, no matter how thin the pages are! Nor does it matter how many feet of distance we place between the bookends; we still cannot get an infinite number of books there. So while abstract, mathematical infinite series are possible, actual, concrete infinite series are not.
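The contrast can be put in symbols. The following sketch (added here for illustration; it is not part of the original text) shows how an abstract infinite series can have a perfectly definite finite sum, even though no finite number of steps ever completes it:

```latex
\sum_{k=1}^{\infty} \left(\frac{1}{2}\right)^{k}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,
\qquad\text{yet}\qquad
\sum_{k=1}^{n} \left(\frac{1}{2}\right)^{k} = 1 - \left(\frac{1}{2}\right)^{n} < 1
\quad\text{for every finite } n.
```

Every partial sum falls short of 1, no matter how large n becomes. The mathematical infinite is well defined as a limit, but it can never be traversed step by step — which is exactly the distinction between an abstract infinite and an actual, concrete one.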
Now that we have seen that the universe needs a cause of its beginning, let’s move on to the second form of the argument. This argument shows that the universe needs a cause of its existence right now.
The universe needs a cause for its continuing existence
Something is keeping us in existence right now so we don’t just disappear. Something has not only caused the world to come into being (Gen. 1:1), but is also continuing and conserving its existence in the present (Col. 1:17). The world needs both an originating cause and a conserving cause. In a sense, this is the most basic question that can be asked: “Why is there something rather than nothing?” The argument can be put this way:
1. Finite, changing things exist. For example, me. I would have to exist to deny that I exist; so either way, I must really exist.
2. Every finite, changing thing must be caused by something else. If it is limited and it changes, then it cannot be something that exists independently. If it existed independently, or necessarily, then it would have always existed without any kind of change.
3. There cannot be an infinite regress of these causes. In other words, you can’t go on explaining how this finite thing causes this finite thing, which causes this other finite thing, and on and on, because that really just puts off the explanation indefinitely. It doesn’t explain anything. Besides, if we are talking about why finite things are existing right now, then no matter how many finite causes you line up, eventually you will have one that would be both the cause of its own existence and an effect of that cause at the same moment. That is nonsense. So no infinite regress can explain why I am existing right now.
4. Therefore, there must be a first uncaused cause of every finite, changing thing that exists.
This argument shows why there must be a present, conserving cause of the world, but it doesn’t tell us very much about what kind of God exists. How do we know that this is really the God of the Bible?
Argument from design
This argument, like others that we will mention briefly, reason from some specific aspect of creation to a Creator who put it there. It argues from design to an intelligent Designer.
1. All designs imply a designer.
2. There is great design in the universe.
3. Therefore, there must be a Great Designer of the universe.
The first premise we know from experience. Anytime we see a complex design, we know by previous experience that it came from the mind of a designer. Watches imply watchmakers; buildings imply architects; paintings imply artists; and coded messages imply an intelligent sender. It is always our expectation because we see it happening over and over. This is another way of stating the principle of causality.
Also, the greater the design, the greater the designer. Beavers make log dams, but they have never constructed anything like Hoover Dam. Likewise, a thousand monkeys sitting at typewriters would never write Hamlet. But Shakespeare did it on the first try. The more complex the design, the greater the intelligence required to produce it.
History of the Argument from Design
“For Thou didst form my inward parts; Thou didst weave me in my mother’s womb. I will give thanks to Thee, for I am fearfully and wonderfully made. Wonderful are Thy works, and my soul knows it very well” (Ps. 139:13–14). Responding to the birth of the Enlightenment and the scientific method, William Paley (1743–1805) insisted that if someone found a watch in an empty field, he would rightly conclude that there had been a watchmaker because of the obvious design. The same must be said of the design found in nature. The skeptic David Hume even stated the argument in his Dialogues Concerning Natural Religion, as have several others. However, there have been at least as many objectors to it as there have been proponents of it. The classic exponent was William Paley, and the most noted opponent was David Hume.
We ought to mention here that there is a difference between simple patterns and complex design. Snowflakes or quartz crystals have simple patterns repeated over and over, but have completely natural causes. On the other hand, we don’t find sentences written in stone unless some intelligent being wrote them. That doesn’t happen naturally. The difference is that snowflakes and crystals have a simple repeated pattern. But language communicates complex information, not just the same thing over and over. Complex information occurs when the natural elements are given boundary conditions. So when a rockhound sees small round rocks in a stream, it doesn’t surprise him because natural erosion rounds them that way. But when he finds an arrowhead he realizes that some intelligent being has deliberately altered the natural form of the rock. He sees complexity here that cannot be explained by natural forces. Now the design that we are talking about in this argument is complex design, not simple patterns; the more complex that design is, the greater the intelligence required to produce it.
That’s where the next premise comes in. The design we see in the universe is complex. The universe is a very intricate system of forces that work together for the mutual benefit of the whole. Life is a very complex development. A single DNA molecule, the building block of all life, carries the same amount of information as one volume of an encyclopedia. No one seeing an encyclopedia lying in the forest would hesitate to think that it had an intelligent cause; so when we find a living creature composed of millions of DNA-based cells, we ought to assume that it likewise has an intelligent cause. Even clearer is the fact that some of these living creatures are intelligent themselves. Even Carl Sagan admits:
The information content of the human brain expressed in bits is probably comparable to the total number of connections among neurons—about a hundred trillion, 1014 bits. If written out in English, say, that information would fill some twenty million volumes, as many as in the world’s largest libraries. The equivalent of twenty million books is inside the heads of every one of us. The brain is a very big place in a very small space.… The neurochemistry of the brain is astonishingly busy, the circuitry of a machine more wonderful than any devised by humans.
Some have objected to this argument on the basis of chance. They claim that when the dice are rolled any combination could happen. However, this is not very convincing for several reasons. First, the design argument is not really an argument from chance but from design, which we know from repeated observation to have an intelligent cause. Second, science is based on repeated observation, not on chance. So this objection to the design argument is not scientific. Finally, even if it were a chance (probability) argument, the chances are a lot higher that there is a designer. One scientist figured the odds for a one-cell animal to emerge by pure chance at 1 in 1040000. The odds for an infinitely more complex human being to emerge by chance are too high to calculate! The only reasonable conclusion is that there is a great Designer behind the design in the world.
Argument from moral law
Similar arguments, based on the moral order of the universe rather than the physical order, can be offered. These argue that the cause of the universe must be moral, in addition to being powerful and intelligent.
1. All men are conscious of an objective moral law.
2. Moral laws imply a moral Lawgiver.
3. Therefore, there must be a supreme moral Lawgiver.
History of the Moral Argument
This argument did not gain prominence until the early nineteenth century after the writings of Immanuel Kant. Kant insisted that there was no way to have absolute knowledge about God and he rejected all of the traditional arguments for God’s existence. He did, however, approve of the moral approach, not as a proof for God’s existence, but as a way to show that God is a necessary postulate for moral living. In other words, we can’t know that God exists, but we must act like He exists to make sense of morality. Later thinkers have refined the argument to show that there is a rational basis for God’s existence to be found in morality. There have also been attempted disproofs of God’s existence on moral grounds based on ideas coming from Pierre Bayle and Albert Camus.
In a sense, this argument also follows the principle of causality. But moral laws are different from the natural laws that we have dealt with before. Moral laws don’t describe what is; they prescribe what ought to be. They are not simply a description of the way men behave, and are not known by observing what men do. If they were, our idea of morality would surely be different. Instead, they tell us what men ought to do, whether they are doing it or not. Thus, any moral “ought” comes from beyond the natural universe. You can’t explain it by anything that happens in the universe and it can’t be reduced to the things men do in the universe. It transcends the natural order and requires a transcendent cause.
Same, Different, or Similar?
How much like God are we? How much can an effect tell us about its cause? Some have said that the effect must be exactly the same as its cause. Qualities such as existence or goodness in the effect are the same as those qualities in its cause. If that is true, then we should all be pantheists, because we are all God, eternal and divine. In reaction, some have said that we are entirely different from God—there is no similarity between what He is and what we are. But that would mean that we have no positive knowledge about God. We could only say that God is “not this” and “not that,” but we could never say what He is. The middle road is to say that we are similar to God—the same, but in a different way. Existence, goodness, love, all mean the same thing for both us and for God. We have them in a limited way, and He is unlimited. So we can say what God is, but in some things, we must also say that He is not limited as we are—“eternal,” “unchanging,” “nonspatial,” etc.
Now some might say that this moral law is not really objective; it is nothing but a subjective judgment that comes from social conventions. However, this view fails to account for the fact that all men hold the same things to be wrong (like murder, rape, theft, and lying). Also, their criticism sounds very much like a subjective judgment, because they are saying that our value judgments are wrong. Now if there is no objective moral law, then there can be no right or wrong value judgments. If our views of morality are subjective, then so are theirs. But if they claim to be making an objective statement about moral law, then they are implying that there is a moral law in the very act of trying to deny it. They are caught both ways. Even their “nothing but” statement requires “more than” knowledge which shows that they secretly hold to some absolute standard which is beyond subjective judgments. Finally, we find that even those who say that there is no moral order expect to be treated with fairness, courtesy, and dignity. If one of them raised this objection and we replied with, “Oh, shut up. Who cares what you think?” we might find that he does believe there are some moral “oughts.” Everyone expects others to follow some moral codes, even those who try to deny them. But moral law is an undeniable fact.
Argument from being
A fourth argument attempts to prove that God must exist by definition. It says that once we get an idea of what God is, that idea necessarily involves existence. There are several forms of this argument, but let’s just talk about the idea of God as a perfect Being.
1. Whatever perfection can be attributed to the most perfect Being possible (conceivable) must be attributed to it (otherwise it would not be the most perfect being possible).
2. Necessary existence is a perfection which can be attributed to the most perfect Being.
3. Therefore, necessary existence must be attributed to the most perfect Being.
History of the Argument from Being
When God revealed His name to Moses, He said, “I AM THAT I AM,” making it clear that existence is His chief attribute (Ex. 3:14, kjv). The eleventh-century monk Anselm of Canterbury used this idea to formulate a proof for God’s existence from the very idea of God, without having to look at the evidence in Creation. Anselm referred to it as a “proof from prayer” because he thought of it while meditating on the idea of a perfect Being; hence, the name of the treatise where it is found is the Monologion, meaning a one-way prayer. In another of his writings, the Proslogion, he dialogues with God about nature and develops an argument from Creation also. In modern philosophy, the argument from being is found in the writings of Descartes, Spinoza, Leibniz, and Hartshorne.
To answer the first question, necessary existence means that something exists and cannot not exist. When we say this of God, it means that it is impossible for Him not to exist. This is the most perfect kind of existence because it can’t go away.
Now this argument succeeds in showing that our idea of God must include necessary existence; but it fails to show that God actually exists. It shows that we must think of God as existing necessarily; but it does not prove that He must necessarily exist. This is an equivocation that has confused many people, so don’t feel stupid for having trouble with it. The problem is that it only talks about the way we think of God, not whether or not He really exists. It might be restated this way:
1. If God exists, we conceive of Him as a necessary Being.
2. By definition, a necessary Being must exist and cannot not exist.
3. Therefore, if God exists, then He must exist and cannot not exist.
All Roads Lead to a Cause
We have seen that all of the traditional arguments ultimately rest on the idea of causality. The argument from being needs the confirmation that something exists in which perfection and being is found. The argument from design implies that the design was caused. Likewise, morality, justice, and truth as principles of an argument all assume that there is some cause for these things. This leads us back to the argument from Creation as the basic argument which proves God’s existence, for as one student said, it is the “causemological” argument.
It is like saying: if there are triangles, then they must have three sides. Of course, there may not be any triangles. You see, the argument never really gets past that initial “if.” It never gets around to proving the big question that it claims to answer. The only way to make it prove that God exists is to smuggle in the argument from Creation. It can be useful, though, because it shows that, if there is a God, He exists in a necessary way. That makes this idea of God different from some other ways to conceive of Him, as we will see later.
Now for the $64,000 Question: If all these arguments have some validity but rely on the principle of causality, what is the best way to prove that God exists? If you answer, “The argument from Creation,” you are on the right track. But what if we can combine all of these arguments into a cohesive whole that proves what kind of being God is as well as His existence? That is what we will do in the following pages.
Geisler, N. L., & Brooks, R. M. (1990). When skeptics ask (pp. 14–26). Wheaton, IL: Victor Books. | <urn:uuid:0a875d02-7437-4af6-b662-f74a343d5ccf> | CC-MAIN-2021-21 | https://truth4freedom.wordpress.com/category/questions-biblical-answers/god-questions/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00135.warc.gz | en | 0.960627 | 4,647 | 3.53125 | 4 |
Seventh-day Adventists believe that the spiritual gifts such as "speaking in tongues" are used to communicate the truth to other people from differing languages, and are skeptical of tongues as practiced by charismatic and Pentecostal Christians today.
A spiritual gift or charism is an endowment or extraordinary power given by the Holy Spirit. These are the supernatural graces which individual Christians need to fulfill the mission of the Church. In the narrowest sense, it is a theological term for the extraordinary graces given to individual Christians for the good of others and is distinguished from the graces given for personal sanctification, such as the Seven Gifts of the Holy Spirit and the fruit of the Holy Spirit.
The charismatic movement is the international trend of historically mainstream Christian congregations adopting beliefs and practices similar to Pentecostalism. Fundamental to the movement is the use of spiritual gifts (charismata). Among mainline Protestants, the movement began around 1960. Among Roman Catholics, it originated around 1967.
Pentecostalism or Classical Pentecostalism is a Protestant Christian movement that emphasises direct personal experience of God through baptism with the Holy Spirit. The term Pentecostal is derived from Pentecost, the Greek name for the Jewish Feast of Weeks. For Christians, this event commemorates the descent of the Holy Spirit upon the followers of Jesus Christ, as described in the second chapter of the Acts of the Apostles.
Belief "17. Spiritual Gifts and Ministries" of the official 28 Fundamental Beliefs of Adventists affirms that spiritual gifts do continue into the present. While the gift of tongues or "glossolalia" is not specifically mentioned, Adventists most often limit it to the ability to speak unlearned human languages, or "xenoglossy", and have generally rejected the form of tongues practised by many charismatic and Pentecostal Christians, described as ecstatic speech or a "personal prayer language".
Continuationism is a Christian theological belief that the gifts of the Holy Spirit have continued to the present age, specifically those sometimes called "sign gifts", such as tongues and prophecy. Continuationism is the opposite of cessationism.
Glossolalia or speaking in tongues is a phenomenon in which people speak in languages unknown to them. One definition used by linguists is the fluid vocalizing of speech-like syllables that lack any readily comprehended meaning, in some cases as part of religious practice in which it is believed to be a divine language unknown to the speaker. "Orawashia dela sende", for example, is one of the many variations of words that can occur when a person is experiencing glossolalia. Glossolalia is practiced in Pentecostal and charismatic Christianity as well as in other religions.
Xenoglossy, also written xenoglossia, sometimes also known as xenolalia, is the putative paranormal phenomenon in which a person is able to speak or write a language he or she could not have acquired by natural means. The words derive from Greek ξένος (xenos), "foreigner" and γλῶσσα (glōssa), "tongue" or "language". The term xenoglossy was ostensibly coined by French parapsychologist Charles Richet in 1905. Stories of xenoglossy are found in the Bible, and contemporary claims of xenoglossy have been made by parapsychologists and reincarnation researchers such as Ian Stevenson. There is no scientific evidence that xenoglossy is an actual phenomenon.
Supporting this position is Gerhard Hasel, who believed the practice refers to unknown human languages only, and not to angelic languages or ecstatic speech. His document has been frequently cited by Adventists. The Handbook of Seventh-day Adventist Theology takes the position that speaking in tongues refers to "previously unlearned human languages" (xenoglossy), using the experience on the day of Pentecost in Acts 2 as the "criterion" for later interpretation. David Asscherick also believes tongues are xenoglossy only.
Gerhard Franz Hasel (1935–1994) was a Seventh-day Adventist theologian, and Professor of Old Testament and Biblical Theology as well as Dean of the Seventh-day Adventist Theological Seminary at Andrews University.
David Asscherick is the co-founder of ARISE. David currently pastors the Kingscliff Seventh-day Adventist church in Chinderah, New South Wales, Australia. He is the former pastor of the Troy Seventh-day Adventist Church in Troy, Michigan. In 2011, ARISE merged with Light Bearers and David became co-director of Light Bearers. He has been featured on 3ABN and Hope Channel and has been a regular presenter at the annual Generation of Youth for Christ conferences.
Ellen G. White wrote concerning this issue. She stated:
Ellen Gould White was an author and an American Christian pioneer. Along with other Sabbatarian Adventist leaders such as Joseph Bates and her husband James White, she was instrumental within a small group of early Adventists who formed what became known as the Seventh-day Adventist Church. The Smithsonian magazine named Ellen G. White among the "100 Most Significant Americans of All Time."
Some of these persons have exercises which they call gifts and say that the Lord has placed them in the church. They have an unmeaning gibberish which they call the unknown tongue, which is unknown not only by man but by the Lord and all heaven. Such gifts are manufactured by men and women, aided by the great deceiver. Fanaticism, false excitement, false talking in tongues, and noisy exercises have been considered gifts which God has placed in the church. Some have been deceived here.— Testimonies for the Church Vol 1, p. 412
She also stated... "They give themselves up to wild, excitable feelings and make unintelligible sounds which they call the gift of tongues, and a certain class seem to be charmed with these strange manifestations. A strange spirit rules with this class, which would bear down and run over anyone who would reprove them. God's Spirit is not in the work and does not attend such workmen. They have another spirit."
The first counterfeit instance in regards to a doctrine issue occurred in 1848. James White recorded the incident writing "There has been some division as to the time of beginning the Sabbath. Some commenced at sundown. Most, however, at 6 P.M. A week ago Sabbath we made this a subject of prayer. The Holy Ghost came down, Brother Chamberlain was filled with the power. In this state he cried out in an unknown tongue. The interpretation followed which was this: 'Give me the chalk, Give me the chalk.' Well, thought I, if there is none in the house then I shall doubt this, but in a moment a brother took down a good piece of chalk. Brother Chamberlain took it and in the power he drew a figure on the floor."
James Springer White, also known as Elder White, was a co-founder of the Seventh-day Adventist Church and husband of Ellen G. White. In 1849 he started the first Sabbatarian Adventist periodical entitled "The Present Truth", in 1855 he relocated the fledgling center of the movement to Battle Creek, Michigan, and in 1863 played a pivotal role in the formal organization of the denomination. He later played a major role in the development of the Adventist educational structure beginning in 1874 with the formation of Battle Creek College.
Brother Chamberlain then gave his own interpretation to his unknown tongue and the drawing...
This represents Jesus' words, 'Are there not twelve hours in the day?' This figure represents the day or the last half of the day. Daylight is half gone when the sun is south or halfway from each horizon, at 12 o'clock. Now go each way six hours and you will get the twelve-hour day. At any time of year the day ends at 6 P.M. Here is where the Sabbath begins at 6 P.M. Satan would get us from this time. But let us stand fast in the Sabbath as God has given it to us and Brother Bates.— Brother Chamberlain's words as recorded by James White - Letter to "My Dear Brother," July 2, 1848, written from Berlin, Connecticut.
This experience carried weight with the believers, and they continued to observe the beginning of the Sabbath at six o'clock. Later, through a study of the Bible, this incident was recognized as a counterfeit manifestation of the gift of tongues. In the summer of 1855, James White urged J. N. Andrews to investigate the Sabbath commencement issue. After several weeks of a "careful investigation of the Scriptures, (he) demonstrated from nine texts in the Old Testament and two texts in the New that the Sabbath began at sundown. Andrews' conclusions were read at the conference in Battle Creek, November, 1855, and, from the scriptural evidence set forth, those present accepted the responsibility of shifting from six o'clock to sundown as the time to begin the Sabbath."
There are four documented cases of people claiming to speak in tongues in the early history of the Adventist church, according to Arthur White.
Arthur White states "There is no record of Ellen White's giving explicit support to, or placing her endorsement upon, these ecstatic experiences with unknown tongues, although she was an eyewitness to three of the four."
There have also been other counterfeit claims. In June 1853, on her trip to Vergennes, Michigan, Ellen White rebuked a certain "Mrs. A." who "professes to talk with tongues, but she is deceived. She does not talk the language she claims to speak. In fact, she does not talk any language. If all the nations of the earth were together, and should hear her talk, no one of them would know what she says; for she merely goes over a lot of meaningless gibberish." The woman claimed to speak the local Native American language.
"At a meeting she held the next day, this woman spoke on the subject of holiness, and during her talk broke out again in the unknown tongue. An Indian who had been invited to come in to hear her speak his language jumped to his feet, declaring: "Very bad Indian that! Very bad Indian that!" When asked what the woman said, he declared: "Nothing; she talk no Indian."
"A few days later, in the presence of an Indian interpreter who knew 17 of the languages, she spoke and prayed in her gibberish, and he declared that she had not uttered a single Indian word. Her influence was short-lived, not only because of this experience, but because of the disclosure (from one of Ellen White's visions) that the man with whom she traveled and lived was not her husband. This in time was confessed."
Ralph Mackin and his wife claimed to experience gifts of the Holy Spirit such as prophecy, speaking in tongues, and even casting out demons. At an Adventist camp meeting in Mansfield, Ohio, they claimed the gift of tongues as the result of a vision, with Ralph speaking Chinese and his wife Yiddish. Ellen White was cautious if not skeptical, and ultimately rebuked their testimony, stating...
"I was shown that it was not the Spirit of the Lord that was inspiring Brother and Sister Mackin, but the same spirit of fanaticism that is ever seeking entrance into the remnant church. Their application of Scripture of their peculiar exercises is Scripture misapplied. The work of declaring persons possessed of the devil, and then praying with them and pretending to cast out the evil spirits, is fanaticism which will bring into disrepute any church which sanctions such work."
She continues to state...
I was shown that we must give no encouragement to these demonstrations, but must guard the people with a decided testimony against that which would bring a stain upon the name of Seventh-day Adventists, and destroy the confidence of the people in the message of truth which they must bear to the world.— Selected Messages book 2, chapter 4
Pentecostal-turned-Adventist E. C. Card says he gave up speaking in tongues. Howard Blum shared his perspective.
One website article, part 2, "A False Concept Of the Son", claims that Demos Shakarian (1913–1993) and the FGBMFI held a meeting to distribute their Voice magazine to Adventist workers. It mentions Adventists Bill Loveless and Dr. Lowe. This was viewed with concern by Adventists, as one editor stated, "Already we have lost members to the delusions of this phenomenon. Some have been young people."
In 2007, Australian administrator Gilbert Cangy reported receiving the gift of unlearned human languages (xenoglossy), when in the Vanuatuan island Ambrym, local Bislama speakers understood his English presentations.
The 1991 National Church Life Survey in Australia found that approximately 5% of Australian Adventists approve of and/or speak in tongues, whereas 11% have no opinion and approximately 85% disapprove. This was the highest disapproval rating amongst all denominations surveyed.
The Seventh-day Adventist Church is a Protestant Christian denomination which is distinguished by its observance of Saturday, the seventh day of the week in Christian and Jewish calendars, as the Sabbath, and its emphasis on the imminent Second Coming (advent) of Jesus Christ. The denomination grew out of the Millerite movement in the United States during the mid-19th century and it was formally established in 1863. Among its founders was Ellen G. White, whose extensive writings are still held in high regard by the church.
The Seventh Day Adventist Reform Movement is a Protestant Christian denomination in the Sabbatarian Adventist movement that formed from a schism in the European Seventh-day Adventist Church during World War I, over the position its European church leaders took on Sabbath observance and on committing Adventists to bearing arms in military service for Imperial Germany.
The Seventh-day Adventist Church had its roots in the Millerite movement of the 1830s and 1840s, during the period of the Second Great Awakening, and was officially founded in 1863. Prominent figures in the early church included Hiram Edson, James Springer White, Joseph Bates, and J. N. Andrews. Over the ensuing decades the church expanded from its original base in New England to become an international organization. Significant developments, such as the reviews initiated by evangelicals Donald Barnhouse and Walter Martin in the 20th century, led to its recognition as a Christian denomination.
John Norton Loughborough was an early Seventh-day Adventist minister.
Seventh-day Adventists believe church co-founder Ellen G. White (1827–1915) was inspired by God as a prophet, today understood as a manifestation of the New Testament "gift of prophecy," as described in the official beliefs of the church. Her works are officially considered to hold a secondary role to the Bible, but in practice there is wide variation among Adventists as to exactly how much authority should be attributed to her writings. With understanding she claimed was received in visions, White made administrative decisions and gave personal messages of encouragement or rebuke to church members. Seventh-day Adventists believe that only the Bible is sufficient for forming doctrines and beliefs, a position Ellen White supported by statements inclusive of, "the Bible, and the Bible alone, is our rule of faith".
The 1888 Minneapolis General Conference Session was a meeting of the General Conference of Seventh-day Adventists held in Minneapolis, Minnesota, in October 1888. It is regarded as a landmark event in the history of the Seventh-day Adventist Church. Key participants were Alonzo T. Jones and Ellet J. Waggoner, who presented a message on justification supported by Ellen G. White, but resisted by leaders such as G. I. Butler, Uriah Smith and others. The session discussed crucial theological issues such as the meaning of "righteousness by faith", the nature of the Godhead, the relationship between law and grace, and Justification and its relationship to Sanctification.
Edward E. Heppenstall was a leading Bible scholar and theologian of the Seventh-day Adventist Church. A 1985 questionnaire of North American Adventist lecturers revealed Heppenstall was the Adventist writer who had most influenced them.
Charismatic Adventists are a segment of the Seventh-day Adventist Church that is closely related to "Progressive Adventism", a liberal movement within the church.
Seventh-day Adventists believe that Ellen G. White, one of the church's co-founders, was a prophetess, understood today as an expression of the New Testament spiritual gift of prophecy.
The Adventist Church of Promise is an evangelical Christian denomination which is both Sabbatarian Adventist and classical Pentecostal in its doctrine and worship. It was founded in Brazil in 1932 by pastor John August Silveira, as a split-off from the Seventh-day Adventist Church.
Ralph Mackin and his wife were a Seventh-day Adventist couple from Ohio, United States. They claimed to experience gifts of the Holy Spirit such as prophecy, speaking in tongues, and even casting out demons. They caused a stir at a local Adventist camp meeting, which was reported in the local newspaper. They later sought the counsel of Ellen G. White, whom Adventists believe had the gift of prophecy. White was initially cautious regarding their experiences, and later came out opposed to them. After leading some meetings at an Adventist church for a time, they passed from prominence in the church.
Angelic tongues are the languages supposedly used by angels. It usually refers to sung praise in Second Temple period Jewish materials.
The Seventh-day Adventist Church pioneers were members of Seventh-day Adventist Church, part of the group of Millerites, who came together after the Great Disappointment across the United States and formed the Seventh-day Adventist Church. In 1860, the pioneers of the fledgling movement settled on the name, Seventh-day Adventist, representative of the church's distinguishing beliefs. Three years later, on May 21, 1863, the General Conference of Seventh-day Adventists was formed and the movement became an official organization.
The Pillars of Adventism are landmark doctrines for Seventh-day Adventists; Bible doctrines that define who they are as a people of faith; doctrines that are "non-negotiables" in Adventist theology. The Seventh-day Adventist church teaches that these Pillars are needed to prepare the world for the second coming of Jesus Christ, and sees them as a central part of its own mission. Adventists teach that the Seventh-day Adventist Church doctrines were both a continuation of the reformation started in the 16th century and a movement of the end time rising from the Millerites, bringing God's final messages and warnings to a world.
Merritt Gardner Kellogg was a Seventh-day Adventist (SDA) carpenter, missionary, pastor and doctor who worked in the South Pacific and in Australia. He designed and built several medical facilities. Kellogg was involved over the controversy about which day should be observed as the Sabbath on Tonga, which lies east of the 180° meridian but west of the International Date Line.
Merritt E. Cornell was an energetic Seventh-day Adventist minister, best known as an early believer in the advent teaching, the Sabbath, and the Three Angels' Message, to whose preaching he dedicated his life. He, along with Joseph Bates and Joseph H. Waggoner, served on the committee reviewing spiritual gifts for the 1855 Seventh-day Adventist conference at Battle Creek, which became a significant factor in the acceptance of the prophetic gift of Ellen White.
Midwives as the Quintessential Barefoot Doctors
© Jim Berg, MD, 1999
POEM: Thank You To The Midwives
Barefoot Doctoring is the grassroots approach to the healing arts that people use to help heal themselves, friends, family, and community. It is a lay or professional person’s endeavor to be responsible for their own health and the health of those in their sphere of influence. The phrase “Barefoot Doctor” was popularized in the mid-1900s by the People’s Republic of China, which trained lay people in the healing arts in regions where medical care was not available. The Chinese government supported local folk healers and customs, synthesizing them with modern scientific medical skills. Farmers and field workers were taught effective hygienic and sanitation practices, disease prevention strategies, and the basics of medical diagnostics and therapeutics. Exercise and nutrition were taught, as were first aid, childbirthing, primary medical care, herbalism, and acupuncture. Today, Barefoot Doctoring is popular in countries the world over, and refers to the concept of people helping people to heal.
[see poem: The Barefoot Doctor]
Barefoot Doctoring has emerged from very ancient roots, for it has been around since the very first person attempted to help another. Indeed, any attempt to enhance the quality of our lives is a form of Barefoot Doctoring. No material license is needed for this, nor any degree or certification; and no governmental intervention is necessary to regulate or register Barefoot Doctors, for it is a natural tendency to have compassion for those in despair, and a natural right to attempt to comfort them. Wisdom and skill are the only necessary certificates, and consent the only license needed to engage in this sacred art. Barefoot Doctoring is a covenant between two individuals who endeavor on the path of healing, a path guided by respect, nourished by compassion, and protected by integrity. Most importantly, a Barefoot Doctor combines the intention of love with whatever skill and wisdom they have. More than a degree, profession, or license, it is a common vow of honor in the healing arts, respecting the hopes, rights, and needs of those seeking a healing.
Most Barefoot Doctors tend to specialize according to their own personal interests and the needs of the community. Some become herbalists, others bodyworkers; some are midwives, others teach yoga; some are medical doctors or nurses, others are bush doctors or shamans. Many Barefoot Doctors take a more wholistic approach, combining many types of healing arts into their own unique blend, and attempt to meet whatever needs arise in their community. Some practitioners do Barefoot Doctoring professionally, and others as a hobby. Some have gone through years of schooling, some have done apprenticeships; others are self-taught. Some barefoot doctors are scientific, while others are more intuitive; some are more conventional, following protocol and modern standards of care, while others are more unconventional, doing “whatever it takes” to help another on their path. What defines a Barefoot Doctor is the intention to use knowledge appropriately, while attempting to move life toward a higher quality of existence. For the most part, midwives have been the Barefoot Doctors, for they have proven their honorable intentions and skill since the most ancient of days.
Even a brief glimpse into the history of midwifery will show that midwives are the quintessential barefoot doctors. Clearly, the intention and care a society gives to the most precious of all human endeavors, childbearing, is a clear reflection of the healthcare that society is capable and willing to provide. Midwives were certainly present in ancient days, as is evident in the Bible and early Greek and Roman literature. But like all barefoot doctoring arts, these arts were the arts of the people and not necessarily of academia. And just as written history reflects the will of churches and kings and ruling powers, but neglects the history of the people, the history of midwifery and barefoot doctoring lives mostly in the wisdom and tradition of those practicing today. As these women, like most barefoot doctors, were not academics, few written histories remain.
Like all barefoot doctoring arts, midwifery lives as one person attempting to support another during a time of need. Until recently, it had little formal organization, and most of the training was passed down by example or word of mouth. As cultures grew more complex, apprenticeships were developed to ensure that the wisdom was transmitted more systematically. Yet as these healers, like most humans on the planet, were usually peasant-class people with little financial means, these arts were passed on with little fame or fortune, known only to those lucky enough to bear direct witness to their wisdom.
In more primitive cultures, where there is less class differentiation, midwives and barefoot doctors often keep society’s respect as wise elders. But as class distinctions become more obvious, those in power tend to want to control healthcare and midwifery. History shows a distinct and clear conspiracy by political and religious forces to oppress both barefoot doctors and midwives, making them martyrs of the freedom to heal.
In the Dark Ages, midwives and barefoot doctors provided care for the vast majority of peoples on this planet. The oppression that befell them is a reflection of both the sex and class struggles that pervaded society through the millennia. And though barefoot doctoring and midwifery may have been kept alive in the witches’ covens of the past, these so-called “witches” were often simply those seeking freedom as healers and midwives, an inalienable right; those in power, usually the church and state, feared them as instigators of civil unrest.
The witch-hunts were calculated schemes to oppress these freedom seekers; the Church relied on pre-established doctrines that dictated how and why to live and heal. Midwives and barefoot doctors were often free-thinkers and empiricists who used their own conscience and traditions to lend their wisdom. Thus, these “witches” were accused of sex crimes, religious impiety, collusion with the devil, and possession of magical powers, and were tortured and slaughtered by the thousands. As political forces eventually subdued those of the church, the men in power continued to oppress barefoot doctors and midwives by requiring that these healers seek academic training and, eventually, licensure by the state to practice their art.
In certain parts of Europe, midwifery became respected as a necessary art and continued, although with the imposition of strict academic standards. In America, the art of midwifery, like barefoot doctoring, was systematically oppressed by a ruling class of men who influenced the state. The American Association of Obstetricians and Gynecologists was established in 1888. The male-dominated obstetricians convinced the American public that the science of childbirth was too complicated a specialty for “lay people” to continue to provide. Women, they claimed, were not intellectually capable of utilizing the modern technology needed for childbirth. Childbirth was treated as a disease to be managed with instruments, drugs, and surgery. Certainly, midwives took business away from obstetricians and served women where they could not be observed clinically, and thus seemed a detriment to the advancement of science. While some American obstetricians pointed to the efficacy of midwifery in Europe and throughout history, the majority of obstetricians, like most doctors, sought to protect their economic territory by politically capturing birthing and healing as exclusively their own right and privilege.
In the early part of this century, foundation money began to support the idea that medical schools ought to conform to the Johns Hopkins Germanic model of medical education. The doctors’ exclusive right to practice medicine was consolidated when, upon the urging of the American Medical Association, the Carnegie Foundation sent Abraham Flexner to evaluate medical schools around the nation. His report, published in 1910, effectively diverted financial support from the smaller medical schools that supported the education of blacks, women, and natural healers. In 1910, when approximately 50 percent of all births were attended by midwives, new licensing laws began to be established dictating that medicine be practiced by medical doctors trained at certified medical institutions, as suggested by the Flexner Report. Those practicing midwifery or healing without a license were persecuted as criminals.
By 1916, Census Bureau statistics showed rising death rates among women and babies, clearly showing that the newly developed medical model was inferior to models that still included midwifery. Nevertheless, despite these and many other revealing statistics, the conspiracy against midwifery, and all barefoot doctoring, has continued to the present. Barefoot doctors and midwives were arrested, harassed, and threatened into near oblivion. It was a dark point for the ancient wisdom as it dwindled to a small and fearful flame. By 1953, the rate of midwife-attended births was down to only three percent. No longer perceiving midwifery as a threat, the propaganda against the ancient art slowed. American women now relied on medical doctors to treat their childbirth as if it were a pathological, not natural, process.
Women’s loss of faith in birthing and healing as natural processes was accompanied by the loss of the ancient wisdom that had been handed down since time immemorial. Midwives stopped training midwives, as the art dwindled to a point probably unknown since the earliest history of humanity. Yet, like a light eternally shining, midwifery, as a barefoot doctoring art, kept the wisdom alive.
Modern obstetrics exemplifies the blessings and curses of modern medicine. It helps save lives and prevents unwanted pregnancy; it is a lucrative profession and is guardian of the “standard of care”. Obstetrics can also be extremely uncomfortable, expensive, and invasive. Obstetrics, like most of medicine, seeks to apply techniques rather than respect the inherent healing and birthing capacity of human beings. The excellent in the profession seek to educate their clients on healthier ways. The less than excellent demand that patients comply with protocol, appearing rude, indignant, and self-righteous. Midwifery, like barefoot doctoring, has evolved to demand that respect accompany the skill. Coming into the modern era, midwives now seek the freedom to practice their art. The pursuit of this freedom exemplifies the barefoot doctor’s attempt to tie freedom to responsibility and skill to respect.
In the 1960s and 70s, a movement to reform childbirth emerged in the form of small gatherings and study groups. Women rallied and demanded the right to care for themselves in the way that they preferred, and returned to having homebirths.
By this time, most births were done within the hospital. In some places, C-section rates were greater than 50%. Forceps, episiotomy, and induction of labor were the standards of care, misleading most of the population into believing that the hospital was the only responsible place to have a birth. But, through the faith that birth is a natural process requiring intervention only occasionally, midwives sought to bring birth back into the comfort of the home.
New laws were passed, allowing women to have their births at home as long as it was with a “qualified midwife”. Today the credentials of a “midwife” vary from state to state but usually mean one of four types of midwife: 1) lay midwife, 2) direct-entry midwife, 3) certified professional midwife, or 4) certified nurse-midwife. From a legalistic standpoint, a qualified midwife today is one who meets the requirements for licensure, and this can include any of these types except a true “lay midwife”. From a Barefoot Doctor’s point of view, a qualified midwife is one who has honor and skill in helping women give birth. Most states don’t recognize these qualifications without a more formal education and licensure procedure.
A so-called “lay midwife” has usually apprenticed with an experienced midwife and focuses on homebirth. She may also have learned her art by direct observation and experience, from friends, family, neighbors, traditions, faith, and divine inspiration. They are called “lay midwives” by those who consider them to have little or no academic or formal training. By definition, they are not licensed midwives and usually practice illegally. Examples include granny midwives, church midwives, traditional birth attendants, the “parteras” who serve Latina women in the American Southwest, and the many nameless ones who informally help others have safe out-of-hospital births.
A “direct-entry midwife” provides care to women during the prenatal, birthing, and postpartum periods. She may receive academic training from a midwifery school and has done an apprenticeship for appropriate clinical training. In Europe, the term “direct-entry midwife” refers to those who attended a midwifery school that was not a nursing school. In the USA, the term tends to refer to those who used multiple routes of entry to gain the core competencies needed to become a responsible midwife, without necessarily becoming a nurse first. Many states have licensure, or at least some form of certification, for direct-entry midwives. The bulk of their practice is homebirths. Today, in the USA, less than 0.5% of all births are homebirths, and most of these are attended by direct-entry midwives.
A certified nurse-midwife (CNM) is a Registered Nurse who has furthered her studies to earn a degree in midwifery. To practice as a CNM, a nurse must attend an accredited nurse-midwifery education program, pass a national certification exam, and meet the requirements of either the American College of Nurse-Midwives or the American College of Nurse-Midwives Certification Council. Because physician backup is required, the nurse-midwife usually works in the hospital, in an obstetrician’s office, or in a birthing clinic. About 96% of CNM-attended births are in a hospital, 3% in a freestanding birthing center, and only 1% are done at home. Since CNMs are closely allied with the rigorous academic standards of the medical establishment, they are the midwives with the most power and numbers, and thus often the ones who get hospital privileges, physician referrals, and their services covered by third-party payers.
A certified professional midwife (CPM) can be either a nurse-midwife or a direct-entry midwife who has received certification from NARM (North American Registry of Midwives). Multiple routes of entry are encouraged, and after documenting proof of training and experience, candidates pass extensive written and practical exams that allow them to be licensed if their state recognizes these standards. The CPM credential validates multiple routes of entry into midwifery, respecting apprenticeship, schooling, preceptorship, hospital training, and self-study as appropriate. NARM urges a “competency-based education” (CBE) where practical skills can be demonstrated as proof of completion of “core competencies”. The certification process requires clear documentation of practical experience, as outlined in a “practical skills checklist”.
Most states strictly forbid the practice of lay midwifery. Some form of licensure is usually required: a person may not professionally help another person with their birth unless blessed by the state licensing board. Recognizing that total freedom of delivery is as impractical as total freedom of healing, compromise had to come by convincing lawmakers to allow for qualified midwives who meet a standard of excellence as defined by each particular state. Most states these days require some college-level course material such as biology, anatomy and physiology, nutrition, and psychology, as well as a certified midwifery course, a certain minimum number of attended births, and the passing of a written exam. A similar process has happened to many, if not most, of the other barefoot doctoring arts, such as massage, naturopathy, nutrition, acupuncture, and physical therapy. All of these must meet state requirements, if they are allowed to practice at all. This has greatly impinged on our personal right to practice our healing as we see fit; but it is a working compromise against the overwhelming oppression of the conspiracy to forbid all except doctors to practice healing.
Most of the other healing arts have given in and followed the tyrannical path of medicine, requiring formal academic schooling, hospital-based training, and homage to the doctor as boss of the field. But direct-entry midwifery, as it is now evolving, is seeking a different compromise.
Midwives have always understood the value of gathering together to keep their sacred knowledge protected. In the United States, this effort was consolidated into MANA (the Midwives Alliance of North America). Leaders and wise elders of midwifery met and churned out of their hearts a set of core competencies that further define excellence in quality childbirthing. MANA welcomes diversity, yet believes that midwives can honor multiple routes of training as proof of one’s fundamental mastery of the core competencies. This allows someone choosing a midwife and homebirthing to know that a group of wise elders believes this person has learned what they need to learn to be helpful and responsible at births as a midwife.
Most midwives believe that certification as verification of completion of the core competencies (currently defined by MANA) and NARM skills is enough for a midwife to responsibly practice her art. Unfortunately, a lot more is involved than that. Giving in to a variety of political pressures, states have put restrictions on midwives that vary tremendously. The arbitrariness of the licensure laws leads us to conclude that the government is confused as to the standards of midwifery, and thus prone to the more conservative political pressures of the established medical community. Midwives have to comply with standards that vary from state to state, and sometimes with no standards at all. All the states could show us their wisdom by adopting a core competency practical skills checklist as demonstrated by NARM’s certification process for certified professional midwives. As this has not happened yet, most midwives continue to practice either illegally or by finally buckling down and meeting the licensure requirements of each state in which they practice. This proves to be expensive and often too academic to be practical. MEAC (the Midwifery Education Accreditation Council) was started in 1991 to provide educational standards and to evaluate programs doing midwifery education. It accredits schools that comply with its standards as defined by MANA and NARM. Those who graduate from MEAC-accredited schools are eligible to sit for the NARM exam leading to certification as a CPM (certified professional midwife). MEAC also respects multiple routes of entry into midwifery, which include apprenticeship, at-a-distance learning, certification programs, degree programs, programs within institutions, and private institutions.
Like all certification processes these days, the expense and hassle of the paperwork keeps many individuals from fulfilling their dreams, and keeps many an institution from being created to help people learn how to birth responsibly. MANA, MEAC, and NARM are the fruits of some of humanity’s deepest midwives. A lot of deep thought goes into the development of educational standards, accreditation procedures, and agencies. Yet even this does not grant each individual the right to practice her art without the bureaucratic and financial nightmare of having to meet standards, even reasonable standards, as dictated by the states. The honor of helping with births ultimately has to remain with each individual, who must answer to her own inner voice of wisdom.
Defining the honor of midwifery is thus challenging, as it has no true standard but the inner voice of righteousness that says, “Yes, this is right”. When it comes to the qualifications of becoming a worthy midwife, there are certain evolutionary steps that excellent midwives take, proving their success in their art as well as their skill and wisdom. They usually start with a shimmer of a vision into the profundity of the art. One day she thinks, “I will be a midwife”. For years, even decades sometimes, this may remain only a vision. But as a person matures and applies herself, she begins to study the aspects of midwifery of interest to her.
Some choose to study formally at an institution or school. Some seek out midwives and help out around births, first just as a witness, then as an attendant, next as an assistant, and sooner or later, if the bonds are good, the student becomes an apprentice. The goal of this student phase is to consolidate the knowledge necessary to understand the natural processes of pregnancy and delivery. MANA has a set of core competencies in the areas of antepartum, intrapartum, postpartum, neonatal, and well-woman care. The student’s job is to begin the acquisition of the knowledge base needed to successfully apply the skills of midwifery.
Many students go to school, some do home study, some take workshops, some do on-the-job training. The most important thing is that the acquisition of the knowledge occurs, and that the student becomes enthusiastic to apply it.
The discipline of acquiring the skills of midwifery marks the beginning of the apprenticeship phase. Some people apprentice with only one teacher; others seek the skill by apprenticing with many. Some do more formal apprenticeships in hospitals or at birthing clinics. Some attend homebirths. The goal of the apprenticeship phase is the acquisition of wisdom, that is, the ability to skillfully apply the knowledge of midwifery in practice. MANA has a list of core competency skills, and NARM has developed a practical skills checklist. MEAC requires that all people graduating from a MEAC-accredited school have acquired the skills within the MANA core competencies to become a CPM, and one must prove these skills through the practical skills checklist.
Finishing even an accredited school does not necessarily guarantee the wisdom to effectively handle a complex midwifery practice. Passing required state licensure guarantees even less. Honor dictates that the apprentice obtain the blessing of her teachers, as well as the internal conviction that she is capable. This is a far more important qualification, but one much harder to measure for licensure. Different cultures and different traditions can dictate different sets of core competencies, but all cultures and traditions hope that apprentices are worthy in the eyes of their teachers and that they exude a sense of self-conviction.
Our society, lost in bureaucracy, litigation, and greed, has lost sight of the need to require honor as part of the core competency. If one takes the course, gets adequate grades, and attends a certain number of births, one is eligible for licensure. One can be a poor decision maker, sneaky, or overconfident, but this does not play into the equation. These moral and ethical standards are not explicitly tested for by the state or accrediting agencies. The best of all worlds produces a smart student and a skilled apprentice who develops into a midwife with wisdom and integrity. Lack of any of these ingredients is a compromise to all involved in a birthing process.
[see “Barefoot Doctors’ Code of Ethics”]
Upon gaining confidence and the necessary theoretical background, the student usually apprentices with one or more mature healers whom they respect as having manifested the wisdom in the healing art toward which they aspire. This phase of practical endeavor usually takes many years, as the apprentice gains confidence in the clinical skills that are necessary in private practice. Some prefer to train in universities, others seek a more private apprenticeship. Some stick with just one teacher, while others prefer to taste the wisdom of many. What is important is that they gain the necessary clinical skills and, just as important, a style of applying the knowledge and skills that is effective, kind, and reasonable.
During these years aspiring as a student and disciplining as an apprentice, a responsible Barefoot Doctor (as a midwife) in training also endeavors on a path of self-healing and community service. A Barefoot Doctor should first recognize his/her own life as sacred, and seek to prove that true healing is possible in one's own being. One's own life force is the one most immediately available, and thus the most accessible for proving one's wisdom and skill. This endeavor into self-healing allows a radiance to emerge from within the healer, a radiance of health and vitality that immediately overflows into those seeking healing. Failure on this path of self-care due to slothfulness, ignorance, or neglect implies a hypocrisy which obscures the integrity of a healer to those seeking assistance. A Barefoot Doctor in training should also apply whatever knowledge and wisdom they do have in community service. This service, done from the love in one's heart, is given freely, and sincerely shows that one's intention is good. Those who never truly serve another are not Barefoot Doctors, but rather healing mercenaries with selfish motivation. No matter how good their skill, worthy healers need love overflowing from their hands to show that they desire to respect and honor those who seek help.
This initial stage of aspiration, marked by an in-depth study into the art and science of healing, training in clinical skills, and a successful path of self-care and service, culminates when the teacher bestows their blessing onto the student, who feels themselves ready to practice on their own. This recognition may come in the form of a degree or certification, or as a simple nod of the head and a smile. This ‘christening’ signifies the initiation as a Barefoot Doctor. Reminiscent of a black belt in the martial arts, this first major initiation marks the move from aspiration to discipleship--the blessing to now pass down one’s art of healing and take on students and clients of one's own.
As the Barefoot Doctor continues on this path of discipleship, she matures into the next stage by successfully helping to heal people with her honor, skill, and wisdom. She begins to teach students about her particular art of healing and eventually takes on apprentices to train intimately. Her self-care techniques are well established as a healthy lifestyle, and her service is shown to the community over and over. Once her students become Barefoot Doctors themselves and begin to take on students of their own, this marks the transition to a second initiation as a Barefoot Doctor--the equivalent of a second-degree black belt.
The transition from Discipleship to Mastery begins at this stage. A Master has taken her art to a new level: she has successfully started schools, developed and perfected healing techniques, and inspired and helped many people on their path of healing or as healers. This stage of Masterhood is the culmination of a Barefoot Doctor, proving that the fruits of her wisdom have flourished. These stages are not ambitions or achievements, but reflections of the profundity of love and wisdom that a person can give in a lifetime. They are not awards, certifications, or degrees, but the fruits of honor.
Thus we see that the art of Barefoot Doctoring, as exemplified by midwifery, can go as deep as a human is capable. It can be the very instinctual compassionate urge to help someone in pain, or it can be the art of a Master who has spent a lifetime helping our species to better its quality of existence. Barefoot Doctoring can be a lay person’s hobby or a professional’s occupation. It can be a strategy for a specific healing art or for the very art of healing. By its very nature it seeks to express knowledge and skill (wisdom) with a loving intention to help others heal themselves and become educated as healers. Barefoot Doctoring is the way of graceful healing, the way of harmonizing with the forces of Nature---the kind, loving way of healing.
[See “The Honor of Midwifery”]
History has proven over and over that honor does not dictate the right to practice a healing art, for this is influenced by many factors; but honor does help us gain a vision of where the practice of the art should be. This vision has kept both healing and midwifery alive with the hope of freedom and responsibility in the delivery of quality care. The success of this care is proven in the statistics, which clearly show that low-risk home births attended by licensed direct-entry midwives have equivalent neonatal mortality rates and Apgar scores, fewer C-sections, and significantly fewer low-birth-weight births. Yet even licensed midwives face bias: they are denied hospital and managed care privileges, equivalent (if any) Medicaid and third-party reimbursement, and the right to participate in the healthcare system as equal and independent practitioners of their art. The Pew Health Professions Commission and the University of California San Francisco Center for Health Professions Taskforce on Midwifery have reviewed the current state of the art of midwifery and made some bold statements concerning its future. This 1999 Taskforce report, entitled Charting a Course for the 21st Century: The Future of Midwifery, is as significant a document for midwifery as the Flexner Report was nearly ninety years earlier. The difference is that the Flexner Report hindered the growth of midwifery, while the Taskforce’s report will lead midwifery into a new age of opportunities. Though its recommendations do not cover all the responsible and honorable midwives practicing today, it does give hope to those who have taken the effort to become licensed and “qualified”.
It encourages the US healthcare system to embrace midwifery by recommending that midwives be recognized as independent practitioners with the rights and responsibilities which all independent professionals share, and that the laws, rules and regulations regarding midwives reflect this non-discriminatory policy.
It is the finding and vision of the Taskforce that the midwifery model of care is an essential element of comprehensive health care for women and their families that should be embraced by, and incorporated into, the health care system and made available to all women. To realize this vision, a number of actions need to be taken. The Taskforce offers fourteen recommendations for educators, policy makers and professionals to consider. The Taskforce on Midwifery proposes these recommendations in the spirit of improving health care and hopes that the report will benefit women and their families through increased access to midwives and the midwifery model of care. The report should serve to inform managed care organizations, health care professionals and others who employ, collaborate with, and reimburse midwives about the midwifery model of care and its benefits. In addition, the authors hope to inform the profession of midwifery about the opportunities and challenges it faces in today's health care delivery environment. (Taken from: Joint Report of the Pew Health Professions Commission and the UCSF Center for the Health Professions, Charting a Course for the 21st Century: The Future of Midwifery, 1999.)
[See the “Pew Recommendations for the Future of Midwifery”]
Until our society recognizes the right of individuals to practice and seek help in the barefoot doctoring arts, and until practitioners have proven their honor and skill beyond doubt, the government has the obligation to regulate the healing arts to protect its citizens from neglect, fraud, abuse and lack of skill. Task forces like the Pew Commission, and wise elder peer groups like MANA, NARM, and MEAC, serve to balance the tendency toward over-regulation by offering guidelines for training and practice that are fair, deep, safe, and accessible. These organizations are shining examples to all the healing arts of the possibility of cooperative endeavor. The individual practitioner's own honor and conscience, tempered by these societal forces, are the hope that humanity will move toward more fulfilling healing and birthing experiences. The tension between freedom and regulation will go on, of this there is little doubt. Perhaps now, after eons of control issues, regulators and practitioners can set out on a path of cooperation that further aids qualified practitioners to serve in the most qualified of ways. This is how the art of Barefoot Doctoring is evolving through midwifery.
Jim Berg, M.D. has a private practice of natural family medicine and acupuncture in the New Orleans area. As a licensed medical doctor, he offers wholistic consultation in well-person care, pediatrics and internal medicine. His training with herbs, Chinese medicine, yoga, tai chi and Qigong, bodywork, foods and lifestyle strategies allows Jim to complement his general medical practice with other healing arts. Dr. Berg, with his wife Dee Anne Domnick, L.M., CPM, co-directs the non-profit Barefoot Doctors' Academy's School of Natural Medicine and College of Midwifery. The Academy exemplifies the way of integrity and skill of the Barefoot Doctor. Jim is also a Clinical Assistant Professor at Tulane University School of Medicine. He lectures internationally on many topics relating to wholistic, natural and complementary medicine, correlating his love for teaching with his love for healing. Of all the arts that thrill him, Jim most loves the art of Barefoot Doctoring, the art of caring for people. For workshop information, call: (504) 845-4247 or write: P.O. Box 276; Madisonville, Louisiana 70447-0276
Review Article | Open Access
Emanuele Bosi, Flavia Mascagni, "Less Is More: Genome Reduction and the Emergence of Cooperation—Implications into the Coevolution of Microbial Communities", International Journal of Genomics, vol. 2019, Article ID 2659175, 5 pages, 2019. https://doi.org/10.1155/2019/2659175
Less Is More: Genome Reduction and the Emergence of Cooperation—Implications into the Coevolution of Microbial Communities
Organisms change to adapt to the environment in which they live, evolving with coresiding individuals. Classic Darwinism postulates the primal importance of antagonistic interactions and selfishness as a major driver of evolution, promoting an increase of genomic and organism complexities. Recently, advancements in evolutionary ecology reshaped this notion, showing how leakiness in biological functions favours adaptive genome reduction, leading to the emergence of codependence patterns. Microbial communities are complex entities exerting a gargantuan influence on the environment and the biology of the eukaryotic hosts they are associated with. Nevertheless, we are still far from comprehending the ecological and evolutionary mechanisms governing community dynamics. Here, we review the implications of genome streamlining for the unfolding of codependence within microbial communities and how this translates to an understanding of the ecological patterns underlying the emerging properties of the community.
1. Introduction

In his 1862 book "Fertilisation of Orchids" , Charles Darwin postulated the coevolution of the orchid and fertilizing insects. Puzzled by the unusual length of the orchid Angraecum sesquipedale spur (around 30 cm long), Darwin predicted the existence of a pollinator moth with a proboscis nearly as long as the orchid spur ("…in Madagascar there must be moths with proboscides capable of extension to a length of between ten and eleven inches"). More than that, Darwin proposed a competition model to explain the emergence of such unusual features, according to which: (i) plants with longer spurs are more easily fertilized by moths, since the insects have to delve deep in the flower to reach the nectar, resulting in better pollination; (ii) insects with longer proboscides easily gather the plant's nectar with less energy expenditure, acquiring more nutrients at the expense of the plant's fertilization; (iii) plants with longer spurs are then positively selected; and (iv) the insects need longer proboscides to have an easy time feeding on the plant nectar. In other words, the outcome of the relationship established between these species is an arms race which favours individuals with increasingly long spurs/proboscides.
The concept that biotic interactions (such as the mutual competition reported above) are a major driver of evolution stands at the basis of the Red Queen (RQ) hypothesis . Named after a quote from Through the Looking-Glass, "It takes all the running you can do, to keep in the same place.", the original RQ is a macroevolutionary hypothesis proposing that coevolution of interacting species might account for the constant extinction rates observed in a number of taxa (as opposed to sudden extinctions caused by abiotic factors). On a microevolutionary level, the RQ hypothesis has been applied in the context of host-parasite interactions and in particular to explain the advantage of sexual reproduction [3, 4] over other reproductive strategies . Indeed, the host-parasite interaction is ubiquitous and largely influenced by genetics, leading to frequency-dependent selection of genotypes. Therefore, the genetic variability introduced by sexual reproduction would provide a substantial advantage, facilitating the generation of novel/rare genotypes able to cope with the parasite infection .
Antagonistic interactions are not the only force pushing coevolution. Cooperation is pervasively diffused, if not inevitable, in nature. Looking at the biochemistry of different organisms, from eukaryotic hosts to small microbes, there are a number of compounds which cannot be synthesized but must be gathered from external sources (such as diet or symbionts). For instance, bacteria living in the human gut are auxotrophic for different compounds which are acquired from the host, repaying it with vitamins (B1 and B12) and other metabolites having a positive impact on human health. The Black Queen (BQ) hypothesis has been recently proposed to explain the evolutionary dynamics leading to such dependency, which is tightly connected to the concept of "leakiness." In brief, a number of biological processes produce "leaky" goods that are available to other organisms. Therefore, genetic elements of these "beneficiaries" involved in such processes become dispensable and can be lost. Individuals undergoing such gene loss events will be advantaged and take over their population. The BQ interactions represent a force promoting an adaptive genome reduction, whose strength depends on a number of factors, including the overlap of ecological niches and the presence of other organisms contributing (or subtracting) these shared resources . The BQ has been applied to simple prokaryotic systems, making possible a validation through laboratory (co)evolution experiments .
The routes leading to genome reduction, or simplification, which can be adaptive and not merely the product of neutral gene loss , challenge the evolutionary view according to which life on earth is characterized by an increase of complexity over time. Although genome complexity does not necessarily scale with the (hard to define) organismal complexity , loss of genes in prokaryotes usually implies loss of functions. Therefore, this mode of evolution brings some functional constraints that must be fulfilled by the environment. In other words, the apparent conflict between evolutionary simplification and the "Zero Force Law of Evolution" , stating that unconstrained evolution leads to a monotonic increase in the average organismal complexity , is resolved by recognizing that the simplification of one organism is allowed by the complexity increase of its environment. Thus, the overall complexity of a biological system subject to reductive evolution increases, since it now requires specific ecological interactions.
In this review, we will describe the implications of genome reduction in biological systems defined by complex ecological interactions, including microbial communities and holobionts, which are the combination of eukaryotic hosts and their microbiota.
2. Cooperation Is Pivotal for the Stability of Microbial Communities
Although classic microbiology emphasized the use of pure cultures, in nature, microbes are part of complex communities in which they interact with each other. These ecological interactions include not only "selfish" relationships like predation or competition but also synergistic ones, such as syntrophy , protection against chemophysical stress [16, 17], and access to limited resources [18, 19]. Following the BQ nomenclature, functions whose products are (at least partially) shared with other organisms in the environment are called leaky or Black Queen Functions (BQF) . As mentioned previously, a decrease in the selective pressure on genes encoding such functions will favour genome streamlining in some organisms, which will begin to outgrow the microbes within the same population. In a homogeneous population, BQ states that loss-of-function (LOF) mutants will keep growing until an equilibrium between ancestral and mutant clones is reached, where they will compete for the same resources [9, 20]. Indeed, the mutant (beneficiary) will depend on the ancestral strain (helper) to complement the lost function, only if there are no other providers of the required good. In a real-life mixed microbial community, however, it is very likely that unrelated organisms can support the growth of the beneficiaries, without necessarily competing for the same resources. In this case, the LOF mutant will take over the ancestral clone, engaging in a dependency relationship with unrelated helpers. It should be noted that, if the requirements of helpers and beneficiaries are sufficiently disjointed, this relationship is rewarding for all the actors: beneficiaries can freely acquire the goods provided by other species, while the helpers can become necessary for the other species to thrive.
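The equilibrium between ancestral and mutant clones described above can be illustrated with a toy replicator-dynamics simulation. This is a hypothetical sketch, not a model taken from the cited papers: the cost c, penalty s, and the linear shortage penalty are all invented for illustration. Helpers pay a fixed cost for performing the leaky function, while beneficiaries pay a penalty that grows as helpers become rare, so the loss-of-function mutant spreads only until the two fitnesses equalize.

```python
# Toy Black Queen dynamics (illustrative only; all parameters are invented).
# Helpers pay cost c for the leaky function; beneficiaries pay a shortage
# penalty s * (1 - p) that grows as the helper fraction p shrinks.

c = 0.1   # metabolic cost of performing the leaky function
s = 0.5   # penalty for beneficiaries when the shared good is scarce

p = 0.99  # initial helper fraction (the LOF mutant starts rare)
for _ in range(500):
    w_helper = 1.0 - c                 # fitness of the ancestral clone
    w_beneficiary = 1.0 - s * (1 - p)  # fitness of the loss-of-function mutant
    mean_w = p * w_helper + (1 - p) * w_beneficiary
    p = p * w_helper / mean_w          # discrete replicator update

# The mutant invades until the fitnesses equalize at p* = 1 - c/s
print(round(p, 3))  # → 0.8
```

In this toy model the population settles at the helper fraction p* = 1 - c/s, matching the BQ prediction that LOF mutants spread only while the shared good remains sufficiently abundant.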
These asymmetries (the helper species is not affected by fluctuations in the abundance of the beneficiary species, whereas a decrease of helpers would be detrimental for the beneficiaries) guarantee, on a community level, a shift from competition to coexistence [21, 22]: beneficiaries tend to be advantaged when they do not compete with their helpers, which means that nutritional specialization maximizes the resource allocation and the overall fitness of the community.
The importance of microbial communities for the environment, the geochemical cycles, and the health and development of coexisting eukaryotes is now acknowledged [23, 24]. More importantly, we know that microbial communities are "complex adaptive systems" , where individuals and populations interact, giving rise to the system's higher-order (emergent) properties; therefore, understanding the mechanisms underlying their composition is crucial. In this sense, BQ provides important evolutionary insights into the contribution of genome reduction to stratifying dependency relationships within the community. For instance, the analysis of gut microbiota variability highlighted the presence of dominant alternative community compositions (enterotypes) , whose origin and nature are still debated [27–29]. A recent study linked the emergence of different enterotypes to groups of strongly interacting species, i.e., groups of species characterized by strong mutual associations. Stated differently, according to this model, patterns of association, or codependence, drive the community to different compositions with similar stability. As pointed out by the authors, this knowledge paves the way for translational applications into human health, in that the manipulation (i.e., addition or removal) of these strongly interacting species can be used to shift the microbiome composition from unhealthy to healthy enterotypes.
In perspective, the technological advances of metagenomics, taking us closer to a “strain-level” resolution , will allow the integration of microbial ecology with evolutionary genomics. Thanks to advancements in dynamical modeling, it is possible to infer ecological interactions between species by measuring variations in abundances from metagenomics longitudinal data. For instance, Steinway et al. constructed a Boolean dynamic model from time series metagenomics data and used it to identify competitors of Clostridium difficile, using metabolic network reconstruction to break down the metabolic interactions occurring between microbial species . Having the genome sequences, it will be possible to understand the evolutionary trajectories and the ecological interactions of the microbial communities. System biology approaches like constraint-based metabolic modeling, applied at a community level, will facilitate the knowledge-driven engineering of consortia, paving the way for a synthetic ecology .
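As a minimal sketch of the kind of dynamic model used to infer ecological interactions from longitudinal abundance data (not the Boolean model of Steinway et al., and with parameter values invented for illustration), a two-species generalized Lotka-Volterra (gLV) system can encode a helper-beneficiary dependency: the beneficiary has a negative intrinsic growth rate and survives only through the positive interaction term with the helper.

```python
# Illustrative two-species gLV system (invented parameters):
#   dx_i/dt = x_i * (r_i + sum_j A[i][j] * x_j)

def glv_step(x, r, A, dt):
    """One forward-Euler step of the gLV equations."""
    return [
        xi + dt * xi * (ri + sum(aij * xj for aij, xj in zip(Ai, x)))
        for xi, ri, Ai in zip(x, r, A)
    ]

# Species 0 is a self-limiting "helper"; species 1 ("beneficiary") cannot
# grow on its own (r < 0) but benefits from the helper (A[1][0] > 0).
r = [1.0, -0.3]
A = [[-1.0, 0.0],   # helper: logistic self-limitation, unaffected by species 1
     [0.5, -1.0]]   # beneficiary: helped by species 0, self-limiting

x = [0.1, 0.1]
for _ in range(20000):
    x = glv_step(x, r, A, dt=0.01)

# Equilibrium: helper -> 1.0; beneficiary -> (-0.3 + 0.5 * 1.0) / 1.0 = 0.2
print(round(x[0], 3), round(x[1], 3))  # → 1.0 0.2
```

Fitting the growth rates r and the interaction matrix A to metagenomics time series is, in essence, how such dynamical models turn abundance data into inferred ecological interactions; here the interaction signs (A[1][0] > 0, A[0][1] = 0) encode the commensal helper-beneficiary pattern discussed above.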
3. Rising Complexity: Coevolving with Eukaryotic Hosts
An important factor related to the reductive evolution of symbiotic microbes is the intimacy of the symbiotic relationship (obligate vs. facultative) with their eukaryotic hosts [34, 35]. For instance, obligate intracellular symbionts, such as Buchnera aphidicola, live in a nutritionally rich environment, with relatively low population size and little (if any) access to foreign DNA to acquire via Horizontal Gene Transfer (HGT). Thus, not surprisingly, this bacterium gradually accumulates inactivating mutations in "dispensable" genes, which are successively lost [36, 37]. Again, as adaptive genome streamlining is shaped by the nutritional requirements of the symbiont, it is possible to predict the degree of the reduction, as well as, to some extent, the order of gene deletion . On the other hand, bacteria engaging in a less "radical" lifestyle, i.e., extracellular symbionts, are subject to a number of constraints including ecological interactions, fluctuation of nutrients, and dynamic changes of the community composition. Therefore, the evolution of these bacteria is less constrained, and their genome size can either increase or decrease, also depending on their lifestyle . Finally, free-living bacteria able to colonize different niches, such as representatives of the genera Burkholderia and Sinorhizobium, are characterized by large genomes made up of multiple chromosomes and a notable phenotypic versatility which allows them to survive in different environments. It should be noted that, although less common, reductive evolution also occurs among free-living bacteria .
The eukaryotic hosts are not only the environment in which the microbiota resides; they also coevolved with their symbionts, to the point that the microbiota exerts a huge influence over their health and development. Hosts have specific traits which favour microbes with beneficial effects for their health: for instance, epithelial cells in the human intestine modify their glycans to expose fucose , a sugar used by commensal bacteria which protect their host from pathogens and decrease inflammation. Similarly, plant roots produce exudates which have a role in establishing the symbiosis with soil bacteria. By modulating the mechanisms promoting syntrophic interactions in different districts within the host, different groups of microbes sharing metabolic connections (i.e., microbial guilds ) are established. Interestingly, in humans, LOF variants of genes responsible for the interaction with the microbiome are associated with pathogenic phenotypes. For instance, such variants in the gene FUT2, involved in the fucosylation of glycans, are associated with alterations in the gut microbiome, Crohn's disease, and diabetes [43–45].
Therefore, the genetic landscape of the host (along with other "environmental" factors such as lifestyle, diet, and infections) plays an important role in the selection and maintenance of the microbiome, which then influences the health and development of its host. Nevertheless, in the last decades it has become possible to treat some of these altered microbiota by inoculating microbial mixtures obtained from the stool of healthy donors, a practice known as faecal microbial transplant (FMT) . Although conceptually FMT is not different from classical probiotics (such as sour milk ), it poses the basis for a more focused approach, called bacteriotherapy, in which precise combinations of commensal microbes are provided to restore the microbiota to a balanced state . A rational design of bacterial mixtures to be used as treatment requires not only the knowledge of the patient microbiome composition but also predictive models to infer the combination of strains able to restore the native microbiome functionalities.
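In its simplest possible framing, the rational-design step can be viewed as a set-cover problem: pick a small combination of candidate strains whose functional repertoires together restore the functions missing from a patient's microbiome. The sketch below is purely hypothetical (strain names and functions are invented, and a greedy heuristic stands in for any real bacteriotherapy pipeline).

```python
# Hypothetical greedy set-cover sketch for designing a bacterial mixture.
# All strain names and functional annotations below are invented.

def greedy_strain_mix(missing, repertoires):
    """Greedily pick strains until every missing function is covered."""
    missing = set(missing)
    mix = []
    while missing:
        # pick the strain covering the most still-missing functions
        best = max(repertoires, key=lambda s: len(missing & repertoires[s]))
        if not missing & repertoires[best]:
            raise ValueError("no candidate strain covers: %s" % missing)
        mix.append(best)
        missing -= repertoires[best]
    return mix

repertoires = {
    "strain_A": {"butyrate", "B12"},
    "strain_B": {"bile_salt_hydrolysis"},
    "strain_C": {"B12", "bile_salt_hydrolysis", "mucin_degradation"},
}
print(greedy_strain_mix({"butyrate", "B12", "mucin_degradation"}, repertoires))
# → ['strain_A', 'strain_C']
```

A real design would of course optimize over metabolic models and ecological constraints rather than flat function sets, but the combinatorial core (covering missing functionality with the fewest compatible strains) is the same.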
4. Conclusions

In recent years, a growing body of evidence has supported the evolutionary importance of reduction, rather than amplification, of genome size . For prokaryotes, genome reduction is also coupled with an actual simplification, in terms of organism complexity. Morris proposed, with the BQ, that genome reduction comes not from neutral selection but is adaptive and strictly related to the leakiness of some biological functions. Currently, the analysis of the "social" interactions between microbes is shifting from monospecies populations of model organisms to complex entities such as microbial communities, modeled as economic systems to predict their time-resolved evolution [49, 50].
Here, we reviewed the ecological implications of genome streamlining in complex microbial systems. Although this mode of evolution has probably played a key role in shaping eukaryotic genomes , its impact on prokaryotes is perhaps even greater, with genome reduction directly influencing the emergence within bacterial communities of cooperation and cross-feeding patterns, which in turn affect the genome streamlining dynamics. Such ecological interactions are a primary force driving the composition of these systems: to understand the behaviour and composition dynamics of microbial communities, it is crucial to take into account the emergent constraints of cooccurrence between different species. Although the concept of "leaky function" is rather vague, the establishment of microbial guilds/consortia is primarily driven by nutritional and metabolic interactions.
As a concluding remark, we anticipate that it will be possible to achieve a "quantitative" understanding of microbial ecology, thanks to the theoretical (e.g., reconstruction algorithms) and technical advancements of comparative genomics and metagenomics: indeed, the identification of metabolic pathways under purifying selection will make it possible to identify the nutritional constraints to which the organisms within the community are subject. This knowledge will be crucial to efficiently program alterations of the microbial ecology to drive the properties of microbial communities towards desired outcomes.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
The authors want to thank Prof. Andrea Cavallini for reading the manuscript and providing insightful advice.
- C. Darwin, Fertilisation of Orchids, Murray, 1904.
- L. Van Valen, “A new evolutionary law,” Evolutionary Theory, vol. 1, pp. 1–30, 1973.
- M. Salathé, R. D. Kouyos, and S. Bonhoeffer, “The state of affairs in the kingdom of the Red Queen,” Trends in Ecology & Evolution, vol. 23, no. 8, pp. 439–445, 2008.
- C. M. Lively, “Host-parasite coevolution and sex,” Bioscience, vol. 46, no. 2, pp. 107–114, 1996.
- C. M. Lively and D. G. Lloyd, “The cost of biparental sex under individual selection,” The American Naturalist, vol. 135, no. 4, pp. 489–500, 1990.
- G. Bell, The Masterpiece of Nature: the Evolution and Genetics of Sexuality, University of California, Berkeley, 1982.
- J. J. Morris, R. E. Lenski, and E. R. Zinser, “The Black Queen hypothesis: evolution of dependencies through adaptive gene loss,” mBio, vol. 3, no. 2, article e00036-12, 2012.
- J. J. Morris, “Black Queen evolution: the role of leakiness in structuring microbial communities,” Trends in Genetics, vol. 31, no. 8, pp. 475–482, 2015.
- J. J. Morris, S. E. Papoulis, and R. E. Lenski, “Coexistence of evolving bacteria stabilized by a shared Black Queen function,” Evolution, vol. 68, no. 10, pp. 2960–2971, 2014.
- E. V. Koonin, The Logic of Chance: the Nature and Origin of Biological Evolution, FT Press, 2011.
- R. C. Lewontin, “The genetic basis of evolutionary change,” Tech. Rep., Columbia University Press, New York, NY, USA, 1974.
- D. W. McShea and R. N. Brandon, Biology’s First Law: the Tendency for Diversity and Complexity to Increase in Evolutionary Systems, University of Chicago Press, 2010.
- Y. I. Wolf and E. V. Koonin, “Genome reduction as the dominant mode of evolution,” BioEssays, vol. 35, no. 9, pp. 829–837, 2013.
- B. Schink, “Synergistic interactions in the microbial world,” Antonie Van Leeuwenhoek, vol. 81, no. 1/4, pp. 257–261, 2002.
- B. E. L. Morris, R. Henneberger, H. Huber, and C. Moissl-Eichinger, “Microbial syntrophy: interaction for the common good,” FEMS Microbiology Reviews, vol. 37, no. 3, pp. 384–406, 2013.
- P. S. Stewart, “Mechanisms of antibiotic resistance in bacterial biofilms,” International Journal of Medical Microbiology, vol. 292, no. 2, pp. 107–113, 2002.
- E. R. Zinser, “Cross-protection from hydrogen peroxide by helper microbes: the impacts on the cyanobacterium Prochlorococcus and other beneficiaries in marine communities,” Environmental Microbiology Reports, vol. 10, no. 4, pp. 399–411, 2018.
- S. A. West and A. Buckling, “Cooperation, virulence and siderophore production in bacterial parasites,” Proceedings of the Royal Society of London B: Biological Sciences, vol. 270, no. 1510, pp. 37–44, 2003.
- A. D'Onofrio, J. M. Crawford, E. J. Stewart et al., “Siderophores from neighboring organisms promote the growth of uncultured bacteria,” Chemistry & Biology, vol. 17, no. 3, pp. 254–264, 2010.
- R. E. Lenski and S. E. Hattingh, “Coexistence of two competitors on one resource and one inhibitor: a chemostat model based on bacteria and antibiotics,” Journal of Theoretical Biology, vol. 122, no. 1, pp. 83–93, 1986.
- S. Estrela, J. J. Morris, and B. Kerr, “Private benefits and metabolic conflicts shape the emergence of microbial interdependencies,” Environmental Microbiology, vol. 18, no. 5, pp. 1415–1427, 2016.
- A. Mas, S. Jamshidi, Y. Lagadeuc, D. Eveillard, and P. Vandenkoornhuyse, “Beyond the black queen hypothesis,” The ISME Journal, vol. 10, no. 9, pp. 2085–2091, 2016.
- J. Rousk and P. Bengtson, “Microbial regulation of global biogeochemical cycles,” Frontiers in Microbiology, vol. 5, p. 103, 2014.
- I. Cho and M. J. Blaser, “The human microbiome: at the interface of health and disease,” Nature Reviews Genetics, vol. 13, no. 4, pp. 260–270, 2012.
- H.-S. Song, W. Cannon, A. Beliaev, and A. Konopka, “Mathematical modeling of microbial community dynamics: a methodological review,” Processes, vol. 2, no. 4, pp. 711–752, 2014.
- M. Arumugam, J. Raes, E. Pelletier et al., “Enterotypes of the human gut microbiome,” Nature, vol. 473, no. 7346, pp. 174–180, 2011.
- A. Gorvitovskaia, S. P. Holmes, and S. M. Huse, “Interpreting Prevotella and Bacteroides as biomarkers of diet and lifestyle,” Microbiome, vol. 4, no. 1, p. 15, 2016.
- G. Falony, M. Joossens, S. Vieira-Silva et al., “Population-level analysis of gut microbiome variation,” Science, vol. 352, no. 6285, pp. 560–564, 2016.
- T. E. Gibson, A. Bashan, H. T. Cao, S. T. Weiss, and Y. Y. Liu, “On the origins and control of community types in the human microbiome,” PLoS Computational Biology, vol. 12, no. 2, article e1004688, 2016.
- E. Bosi, G. Bacci, A. Mengoni, and M. Fondi, “Perspectives and challenges in microbial communities metabolic modeling,” Frontiers in Genetics, vol. 8, p. 88, 2017.
- N. Segata, “On the road to strain-resolved comparative metagenomics,” mSystems, vol. 3, no. 2, article e00190-17, 2018.
- S. N. Steinway, M. B. Biggs, T. P. Loughran Jr, J. A. Papin, and R. Albert, “Inference of network dynamics and metabolic interactions in the gut microbiome,” PLoS Computational Biology, vol. 11, no. 6, article e1004338, 2015.
- A. R. Zomorrodi and D. Segre, “Synthetic ecology of microbes: mathematical models and applications,” Journal of Molecular Biology, vol. 428, no. 5, pp. 837–861, 2016.
- N. A. Moran, “Microbial minimalism: genome reduction in bacterial pathogens,” Cell, vol. 108, no. 5, pp. 583–586, 2002.
- J. P. McCutcheon and N. A. Moran, “Extreme genome reduction in symbiotic bacteria,” Nature Reviews Microbiology, vol. 10, no. 1, pp. 13–26, 2012.
- R. C. H. J. van Ham, J. Kamerbeek, C. Palacios et al., “Reductive genome evolution in Buchnera aphidicola,” Proceedings of the National Academy of Sciences, vol. 100, no. 2, pp. 581–586, 2003.
- N. A. Moran, H. J. McLaughlin, and R. Sorek, “The dynamics and time scale of ongoing genomic erosion in symbiotic bacteria,” Science, vol. 323, no. 5912, pp. 379–382, 2009.
- K. Yizhak, T. Tuller, B. Papp, and E. Ruppin, “Metabolic modeling of endosymbiont genome reduction on a temporal scale,” Molecular Systems Biology, vol. 7, no. 1, p. 479, 2011.
- S. J. Giovannoni, H. J. Tripp, S. Givan et al., “Genome streamlining in a cosmopolitan oceanic bacterium,” Science, vol. 309, no. 5738, pp. 1242–1245, 2005.
- J. M. Pickard and A. V. Chervonsky, “Intestinal fucose as a mediator of host–microbe symbiosis,” The Journal of Immunology, vol. 194, no. 12, pp. 5588–5593, 2015.
- D. V. Badri and J. M. Vivanco, “Regulation and function of root exudates,” Plant, Cell & Environment, vol. 32, no. 6, pp. 666–681, 2009.
- C. F. Maurice and P. J. Turnbaugh, “Quantifying the metabolic activities of human-associated microbial communities across multiple ecological scales,” FEMS Microbiology Reviews, vol. 37, no. 5, pp. 830–848, 2013.
- P. Wacklin, H. Mäkivuokko, N. Alakulppi et al., “Secretor genotype (FUT2 gene) is strongly associated with the composition of Bifidobacteria in the human intestine,” PLoS One, vol. 6, no. 5, article e20113, 2011.
- D. P. B. McGovern, M. R. Jones, K. D. Taylor et al., “Fucosyltransferase 2 (FUT2) non-secretor status is associated with Crohn’s disease,” Human Molecular Genetics, vol. 19, no. 17, pp. 3468–3476, 2010.
- D. J. Smyth, J. D. Cooper, J. M. M. Howson et al., “FUT2 nonsecretor status links type 1 diabetes susceptibility and resistance to infection,” Diabetes, vol. 60, no. 11, pp. 3081–3084, 2011.
- S. Gupta, E. Allen-Vercoe, and E. O. Petrof, “Fecal microbiota transplantation: in perspective,” Therapeutic Advances in Gastroenterology, vol. 9, no. 2, pp. 229–239, 2016.
- P. A. Mackowiak, “Recycling Metchnikoff: probiotics, the intestinal microbiome and the quest for long life,” Frontiers in Public Health, vol. 1, p. 52, 2013.
- B. O. Adamu and T. D. Lawley, “Bacteriotherapy for the treatment of intestinal dysbiosis caused by Clostridium difficile infection,” Current Opinion in Microbiology, vol. 16, no. 5, pp. 596–601, 2013.
- G. D. A. Werner, J. E. Strassmann, A. B. F. Ivens et al., “Evolution of microbial markets,” Proceedings of the National Academy of Sciences of the United States of America, vol. 111, no. 4, pp. 1237–1244, 2014.
- J. Tasoff, M. T. Mee, and H. H. Wang, “An economic framework of microbial trade,” PLoS One, vol. 10, no. 7, article e0132907, 2015.
- I. B. Rogozin, L. Carmel, M. Csuros, and E. V. Koonin, “Origin and evolution of spliceosomal introns,” Biology Direct, vol. 7, no. 1, p. 11, 2012.
Copyright © 2019 Emanuele Bosi and Flavia Mascagni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Sections in this issue:
1) Going Family Fled South Carolina to Northwest Territory to Escape Discrimination;
2) Ohio Furnished 263,000 Soldiers and Sailors To the United States Military in WW I;
3) Napoleon Bonaparte Goings Bankrupted In Natchez; Removed to New Orleans;
4) DEAR COUSINS.
All Gowen Manuscript Pages and Newsletters: https://goyengoinggowengoyneandgone.com/gowen-research-foundation-pages-and-info/
Gowen Research Foundation Electronic Newsletter
February 2000 Volume 3 No. 2
1) Going Family Fled South Carolina to Northwest Territory to Escape Discrimination
By Anna Going Friedman, Jaymie Friedman Frederick
and Helen Bonnie Moore
3605 Debra Drive, Somerset, Kentucky, 42503
In the preceding installments, the Newsletter readers were introduced to a group of South Carolinians identified as “Free Negroes, Mulattoes and Mustizoes” which included Isaac Going, Levi Going, Edward Going, Sr. and Edward Going, Jr. They had joined 17 other men of Camden District in petitioning the State Legislature for a reduction in their taxes which had been doubled, only on free people of color, in 1791. Apparently the petition, which was endorsed by prominent men of Fairfield County, was denied.
The Goings and several of the 17 other men appeared on the frontier of Western Kentucky shortly afterward. Inequitable taxation and other forms of discrimination in South Carolina apparently prompted the move.
Alex C. Finley was an early-day historian in northwest Kentucky who wrote five volumes dealing with the early history and development of Western Kentucky and Middle Tennessee.
As stated earlier in this series concerning the exodus of the South Carolina tax protesters William and John Morriss, Birds, Goings, Portee and Coies to North Carolina and then on to Kentucky, Finley wrote:
“A portion of them came to near Lexington, Kentucky about the year 1795 and then about 1797 removed to Logan County and settled on the waters of Muddy River.”
Finley’s statement implies that another portion of them either stayed in North Carolina or went elsewhere. When the group left South Carolina, they did indeed leave family behind and probably the same occurred in North Carolina.
Some of the group went to Ft. Vincennes on the Wabash River in Northwest Territory circa 1797. A French fort was established there in 1702 by Francois Margane de Vincennes. It was captured by the British in 1777 and renamed Ft. Sackville. Gen. George Rogers Clark captured Ft. Sackville February 25, 1779 during the Revolutionary War for the American colonies.
On May 7, 1800, this frontier region became known as Indiana Territory, which included present-day Indiana, Illinois, Michigan and part of Minnesota. This territory was divided into counties with Vincennes as its capital.
Not many early documents on the Going family survived in Indiana Territory. Only bits and pieces are left to provide a glimpse into the past. In the book, “The History Of Green and Sullivan Counties, State of Indiana” is recorded a tale of “an old darkey, Canaan Goen.” “Canaan Gowen” was born in Botetourt County, Virginia about 1775. He was enlisted as a private in the Second Kentucky Regiment in the War of 1812. He fought at Detroit and in the Battle of Thames River. The Melungeon “Cannon Gowen” was enumerated in the
1830 census of Clay County, Indiana. “Canaan Goans” was married March 12, 1835, about age 60, to Susan Tucker in nearby Fountain County and was enumerated there in 1840. [Newsletter, March 1998]
The Knox County census of Indiana Territory of 1807 lists John Morris, and Randolph County lists John Gowen and William Goins. Invalid pensioners of the U.S. of 1807 include Charles Gowens [Newsletter, January 1990] at the rate of $2.50 per month, Thomas Harris at the rate of $15 per month and in 1808 Joseph Bird at the rate of $4 per month.
The book “A Guidebook to Historic Vincennes, Indiana” by Gerald Haffine contains a reference to John Morris:
“The lovely old pioneer landmark, the Maria Creek Church organized in May 1809 is soon to have a new life. It will be used as an international non-denominational chapel on the campus of Vincennes University.
The quaint brick church, abandoned in 1847, stood in a grove of shagbark hickory trees in North Knox County. Among the 13 charter members of the church were Judge William Polk trusted scout and interpreter of Gen. William Henry Harrison and later a member of the 1816 constitutional convention.
Maj. William Bruce, founder of the town of Bruceville and a friend of Abraham Lincoln, Samuel Allison for whom a township was named in Lawrence County, Illinois and John Morriss, identified only as a “man of color” in the church record were among its charter members. The old church became identified with two racial groups, the African Negroes and the American Indians, for it took a courageous stand in the defense of these people.”
The old church has been moved, rebuilt and now stands on the campus of Vincennes University, renamed Maria Chapel. A residence hall on that campus has been named after John Morriss.
The book by Jacqueline Cortez, “Contributions In Black and Red” states:
“Some of the earliest settlements in Indiana Territory were founded by Negro men and women. Thomas Coles of Lyle Station, Indiana was a prominent farmer. The residents of Pink Staff and Ft. Allison, Illinois can trace their ancestors as far back as 1800. They emigrated from Kentucky, South Carolina, North Carolina and Tennessee. Most of these residents are a mixture of Negro, Indian and Caucasian.
Two of the families that settled in Lawrence County around 1800 were Samuel and Frederick Allison and the family of John Morris. The Allisons were from Mason County, Kentucky and the Morris family was from Camden District in South Carolina. John Morris, the Anderson family and Austin Tan assisted Samuel Allison in building Ft. Allison. They also acted as Indian scouts around Ft. Allison. In the War Of 1812, the father of the three Anderson brothers was killed.”
Daisy Barnes, interviewed for “Contributions In Black and Red,” gives the account of her family being forced to leave North Carolina because of race hatred, of their traveling in a covered wagon and arriving in Lawrence County, Illinois at Allison Prairie. She also relates that her grandmother, a Portee, was a Cherokee Indian.
Evelyn Portee Allen provided her family history relating that John and Harriet Portee came to Illinois from North Carolina by wagon train at the time of the “Trail of Tears” and that they were Cherokee Indians. Included was a picture of John and Harriet Portee and their family. They were light complexioned people, some with very fine European/Portuguese features and some resembling Indians.
In her letter she wrote about the Portee family cemetery in which her grandparents are buried and of Going family members buried there also.
The Going family accompanied the Portee and Morris families in coming to Indiana and Illinois. Not much is found on the Goings except in early census records.
“Brinkerhoff’s History of Jefferson County, Illinois” states:
“One of the first mills in Jefferson County was kept by old Billy Going, as early as 1817. He also operated a tavern and a grocery in which he kept a great many other things including bad company. His mill was only resorted to by the better class of people in the case of extreme emergency.
William Going had a bad reputation and was accused of being connected with horse thieves, counterfeiters, and other lawless characters. His tavern was the headquarters of a band who committed, as was supposed, many dark deeds.
But as the county settled up, a better class of person came in, and the lawless band who frequented Going’s Tavern were cleaned out. Their king bee, Going, was forced to leave for the good of the country.”
“Brinkerhoff’s History Of Marion County Illinois” relates an additional story about the Going family:
“From the earliest settlements of Illinois by the Americans after [Gen. George Rogers] Clark’s conquest, there has been a class of very undesirable citizens hovering on the borders near Vincennes, Shawneetown and at Cave In The Rock on the Ohio.
A regular channel by which these cutthroats and robbers conducted their nefarious barter was kept open with stations along the way, so that property stolen in the Eastern settlements was sold in the West and that stolen in Randolph and St. Clair Counties was
sold in the East at Vincennes or Shawneetown.
In 1816 an attempt to make a station at Walnut Hill for these thieves was made and several families of these undesirable people settled or rather squatted near Walnut Hill. Their neighbors soon suspected that something was wrong, as counterfeit money was put in circulation, and many mysterious strangers were seen to visit them.
Word was conveyed to the rangers of St. Clair County, who in 1819, under Captains Thomas and Bankson moved secretly to the home of the ringleader. Divided into parties of 15 men each, they quietly surrounded the cabins of the outlaws and captured them without resistance.
The captured cutthroats were known as the Going Gang, consisting of William, John and Pleasant Going, Theophilus W. Harring, Tarleton Kane and John Bimberry and others who were not at home. The Going individuals were told that they must leave the county within a given number of days under the penalty of death!
To impress upon their minds that the edict must be obeyed, they were all lashed to saplings and given an unmerciful whipping. By the appointed time all had departed, and none ever returned.”
The above-named Going men were found in the 1818 census of Illinois, which places William Going in Madison County, Pleasant Going in St. Clair County and John Going in Washington County. In 1819 they were in Randolph County, and by 1820 they were all in Jefferson County.
Because of the allegations concerning the Going gang, I want to introduce to you Aaron Going, [Newsletter, March 1996] part of my family in Crittenden County, Kentucky who owned property on Camp Creek and the Ohio River which is just upstream on the opposite
bank from Cave In The Rock. Aaron drops out of sight in Kentucky in 1815 and appears on the census of 1818 in Gallatin County, Illinois. My family left Crittenden County, Kentucky in 1846-1847 with accusations of counterfeiting hanging over their heads.
Isaac Going and Edward Going from Crittenden County and Isaac Going from Logan County, Kentucky were enumerated in the 1818 Crawford County Illinois census. Why they left Logan County is unknown. Perhaps they became homesick for the rest of the family.
The 1820 census of Crawford County, Illinois shows:
Household Householder Occupants
No. 78 Ezekiel Anderson 6
No. 86 George Anderson 3
No. 87 Jasen Goen 5
No. 88 Austin Tann 3
No. 89 Edy Cole 2
No. 90 Enock Jones 3
No. 91 Joshua Anderson 4
No. 92 Betsy Anderson 6
No. 93 John Porter 5
No. 94 Caleb Anderson 3
No. 98 Sian Morriss 4
No. 99 Edward Going 2
No. 100 Nancy Morriss 2
No. 101 John Evans 7
No. 102 Isaac Goen 8
No. 119 Lewis Goen 3
No. 468 Isaac Goen 4
It is interesting to note that all but Evans and Tann are signatures on Petition No. 164 back in South Carolina. Lewis Goen, No. 119 and Isaac Goen, No. 468 are recorded as “Free People of Color.” Edward and Isaac Going had rejoined the family. At last, in part the Logan County, Kentucky and the Vincennes, Indiana groups had rejoined.
In Part V to come, perhaps fact can be distinguished from fiction.
[To Be Continued]
2) Ohio Furnished 263,000 Soldiers and Sailors To the United States Military in World War I
Ohio was originally settled by military men, veterans of the Revolutionary War from New England. The Revolutionaries made the first permanent settlement in the Northwest Territory at Marietta, Ohio in 1788, and ever after Ohio generously furnished men for the nation’s battles.
In the Civil War, Ohio loyally supported the Union, furnishing 319,659 men for the U.S. Army.
In World War I, more than 263,000 Ohioans, out of a population of 3,000,000 answered the call to the colors, according to “Official Roster of Ohio Soldiers, Sailors and Marines in the World War, 1917-1918.” Originally compiled in 1926, the volume provides detailed information about those inducted which family historians find beneficial to their research. It provides places of birth, location and date of enlistment, city of residence, date of discharge, units of service and war theatres and engagements.
Of interest to Foundation researchers are 14 officers and enlisted men:
Capt. Maurice R. Gowing was born April 11, 1894 in Toledo, Ohio. His residence at the time of enlistment was at Columbus, Ohio. He served as a 2nd lieutenant in the Coast Artillery Corps at Ft. Monroe, Virginia from November 5, 1917. He became a 1st lieutenant February 15, 1918 and captain July 24, 1918. In 1919 he served at Ft. Williams, Maine and Ft. Levett, Maine. His resignation was accepted September 1, 1919.
Capt. William A. Gowing was born in Allendale, Missouri November 12, 1871. He enlisted September 20, 1918 from Toledo, Ohio and served as a doctor in the Army Medical Corps in Michigan. He was discharged December 3, 1918 and returned to his practice in Toledo.
Albert Goins, colored, was born April 7, 1887 in Winnsboro, South Carolina. He was enlisted
August 23, 1918 from Cincinnati and served in the 814th Pioneer Infantry Regiment as a private. He went overseas in the American Expeditionary Force October 6, 1918. He was promoted to private first class June 1, 1919 and was discharged July 28, 1919.
Bud K. Goins, colored, was born November 4, 1895 in Pomeroy, Ohio. He was enlisted from
Athens, Ohio August 9, 1918 and was assigned to the 18th Infantry Battalion. He was discharged January 27, 1919 as a sergeant.
Charles A. Goins, colored, was born in 1897 at Zanesville, Ohio. He was enlisted November 8, 1917 and assigned to Company B, 304th Stevedore Regiment. He was promoted to private first class December 3, 1917. On January 3, 1918, he was transferred to the 304th Service Battalion where he was promoted to sergeant February 2, 1918. He served overseas from January 13, 1918. He was reduced back to private May 13, 1919 and received an honorable discharge June 26, 1919.
Gus Goins, white, was born in Frankfort, Kentucky March 4, 1890. He enlisted June 27,
1918 from Toledo and was assigned to the 6th Training Battalion. He served in the 158th Depot Brigade until July 16, 1918. He was transferred to Company B, 309th Ammunition Train and joined the American Expeditionary Forces September 17, 1918. He was promoted to corporal November 9, 1918 and honorably discharged February 14, 1919.
James Goins, colored, was born in Cuthbert, Georgia in 1892. He was enlisted from Columbus October 28, 1917. He was assigned to Company B, 317th Engineers. His unit joined the AEF June 10, 1918 and participated in the Meuse-Argonne battle, where he was “severely wounded” November 11, 1918 [Armistice Day]. He returned from France February 12, 1919 and was honorably discharged March 15, 1919.
James W. Goins, white, regarded as a brother to Gus Goins, was born August 9, 1887 in Frankfort, Kentucky. He was enlisted from Toledo July 15, 1918 and was assigned to Company I, 335th Infantry Regiment. He was honorably discharged December 9, 1918.
Jesse T. Goins, white, was born August 9, 1887 at Sekitan, Ohio. He was enlisted July
25, 1918 at Ft. Thomas, Kentucky. He was assigned to the Coast Artillery Corps at Ft.
Screven, Georgia until September 6, 1918. On that date he was transferred to Battery C,
45th Artillery Battalion, which was assigned to the AEF. He went overseas October 21, 1918 and was returned January 31, 1919. He received an honorable discharge February 12, 1919.
Samuel J. Goins, white, was born in 1888 at Versailles, Kentucky. He was enlisted June 5,
1917 at Columbus Barracks, Ohio. He was made private first class August 1, 1917 and corporal September 1, 1917. He was assigned to Company A, 10th Field Signal Battalion and
went overseas with his unit October 29, 1918 as a sergeant. On December 24, 1918, he was transferred to 56th Service Company, Signal Corps. He returned home July 29, 1919 and
was honorably discharged August 7, 1919.
William M. Goins, white, was born in 1893 at Midway, Kentucky. He was enlisted June 5,
1917 from Akron and assigned to Company A, 10th Field Signal Battalion, along with Samuel J. Goins. He became a private first class August 1, 1917, corporal September 1, 1917 and sergeant March 1, 1918. On October 29, 1918 they went overseas with Company C, 116th Field Signal Battalion. On December 24, 1918, they transferred to the 56th Service Company, Signal Corps, AEF. They were returned home July 29, 1919 and honorably discharged August 7, 1919.
Murphy H. Goins, colored, was born at Carthage, North Carolina in 1894. He was enlisted at Columbus Barracks December 5, 1917 and assigned to Company A, 313th Service Battalion. He became a sergeant March 6, 1918 and was reduced to private first class April 8, 1918 while with the AEF. He was returned home June 25, 1919 and honorably discharged July 1, 1919.
3) Napoleon Bonaparte Goings Bankrupted In Natchez; Removed to New Orleans
Napoleon Bonaparte Goings, “free colored person” on December 14, 1835 paid $5,000 for E.
Miller’s Store in Natchez, Mississippi. Later he took bankruptcy, and the store was sold for the benefit of his creditors. He lived in Vicksburg, Mississippi in 1838. Later it was reported that he removed to New Orleans, Louisiana.
In 1890, “Georgiana Goins, widow of Napoleon” was listed in the city directory of New Orleans, living at 431 Poydras. In the 1891 city directory, “Georgiana Goins, widow of
Napoleon” reappeared, still living at 431½ Poydras.
4) DEAR COUSINS
Died in Holland . . .
John James “Johnny” McGowan died April 14, 2000 in Utrecht, Holland, according to his obituary in “Overlijdensberichten, Utrechts Nieuwsblad.”
I am searching for information on Milton Goin, bc1862 in Campbell County, TN. He was married 1885; wife’s name, Sarea Louise. He died January 21, 1943 in Albion, Nebraska. I think his father was James Goin, who was born July 5, 1845 in Campbell County. Does anyone have any information on these individuals?
My name is Calvin Goings. I am a member of the Washington State Senate. I recently began researching my family history. Unfortunately I have hit a roadblock. My earliest ancestor is David Goings who was married October 30, 1803 in Giles County, VA. Any suggestions or information you might have would be sincerely appreciated.
South Hill, WA, 98373
Senate Webpage: http://www.leg.wa.gov/senate/sdc
A friend of mine has an album of old photos that her mother-in-law found at a sale and didn’t want to see tossed out. There is a message “presented by Fannie Titcomb W. Enright, Christmas 1867” on one picture. A photographer from San Francisco took some of the photos. Other names in the album are Edward Gowan, Kate Gowan, 13 October 1867; Thomas Horan, August 1869.
I have some blanks in my Gowin research that I would like to have some help with.
I have Samuel Gowin who was bc1816 TN, dp1900 TX. (He is said to have died in Rains Co, TX and is supposedly buried in Dunbar Cemetery there. No one has been able to prove this, nor find the grave site).
He married first a Polly Woods in 1838 in Jefferson Co, AL. They were living in Chickasaw Co, MS in 1850 with children: Felicity J, 10; Nancy, 9; Francis A, ; Benjamin F, ; William Henry Harrison, ; and Samuel’s sister, Catherine Goin, 14 bTN. All of the children of Samuel and Polly (Mary b. AL) were bMS.
My great-grandmother, Mary E. Gowin, was born to them in 1857 in MS. I can’t find them in the 1860 census, but there was a son, James Richard Clay Gowin born to them in AR on July 7, 1861.
I don’t know what happened to Polly (Polly when she married Samuel; Mary in the census); I cannot find where she died. But I suspect she may have died in childbirth with James Richard Clay Gowin or soon after.
In 1863, Samuel was married to Martha Roland in Hot Springs, AR. They had (from what I can tell) four children. Martha, Lucinda, Melissa Belle (who later married John Quincy Adams, a Choctaw Indian), and a twin brother to Melissa Belle that I don’t have a name for.
They were living in Van Zandt Co, TX in the 1870 and 1880 census. Mary E. Gowin married Robert A. Sharp there January 11, 1883. She told the elder aunts in the family that Robert Sharp had “married her straight off the reservation.” Their children were: Ola, Alonzo, Lonnie, Robert T, William Pinkney (my mother’s father), Samuel and Pugsey.
Robert and Mary eventually moved to Henderson Co, TX (where Robert was supposedly jumped and beaten to death over $35.00 after the sale of a bale of cotton, said to have been about 1917). Mary was said to have moved to Kaufman, TX to live near her children and died there of pneumonia at an old age.
I have some ideas as to who Samuel’s father was from the Foundation data. But, no proof. Actually there are a couple of prospects, Benjamin Gowen and James Goins in Jefferson County, AL. Can anyone fill in some blanks for me?
74 Sunny Gap Road
Conway, AR, 72032
NOTE: The above information was produced by the Gowen Research Foundation (GRF) as part of the “Gowen Manuscript” they worked on producing. It contains a great deal of information, much of it correct, but be careful: some of it is not, so check their sources and logic. I have copied some of their information in the past while researching my own family, only to find clear mistakes. Be sure to verify the information before citing the source or assuming the earlier researcher was 100% correct.
Their website is: Internet: http://freepages.genealogy.rootsweb.com/~gowenrf
There no longer seems to be anyone “manning the ship” at the Gowen Research Foundation or Gowen Manuscript site, and there is no way to contact anyone about errors. The pages themselves have no mechanism for leaving notes, so others cannot see when new information shows something is wrong or when something has been verified.
Feel free to leave messages about any new information found, or errors in these pages, or information that has been verified that those who wrote these pages may not have known about. | <urn:uuid:3e2e10c7-d040-464b-8b74-954202619d04> | CC-MAIN-2021-21 | https://goyengoinggowengoyneandgone.com/2000-02-feb-newsletter-grf-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00055.warc.gz | en | 0.976685 | 5,323 | 3.015625 | 3 |
Articles on Japanese-American Internment
[Image: headlines of Japanese relocation. FDR Library.]
San Francisco News, March 2, 1942: New Order on Aliens Awaited
San Francisco News, March 2, 1942: Behind the News With Arthur Caylor
San Francisco News, March 3, 1942: Evacuation To Be Carried Out Gradually
San Francisco News, March 3, 1942: Security Committee Makes Recommendations
San Francisco News, March 4, 1942: Japanese on West Coast Face Wholesale Uprooting
San Francisco News, March 4, 1942: Ban to Force Farm Adjustments
San Francisco News, April 21, 1942: ‘Manzanar Nice Place — It Better Than Hollywood’
San Francisco News, May 21, 1942: S.F. Clear of All But 6 Sick
NEW ORDER ON ALIENS AWAITED
March 2, 1942
One State Set for Evacuees
A new evacuation order which may affect 200,000 Pacific Coast enemy aliens and their American-born children was awaited today as governors of states between the Rocky Mountains and the Mississippi — with one exception — announced they would permit Japanese aliens to live only in concentration camps if they were moved into their states.

Lieut. Gen. John L. DeWitt, commanding general of the Fourth Army and the Western Defense Command, said a proclamation would be issued “shortly” designating military areas from which the exclusion of certain groups will be required.
President Roosevelt has given the Army authority to designate certain vital defense areas and exclude from them all persons, citizens and aliens alike.

Rep. John H. Tolan (D., Cal.), heading a House committee investigating
national defense migration, said he had polled the governors of
15 states west of the Mississippi River on proposals to send evacuees
from Pacific Coast states.
All but one replied, in effect: “No Japanese wanted - except in concentration camps.”
Governor Ralph L. Carr of Colorado told General DeWitt his state
would receive evacuated aliens as a contribution to the war effort,
and General DeWitt telegraphed him the Army’s thanks.
The final decisions as to who will be excluded, from where and when are “military decisions which must be based on military necessity,” General DeWitt said.
He was strongly critical of those who carried “unfounded rumors” and “so-called official statements” regarding Pacific Coast defense.
Prepare to Go
Executives of the Japanese-American Citizens League were preparing
their members for complete evacuation from the Coast. They pledged
wholehearted co-operation with the Army.
Only the Army knows where these evacuees will go, and General DeWitt made plain that wherever they are moved, the public must accept them.
Clamor to Be Ignored
“The clamor for evacuation from non-strategic areas and the insistence of local organizations and officials that evacuees not be moved into their communities cannot and will not be heeded,” he said. The demands of national security come first.
“The appropriate agencies of the Federal Government are engaged in far-reaching preparations to deal with the problem. A study is in progress by those agencies regarding the protection of property, the resettlement and relocation of those who are affected.
“The complete preparation will include measures designed to safeguard as far as possible property and property rights, to avoid the depressing effect of forced sales, and generally to minimize resulting hardships.
“As soon as these studies are concluded, definite designation of persons to be affected will be made.”
Rep. Tolan has asked President Roosevelt to appoint a Federal co-ordinator to have charge of evacuees’ problems, and possibly an alien property custodian for each of the Western states.
Replying to Colorado Governor Carr’s offer to co-operate, General DeWitt said: “I am hopeful that the governors of other states
in this region will take a similar position, as it will be most
helpful to me in solving the program [problem].”
Behind the News With Arthur Caylor
March 2, 1942
This, in a way, may be an add on [Westbrook] Pegler’s Friday column,
in which, among other things, he made very clear the importance
of the Negro people’s attitude toward the war. My story
is that, whatever the philosophy involved, the enemy’s agents
in our town are not neglecting an attempt to create a Japanese-Negro
anti-white-race fifth column.
The Japanese colony and the Negro colony in San Francisco are close
enough neighbors to provide many contacts. They share some things
in common. The color-line is not so noticeable as it is elsewhere.
This had made it possible, my agents learn from loyal Negro sources,
for Japanese to spread racial propaganda.
It isn’t propaganda of the ridiculous Nazi kind, either. It
doesn’t tell the Negro people that they’re really
black Aryans. It points out subtly that their own experience should
teach the Negroes that there’s less difference between brown
and black than between black and white.
It takes advantage of all the real discrimination that has gone on,
as well as the propaganda the Communists have used in past years
in their effort to grab off the Negro vote. It attempts to sell
the Negro on the idea that, although pacific by nature, he has
often been forced into American military enterprises—and
paid off in dirt.
It’s not nice to think that Japanese agents should be trying to stir
up strife right in our own town—and at a time when the Japanese
problem may mean such tragedy for loyal Japanese-Americans. But
if you don’t think such things can go on, who do you suppose
is tearing down air-raid shelter signs and defacing other notices
designed to prevent confusion and save lives? Now is the time
for Jap spies to do their stuff.
EVACUATION TO BE CARRIED OUT GRADUALLY
San Francisco News
March 3, 1942
93,000 Nipponese in California Are Affected by Order
The entire California, Washington and Oregon coasts, as well as the
Southern sections of California and Arizona along the Mexican
border, today were designated Military Area No. 1 by Lieut. Gen.
John L. DeWitt, commanding the Western Defense Command and Fourth
From this vast area, General DeWitt announced, “such persons or
classes of persons as the situation may require will by subsequent
proclamation be excluded.”
Eventually this vast area will be cleared of all alien and American-born
Japanese, as well as many Italians and Germans, but General DeWitt
emphasized there will be no mass evacuation of Japanese, as some
state and local officials have suggested. Mass evacuations, said
General DeWitt, would be “impractical.”
“Evacuation from military areas will be a continuing process,” he said.
“Japanese aliens and American-born Japanese will be required
by future orders to leave certain critical points within the military
areas first. These areas will be defined and announced shortly.
After exclusion has been completed around the most strategic area,
a gradual program of exclusion from the remainder of Military
Area No. 1 will be developed.”
Estimates were that 93,000 aliens and American-born Japanese in California would be affected by today’s orders and those to follow.
Although no immediate evacuation order was issued, General DeWitt suggested
all Japanese—alien and American-born—might do well
to get out of Military Area No. 1 as quickly as possible.
“Japanese and other aliens who move into the interior out of this
area now will gain considerable advantage and in all probability
will not again be disturbed,” he said.
Where they might go, however, was uncertain. All portions of California,
Oregon, Washington and Arizona were designated Military Area No.
2, from certain portions of which enemy aliens and American-born
Japanese may be excluded.
General DeWitt said “military necessity is the most vital consideration,
but the fullest attention is being given the effect upon individual
and property rights” and that “plans are being developed
to minimize economic dislocation and the sacrifice of property
Evacuation of Military Area No. 1 eventually will clear all American-born
and alien Japanese and hundreds of other enemy aliens from the
coastal section of California in which are located the most important
military and industrial establishments.
The area is divided into two zones, A1 and B1. Enemy aliens will be
completely barred from zone A1, and in zone B1 their movements
will be greatly restricted.
The proclamation also imposed restrictions on persons within the military area and designated postoffices as places where enemy aliens must register every time they change place of residence within the area or leave the area. Forms are being prepared.
Aliens in Five Classes
Enemy aliens, for greater efficiency, have been classified into five
classes and proclamations affecting their future will be forthcoming
with these numbers, General DeWitt said.
Class 1—All persons suspected of espionage, sabotage, fifth
column or other subversive activities. The FBI and intelligence
services are rounding them up daily.
After the military areas are cleared of Japanese, the general indicated,
German and Italian aliens would be next in line for evacuation.
However, German and Italian aliens 70 years of age or over will
not be required to move “except when individually suspected.”
Also exempted will be “the families, including parents, wives,
children, sisters and brothers of Germans and Italians in the
armed forces,” unless such removal is required for specific reasons.

The area of the four Western states named is divided lengthwise into
the two military zones. Fronting the ocean and from a distance
of three miles off shore to beyond the coast range mountain areas
is the prohibited zone “A-1.”
The adjoining territory—which in Central California extends as far east as Placerville, thereby slicing the Sacramento and San Joaquin Valleys down the middle—comprises restricted zone B1.

In addition there are 97 specific localities and communities containing
military installations and utilities which are closed to non-citizens
and are marked “prohibited zones A2-A99 inclusive.”
San Francisco and the entire Bay Region as far as Vallejo and Tracy
are within the prohibited zone. To the north Highway 101 in general
follows the contours of the line dividing the prohibited zone
from the contiguous restricted zone.
The restricted zone extends approximately from Highway 101 to Highway
99E to the vicinity of Fresno, thence along 99 to where it joins
California Highway 198, eastward near the towns of Johannesburg,
Daggett, and Cadez, along Highway 66 to Topock, Ariz., past Mathia,
Hot Springs Junction, Phoenix, and more or less to the Arizona-New
Mexico state lines to Mexico via the towns of Superior, Bowie
and San Simon.
General DeWitt has announced creation of a special civilian staff headed
by Tom C. Clark, Federal alien co-ordinator, to assist the Army
in the economic planning made necessary by the evacuations.
Told that governors of nine interior states were protesting any resettlement
of Japanese in their areas, General DeWitt said military necessity
must take precedence over civilian wishes.
The proclamation and the specific evacuation orders which are to follow “shortly” are the culmination of an alien control policy the Government instituted immediately after the attack on Pearl Harbor.

FBI agents seized key Japanese, German and Italian leaders in nationwide
raids. Then aliens were ordered to turn in cameras, shotguns,
short wave radio sets, binoculars and other materials usable for
spying or sabotage. Next all enemy nationals were ordered to register
so the Government could check identities and residences.
January the policy of excluding enemy aliens from strategic areas
was developed. The Army and the FBI cleared 147 such districts
in the four Western states on Feb. 15 and Feb. 24. FBI agents
instituted wholesale raids to seize contraband and “potentially
dangerous enemy aliens” including leaders of Japanese, Italian
and German labor, military and naval societies.
approximately 15,000 enemy aliens were brought into custody or
removed from vital areas.
DeWitt’s proclamation seeks to bring all remaining enemy
aliens on the Coast—closes area to possible Japanese attack—under
M. Masaoka, national secretary and field executive of the Japanese
American Citizens League, said today:
are instructing the 65 chapters of our organization in 300 communities
to call meetings immediately in their locality to discuss methods
by which they can correlate their energies and co-operate extensively
in the evacuation process.”
COMMITTEE MAKES RECOMMENDATIONS
San Francisco News
March 3, 1942
Committee on National Security & Fair Play, headed by Dr.
Henry F. Grady, former assistant secretary of state and president
of the American President Steamship Lines, today urged that care
of evacuated persons be committed to civilian government agencies
experienced in social welfare.
is said there “appear to be only three methods of caring
for evacuees"—allow their settlement whereby they can
work freely and produce for the war or civilian needs; set up
supervised work projects or support them in part or whole at public
committee warned that “indiscriminate removal of citizens
of alien parentage might convert predominately loyal or harmless
citizens into desperate fifth-columnists.”
far, it said, 9000 have been evacuated.
JAPANESE ON WEST COAST FACE WHOLESALE UPROOTING
San Francisco News
March 4, 1942
greatest forced migration in American history was getting under
the entire Pacific Coast, and from the southern half of Arizona,
some 120,000 enemy aliens and American-born Japanese were moving,
or preparing to move, to areas in which the threat of possible
espionage, sabotage or fifth column activities would be minimized.
None of the Japanese had actual orders to get out of the coastal
military area designated yesterday by Lieut. Gen. John L. DeWitt,
Western defense and Fourth Army commander, but all had his warning
that eventually they must go.
deadlines are set for clearing of the area—twice as large
as Japan itself—there is much to be done by the Army and
by governmental agencies co-operating with it in working out a
program that will call for the least possible economic confusion.
C. Clark, alien control co-ordinator, said in Los Angeles he hoped
Japanese might be removed from coast prohibited areas within 60
days, but that “we are not going to push them around.”
are going to give these people a fair chance to dispose of their
properties at proper prices,” Mr. Clark said. “It
has come to our attention that many Japanese farmers have been
stampeded into selling their properties for little or nothing.”
chapters of the Japanese-American Citizens League, which claims
a membership of 20,000 American-born Japanese, will hold meetings
soon in 300 communities “to discuss methods by which they
can correlate their energies and co-operate extensively in the
Masaoka, national field secretary of the league, said its members
“realize that it was the necessity of military expediency
which forced the Army to order the eventual evacuation of all
Japanese,” and that he “assumed” the classification
of Americans of Japanese lineage “in the same category as
enemy aliens was impelled by the motives of military necessity
and that no racial discrimination was implied.”
those who must move, after the Army swings into its plan for progressive
clearing of the 2000-mile-long military area (Japanese and Japanese-Americans
will be affected first) are more that 400 University of California
students - 315 American-born Japanese, 11 alien Japanese, 75 Germans
and six Italians.
DeWitt gave no indication when the first deadline for Japanese
in the coastal area would be set.
was continued action, however, against “Class 1” persons
listed in General DeWitt’s announcement of the military
area. This class includes persons definitely suspected of sabotage
and espionage, of which several thousand already have been taken
into custody by the FBI on presidential warrants accusing them
of being potentially dangerous aliens.
the most important arrests during the past 24 hours was that of
George Nakamura, an alien Japanese living close to the Santa Cruz
shoreline. In his possession FBI agents and police said they found
69 crates of powerful fireworks of the signal type - rockets,
flares and torches.
Ban to Force Farm Adjustments
San Francisco News
March 4, 1942
of Japanese from California’s agricultural areas will necessitate
serious adjustments in farming and marketing of fruits and vegetables
in this state farm spokesmen said today. Officials of the California
Farm Bureau estimated that 40 per cent of all California’s
vegetables were raised by Japanese, with the percentage of fruit
lands under their control running somewhat less.
types of agricultural produce are practically dominated by Japanese
labor or control.
100 Per Cent’
are nearly 100 per cent under the control of Japanese,”
one farm authority said. “The work requires the most arduous
form of ‘stoop labor’ and much of it must be done
on hands and knees. It is impossible to get any other type of
labor than Japanese to stand the pace of the nine-month season.”
plantings in celery, tomatoes, peppers, are important and it is
estimated that they likewise are responsible for nearly 75 per
cent of the state’s acreage in cucumbers, onions and spinach.
officials of the Farm Bureau point out that white farmers can
handle the planting of tomatoes this year, the problem of their
harvest later will create a real problem.
has been proposed to close rural schools earlier this year as
a potential source of labor for harvesting tomatoes,” one
bureau official said.
proposals under considerable by farm groups include shutting down
relief projects to provide more farm workers, and possible use
of Mexican labor.
harvests around the Salinas Valley are not expected to be affected
where an ample of supply of Filipino labor is available. The valley
supplies 90 per cent of the lettuce to the entire country when
the flow of “green gold” is at its seasonal peak.
watch is also being kept on the possible movement of Italians
from the coastal belt, particularly in the artichoke industry
which they dominate from Colma to Monterey County. The harvest
season is just reaching its peak and will last about another month.
impending evacuation of Japanese “makes possible a return
of the Chinese to the good earth,” The Chinese Press, only
all-English Chinese paper in America, said today.
Charles Leong said:
few Chinese remember that their parents labored on farms in
the Sacramento and San Joaquin Valleys and all along coastal
farm areas. Many owned potato and asparagus ranches. In farm
centers like Watsonville and Santa Cruz, Chinese at one time
owned all the strawberry business.
when the old-timers passed on, it seems that the ranch life,
a hard life, did not appeal to the second generation. As a result
the Japanese today have a monopoly on an industry when the Chinese
could have continued to develop... .”
faces the major problem with the Japanese on farm lands on the
West Coast, the census figures reveal, as they are listed as owning
68 million dollars worth of farm lands here and only an additional
two million dollars worth of farm lands in Oregon and Washington
three major clusterings of Japanese in rural areas are in the
Sacramento River delta regions, the lower San Joaquin Valley district
and the country around Santa Maria and Santa Barbara.
the Bay Area the number of farms owned by Japanese are listed
as follows: Alameda County, 130; San Mateo County, 71; Contra
Costa County, 70; Marin, 4, and Santa Clara, 390.
Japanese exodus also will hit the lawns and gardens of thousands
of Bay Area residents, particularly those on the Peninsula, for
there seems no substitute labor supply to replace the hundreds
of Japanese gardeners. Fast and efficient workers, some of the
Japanese have been caring for from 40 to 50 gardens each.
entire problem is being studied closely by officials of the California
State Chamber of Commerce, the Farm Bureau, and other state and
Federal agencies interested in agricultural questions.
study locally was the matter of the eventual clearing out of the
Japanese section roughly bounded by Geary, Pine, Octavia and Webster-sts,
in which several hundred homes and shops are occupied by Japanese.
1940 census listed 5280 Japanese—2004 citizens and 2276
aliens—in San Francisco. The majority of them live in the
Japanese section. Some have been interned and many more already
have moved inland. But possibly 4000 still are there.
will become of the homes and shops they eventually will vacate
is under discussion by real estate organizations. No decision
has been reached.
FE, N.M., March 4.—In the wake of reports that “nearly
3000 Japanese” being evacuated from the Pacific Coast would
be interned in New Mexico, Governor John E. Miles today announced
his state would co-operate fully. He urged strict methods to safeguard
New Mexico citizens.
CLEAR OF ALL BUT 6 SICK JAPS
May 21, 1942
the first time in 81 years, not a single Japanese is walking the
streets of San Francisco. The last group, 274 of them, were moved
yesterday to the Tanforan assembly center. Only a scant half dozen
are left, all seriously ill in San Francisco hospitals.
night Japanese town was empty. Its stores were vacant, its windows
plastered with "To Lease" signs. There were no guests
in its hotels, no diners nibbling on sukiyaki or tempura. And
last night, too, there were no Japanese with their ever present
cameras and sketch books, no Japanese with their newly acquired
furtive, frightened looks.
colorful chapter in San Francisco history was closed forever.
Some day maybe, the Japanese will come back. But if they do it
will be to start a new chapter - with characters that are irretrievably
changed. It was in 1850 - more than 90 years ago - that the first
Japanese came to San Francisco, more than four years before Commodore
Perry engineered the first trade treaty with Japan. The first
arrival was one Joseph Heco, a castaway, brought here by his rescuers.
What happened to Heco is, apparently, a point overlooked by historians.
He certain came and probably went – but nobody seems to
know when or where.
for another 11 years did the real Japanese migration begin. In
1861, the second Japanese came here. Five years later, seven more
arrived. The next year there were 67, and from then on migration
boomed. By 1869 there was a Japanese colony at Gold Hill near
Sacramento. In 1872 the first Japanese Consulate opened in San
Francisco – an office that passed through many hands, many
regimes, and many policies before December 7, 1941. On that fateful
day, according to census records, there were 5,280 Japanese in
left San Francisco by the hundreds all through last January and
February, seeking new homes and new jobs in the East and Midwest.
In March, the Army and the Wartime Civil Control Administration
took over with a new humane policy of evacuation to assembly and
relocation centers where both the country and the Japanese could
be given protection. The first evacuation under the WCCA came
during the first week in April, when hundreds of Japanese were
taken to the assembly center at Santa Anita. On April 25 and 26,
and on May 6 and 7, additional thousands were taken to the Tanforan
Center. These three evacuations had cleared half of San Francisco.
The rest were cleared yesterday.
last Japanese registered here last Saturday and Sunday. All their
business was to have been cleaned up, all their possessions sold
or stored. Yesterday morning, at the Raphael Weill School on O'Farrell
Street, they started their ride to Tanforan. Quickly, painlessly,
protected by military police from any conceivable "incident,"
they climbed into the six waiting special Greyhound buses. There
were tears – but not from the Japanese. They came from those
who stayed behind – old friends, old employers, old neighbors.
By noon, all 274 were at Tanforan, registered, assigned to their
temporary new homes and sitting down to lunch.
Japanese were gone from San Francisco. | <urn:uuid:532b7940-18bc-41be-92c5-fe8e68139581> | CC-MAIN-2021-21 | https://www.digitalhistory.uh.edu/active_learning/explorations/japanese_internment/newspaper_articles.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00014.warc.gz | en | 0.937498 | 5,352 | 2.765625 | 3 |
Before I start this article, I just want to point out that this is a rather complex subject, and I may well modify this article in the future as my understanding changes; so far, each year I coach I have learned more about it. I hope you learn a lot from this and put it to good use in your own coaching or training. Try not to skip parts, as all the information contained here is important, though take your time getting through it, as it is around 9 pages long excluding references and images.
Periodisation is the concept of planning a training programme by breaking the training down into various phases and cycling them in an order that allows the athlete to be prepared for their best performance at the right time, such as peaking at a National Championship. Periodisation needs to take the athlete's goals into consideration and use the training principles as a guide to structuring the programme.
You should be using periodisation if you want better results from your athletes. Good periodisation can keep training from becoming stale, and can help prevent or overcome training plateaus. It can also help with injury prevention by making sure that the athlete has progressively adapted to handling harder training.
There are a lot of training recommendations and principles out there, even just while writing this I came across well over 10 various training principles or guidelines, many which are similar variations of each other.
Below are the principles and guidelines I believe to be key to keep in mind while creating your training plan.
An athlete who specialises in their chosen sport early will reach relative peak performance sooner; however, higher performance while the athlete is a child results in lower performance when they become an adult.

Athletes who have age-appropriate training have a longer period of time at their performance peak, and are more likely to be able to set world records, compared to athletes who specialise early and have a shorter time at their peak.
What this means is that in gymnastics, children are often selected for competitive training by the age of 6 or 7. By the time they are 9 or 10 they are training around 8 to 9 hours a week, and are often expected to prioritise gymnastics training over other sports; quite often they need to stop participating in other sports entirely. This is early specialisation, and it should preferably not occur before the age of 12. Ideally, before the age of 12 the athlete will diversify their training and sample a wide range of other sports, which helps develop areas that artistic gymnastics might not.
Gymnastics develops a large range of abilities, so in my opinion the sports that might benefit the athlete most are things such as swimming, or ball sports for object-manipulation skills. I am not saying that before the age of 12 they shouldn't be training 9 hours a week; however, they should hopefully not need to stop other sports because of gymnastics.
An athlete makes progress when their body is overloaded with slightly more than it can currently handle. This is done using knowledge of a process in the body called super-compensation.
We can break down Super-compensation into 4 steps.
Step 1 – Training load. The athlete trains at a challenging load or intensity, and their body reacts by going into a recovery mode making them tired and decreasing their performance.
Step 2 – Recovery. The athlete trains in the next session at an easier training load, or they have active rest. Thanks to this recovery time, the athlete’s energy and performance levels will return to where they were before their first session.
Step 3 – Super-compensation. Once the athlete has returned to their original energy and performance levels, they then adapt and their energy and performance levels continue to rise further so that they are capable of handling even more then what they originally could. This is a physiological, psychological, as well as technical response of the athlete. This is when the athlete can perform in an even harder training session.
Step 4 – Decline. This is when the athlete loses the super-compensation effect. This decline in energy and performance levels will happen with or without a harder training session during the super-compensation phase.
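The four steps above can be sketched with a simple fitness-fatigue (Banister-style) impulse-response model. This model and all of its constants are my own illustrative assumptions, not something from the article's sources; it just shows how performance first dips below baseline (recovery), then rises above it (super-compensation), then declines.

```python
# A minimal fitness-fatigue (Banister-style) sketch of super-compensation.
# Each training load adds "fitness" (slow to decay) and "fatigue" (fast to
# decay); performance = baseline + fitness - fatigue. All constants below
# are illustrative assumptions.
import math

def performance(loads, t, k_fit=1.0, k_fat=1.8, tau_fit=40.0, tau_fat=12.0):
    """Performance relative to baseline at hour t, given (hour, load) impulses."""
    p = 0.0
    for t_i, w in loads:
        if t_i <= t:
            dt = t - t_i
            p += w * (k_fit * math.exp(-dt / tau_fit)
                      - k_fat * math.exp(-dt / tau_fat))
    return p

loads = [(0, 10.0)]                 # one hard session at hour 0
print(f"6h after session:  {performance(loads, 6):+.2f}")    # below baseline
print(f"40h after session: {performance(loads, 40):+.2f}")   # above baseline
print(f"200h after session: {performance(loads, 200):+.2f}") # declining again
```

Note how the sign of the result flips from negative (recovery phase) to positive (super-compensation phase) as the fast fatigue term decays away before the slow fitness term.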
Using this process, an athlete can make progress in a wavelike manner by manipulating the training load and intensity. The load must be increased progressively and not in large jumps. If the load increases too fast the body will stay in the decline phase, leading to injury or overtraining syndrome, which no athlete or coach wants. Overtraining is quite a serious disorder that takes from several weeks to several months to recover from, while the athlete suffers anything from persistent fatigue and illness to loss of motivation and appetite.
Above is an image that shows the super-compensation wave for different types of training. If training sessions are timed correctly you can take advantage of this effect. The horizontal line is the athlete's fitness baseline, and the part of the curve that rises above it is the super-compensation wave. If an athlete does a second training session while the wave is still rising, the fitness baseline moves higher. However, if the next training session comes too soon, or is too hard, the athlete will progressively get worse, as they haven't completed their recovery time yet.
The ideal training windows are listed at the bottom of the image. The ideal length between training sessions depends on what sort of training the gymnast is doing, but in my opinion they should typically do short power and sprint work at the start of the week, leaving roughly a 30 hour rest period before the next session, and then perform more of their strength training at the end of the week to provide roughly a 48 hour training gap.
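As a rough sketch of those windows, you could compute the earliest sensible start of the next hard session from the last one. The session labels and the 24 hour skill figure are assumptions for illustration; the 30 and 48 hour figures come from the paragraph above.

```python
# Sketch of spacing sessions using the recovery windows mentioned above
# (~30 h after speed/power work, ~48 h after heavy strength work).
# The "skill" entry and its 24 h window are an added assumption.
from datetime import datetime, timedelta

REST_HOURS = {"speed_power": 30, "strength": 48, "skill": 24}

def next_session_time(last_session: datetime, session_type: str) -> datetime:
    """Earliest recommended start of the next hard session of this type."""
    return last_session + timedelta(hours=REST_HOURS[session_type])

monday_power = datetime(2024, 1, 1, 17, 0)   # Mon 5 pm power session
print(next_session_time(monday_power, "speed_power"))  # Tue 11 pm at the earliest
print(next_session_time(monday_power, "strength"))     # Wed 5 pm at the earliest
```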
It is important to realise that there are various ways to manipulate the super-compensation process. A few types are shown in the image below, with the 4th example being the most common type used in sports training: for example, several training sessions through the week without adequate recovery time (and with decreasing performance), followed by a long rest, perhaps over the weekend, and a delayed but large super-compensation effect.
Accommodation is a general law of biology which, put simply, says that a person's response to a constant stimulus will decrease over time. In sports training this is the principle of diminishing returns. A new athlete might improve greatly from a rather easy training load, while an experienced athlete may barely improve even with hard training loads. Because of this, using the same training loads over a long time period is very inefficient. Coaches should vary their training loads and intensities to overcome this issue.
The way an athlete adapts to training is very specific. As an example, you may have three athletes from various sports all training sprints. Your 200m sprinter would benefit a lot from the sprint training, your tennis player a little, yet your golfer would potentially not benefit at all. For training results to transfer to the athlete's sport, the training should be as similar to the desired outcome as possible. The more similar the training to the sport, the better the transfer; for example, a basketball player should rather train vertical jumps and sprinting than rope climbs. This specificity is also important when looking at energy systems. A 100m sprint uses different energy systems compared to a 42.2km marathon, therefore a marathon runner won't have much benefit from training short sprints.
That being said, all athletes are different individuals, even within a team, and what improves one athlete will not necessarily help another. All training programmes should suit the individual, taking advantage of their strengths. For example, if your basketball player has bad knees they may need to swim to reduce impact. This is often called the principle of individualisation; however, I like to think of it as common sense.
Detraining, also known as reversibility, is when a training load is not high enough, or not present at all, and the physiological aspects of an athlete's fitness begin to decrease.
This is rather important for coaches managing injured athletes, or athletes on holiday. It is also very important in periodisation to make sure that certain types of training are done regularly enough as to not lose the benefits. It is important to note that the detraining effect does not apply to skills.
This is where knowledge of the residual training effects becomes very important. Between training and detraining sits the residual training effect: how long the benefit of a training session lasts before it starts to decrease. Different systems last longer than others, however.
Below, Table 1 shows roughly how long an athlete can keep their fitness / sports shape before the detraining effect starts to occur. The table shows a rough range, though we do know that athletes who have trained in their sport for years have a greater residual effect than athletes who are new to it. Table 2 is an example of how you can time various training when coming up to a competition.
Table 1. Training residuals of different physical abilities.
|Ability||Duration of residual effect (days)||Physiological background|
|Aerobic endurance||30±5||Increased amount of; aerobic enzymes activity, mitochondria number, muscle capillaries, hemoglobin capacity, glycogen storage, & higher rate of fat metabolism|
|Maximal strength||30±5||Improvement of neural mechanism & muscle hypertrophy occurs mainly due to the muscle fibers’ enlargement|
|Anaerobic glycolytic endurance||18±4||Increased amount of; anaerobic enzymes activity, higher lactate accumulation rate, buffering capacity, and glycogen storage|
|Strength endurance||15±5||Muscle hypertrophy occurs mainly in slow-twitch fibers, better local blood circulation, and lactic tolerance|
|Maximal speed||5±3||Improved neuromuscular interactions, motor control, and increased phosphocreatine storage|
Table 2. Example of residual training effects within a target peak date (last emphasis of each ability, counting back using the residuals in Table 1).

|Ability||Last emphasised (days before target peak)|
|Aerobic endurance||~30|
|Maximal strength||~30|
|Anaerobic glycolytic endurance||~18|
|Strength endurance||~15|
|Maximal speed||~5|
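A minimal sketch of planning backwards from a peak date with the residuals in Table 1, using the midpoint of each range:

```python
# Sketch: counting back from a target peak date using the residual
# durations in Table 1 (midpoints), so each ability's last hard block
# ends while its residual effect still covers the peak.
from datetime import date, timedelta

RESIDUAL_DAYS = {
    "aerobic endurance": 30,
    "maximal strength": 30,
    "anaerobic glycolytic endurance": 18,
    "strength endurance": 15,
    "maximal speed": 5,
}

def last_emphasis_dates(peak: date) -> dict:
    """Latest date each ability can stop being emphasised and still peak."""
    return {ability: peak - timedelta(days=d)
            for ability, d in RESIDUAL_DAYS.items()}

for ability, d in last_emphasis_dates(date(2024, 6, 1)).items():
    print(f"{ability:32s} last emphasised by {d}")
```

For a peak on 1 June, for example, aerobic endurance and maximal strength would have their final emphasis around 2 May, while maximal speed work continues until about 27 May.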
I struggle to understand energy systems myself due to the amount of information out there and the various terms used, but I'll do my best for this part. The first thing to know is that there are three energy systems: the alactic energy system, the lactate energy system, and the aerobic energy system. To make things even more confusing, they often go by other names depending on what fuel source is used.
Gymnastics' use of the energy systems splits roughly as follows: 80% ATP-PCr & anaerobic glycolysis, 15% aerobic glycolysis & oxidative, and 5% oxidative. An athlete uses all systems in their sport; which system dominates simply depends on the length of the activity. For example, in a 1km run the athlete will start on the alactic system but finish the run using the aerobic system.

Table 3. Work-to-rest ratios for various exercise durations.
|Approximate % of maximum power||Primary energy system stressed||Typical exercise duration||Range of exercise-to-rest period ratios|
|90-100||Phosphagen||5-10 seconds||1:12 to 1:20|
|75-90||Glycolytic||15-30 seconds||1:3 to 1:5|
|30-75||Glycolytic and oxidative||1-3 minutes||1:2 to 1:4|
|20-35||Oxidative||>3 minutes||1:1 to 1:3|
The important part that the table above shows is the work to rest ratios in the last column. Keep this in mind while the gymnasts train or while working out conditioning programmes. If you want them to work at their best, they will need adequate rest in between sets or attempts.
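A quick sketch of turning Table 3's ratios into concrete rest times, using the midpoint of each range as an assumed multiplier:

```python
# Sketch converting Table 3's work-to-rest ratios into rest times for a
# set. The multiplier chosen per energy system is roughly the midpoint
# of the table's range (an assumption for illustration).
WORK_TO_REST = {
    "phosphagen": 16,            # 1:12 to 1:20 -> ~1:16
    "glycolytic": 4,             # 1:3 to 1:5   -> ~1:4
    "glycolytic_oxidative": 3,   # 1:2 to 1:4   -> ~1:3
    "oxidative": 2,              # 1:1 to 1:3   -> ~1:2
}

def rest_seconds(work_seconds, system):
    """Recommended rest after a work bout stressing the given system."""
    return work_seconds * WORK_TO_REST[system]

# An 8-second sprint (phosphagen system) needs roughly 2 minutes of rest:
print(rest_seconds(8, "phosphagen"))   # 128
```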
For gymnastics coaches who work with children, we must aim to do the following:

Athletes aged 6-11
- Their skeletal structure is still developing, so it is important to build good posture. Ensure they do not spend excessive time doing bridges or develop hinging in the spine.
- Their concentration span is generally less than 5 minutes.
- The coach should gradually take the children from fun play to enjoying structured sports preparation.
- It is important for the coach to set a good example and stay positive.

Athletes aged 11-15
- Athletes will be going through puberty and gaining strength faster than their tendons and ligaments can handle.
- Around their peak growth spurt (often at 13-14), athletes will often have vestibular and coordination issues.
- During this period the coach often has to be a bit more careful with the mental side of training.

Athletes aged 15-18
- Muscle, bone, tendon, and ligament development should all be completed during this period.
- Training can become a lot more intense, and the athlete should be able to start handling a lot more.
There are certain periods when specific motor abilities are especially sensitive to development. Coaches should aim to take maximum advantage of these windows to help their athletes.
Table 4. Periods of sensitivity towards specific development of abilities in young male athletes.
Table 5. Periods of sensitivity towards specific development of abilities in young female athletes.
Note. All of the information above about age-appropriate training is adapted from Zahradník and Korvas (2012).
Just before we get into the various models of periodisation, we will just quickly cover the terminology for the cycles.
Table 6. Periodisation cycle lengths.
|Training Day||1 day, though not always only one training session|
|Microcycle||Typically 1 week (roughly 2-7 days)|
|Mesocycle||Typically 2-6 weeks; a block of several microcycles|
|Macrocycle||52 weeks, annual plan|
|Quadrennial Cycle||4 years|
There are various ways to structure the loads and intensities for these cycles; the main cycle typically looked at for load structure is the mesocycle. I typically stick to step loading, though there are a lot of other methods available: flat, reverse, etc.
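As a sketch of step loading, here is one mesocycle with three rising steps followed by a deload week; the 10% step size and the deload cut are illustrative assumptions, not prescriptions.

```python
# Sketch of a step-loading mesocycle: load rises for three microcycles,
# then a lighter "deload" week. The 10% step and the 60%-of-base deload
# are illustrative assumptions.
def step_loading(base_load, steps=3, step_pct=0.10, deload_pct=0.60):
    """Weekly loads for one mesocycle: rising steps, then a deload week."""
    weeks = [base_load * (1 + step_pct) ** i for i in range(steps)]
    weeks.append(weeks[0] * deload_pct)   # deload relative to the first week
    return [round(w, 1) for w in weeks]

print(step_loading(100))   # [100.0, 110.0, 121.0, 60.0]
```

The next mesocycle would then start from a slightly higher base load than the previous one, giving the wavelike progression described earlier.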
Below are the three main periodisation models that I recommend, and they are also the most commonly researched. We will start with the easy and finish with the hard. After those three I will mention a couple of other models you may hear about, although I feel for a gymnastics coach they are unneeded.
Traditional Periodisation is a good model to start with if your athlete is a beginner in strength training or general sports preparation. It is relatively straightforward, as the main adjustments are simply to the training load and intensity. It doesn't use the principle of super-compensation much, focusing instead on preparation stages.
These phases are the General Preparation (endurance / hypertrophy), Special Preparation (strength), Competition (power & peak), and Transition phases.
In this model, you start with endurance (high volume, low intensity), and gradually work towards your training peak where you are working on max strength or power (low volume, high intensity).
I would suggest you only use this model for a year or two before moving to a more advanced model, as the traditional model doesn't function well for the multiple peak events athletes often require. It also doesn't accommodate the principle of diminishing returns too well, though it still handles it better than no periodisation at all.
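The phase sequence can be sketched as a simple annual plan in which volume falls and intensity rises towards competition; the phase lengths and percentage figures below are illustrative assumptions.

```python
# Sketch of a traditional (linear) annual plan: volume falls and
# intensity rises as phases progress toward competition. Phase lengths
# and the volume/intensity percentages are illustrative assumptions.
PHASES = [
    # (name, weeks, volume %, intensity %)
    ("General Preparation", 16, 90, 60),
    ("Special Preparation", 12, 70, 75),
    ("Competition",         16, 50, 90),
    ("Transition",           8, 30, 40),
]

def annual_plan(phases):
    """Expand phases into a per-week (phase, volume, intensity) list."""
    plan = []
    for name, weeks, vol, inten in phases:
        plan.extend((name, vol, inten) for _ in range(weeks))
    return plan

plan = annual_plan(PHASES)
print(len(plan))          # 52 weeks
print(plan[0], plan[-1])  # high-volume start, low-volume transition end
```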
There are two designs for undulating periodisation, weekly and daily undulation, with daily undulation being the more common.
With the undulating method, an athlete trains in various ways during each micro-cycle. For example if the athlete trains three days a week, they might train hypertrophy twice, and power once in the micro-cycle. Although various systems are trained each microcycle, each system should also progress with a loading pattern.
Studies have shown that undulating periodisation is better than the traditional model for strength gains and improvement of central nervous system mechanisms, which means the athlete gains more strength with less muscle mass; a big positive for sports such as gymnastics, where extra weight can affect a large number of skills. Daily undulating periodisation has also been shown to be more efficient at avoiding plateaus in more elite athletes struggling with the principle of diminishing returns.

This makes it a good choice of periodisation for gymnastics coaches or athletes to use. It is a good system for intermediate to relatively advanced athletes.
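The three-day example above can be sketched as a daily undulating microcycle that repeats with a small weekly loading bump; the set, rep, and intensity targets are illustrative assumptions.

```python
# Sketch of a daily undulating microcycle for an athlete training three
# days a week: hypertrophy twice, power once, as in the example above.
# Set/rep/intensity targets and the weekly bump are assumptions.
MICROCYCLE = [
    {"day": "Mon", "focus": "hypertrophy", "sets": 4, "reps": 10, "intensity": 0.70},
    {"day": "Wed", "focus": "power",       "sets": 5, "reps": 3,  "intensity": 0.85},
    {"day": "Fri", "focus": "hypertrophy", "sets": 4, "reps": 10, "intensity": 0.72},
]

def progress(microcycle, weeks, weekly_bump=0.01):
    """Repeat the microcycle, nudging intensity up each week (loading pattern)."""
    plan = []
    for week in range(weeks):
        for session in microcycle:
            s = dict(session)
            s["intensity"] = round(s["intensity"] + weekly_bump * week, 3)
            s["week"] = week + 1
            plan.append(s)
    return plan

plan = progress(MICROCYCLE, weeks=3)
print(len(plan))              # 9 sessions over 3 microcycles
print(plan[-1]["intensity"])  # 0.74 (0.72 + 0.01 * 2)
```

Note how each system still follows its own loading pattern across microcycles even though the day-to-day focus undulates.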
Block periodisation is designed more towards elite athletes and is built to accommodate residual training effects. It is a relatively complicated method intended for advanced athletes. Studies have shown this model of periodisation to have been the only reasonable way to accomplish training goals in some difficult situations.
The design uses training blocks the length of a mesocycle, each carrying a highly concentrated, specialised workload.
There are 3 main blocks.
Accumulation – In which you develop general aerobic endurance, muscle strength, and general patterns of movement technique.
Transmutation – In which you are focused on developing specific abilities like combined aerobic-anaerobic or anaerobic endurance, specialized muscular strength, and event-specific technique.
Realization – This is a pre-competitive training phase that focuses mainly on competitive model exercises, attaining maximal speed, and recovery prior to the next competition.
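A sketch of sequencing these three blocks into repeated stages before a competition; the block lengths in weeks are illustrative assumptions.

```python
# Sketch sequencing block-periodisation mesocycles toward a competition,
# cycling Accumulation -> Transmutation -> Realization. Block lengths
# (in weeks) are illustrative assumptions.
BLOCK_WEEKS = {"Accumulation": 4, "Transmutation": 3, "Realization": 2}
ORDER = ["Accumulation", "Transmutation", "Realization"]

def block_schedule(stages):
    """Flatten repeated block stages into a week-by-week block label list."""
    weeks = []
    for _ in range(stages):
        for block in ORDER:
            weeks.extend([block] * BLOCK_WEEKS[block])
    return weeks

schedule = block_schedule(stages=2)
print(len(schedule))             # 18 weeks (two 9-week stages)
print(schedule[8], schedule[9])  # last Realization week, then new Accumulation
```

Each Realization block ends at a competition, and the next stage starts over with Accumulation, which is how the model handles multiple peaks in a season.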
Reverse periodisation is aimed more towards long-distance runners and other endurance athletes. It is based on maintaining intensity close to what is required in competition, then slowly raising the volume.

As gymnastics, the focus of this site, is not an endurance sport, I will leave reverse periodisation there. It is very similar to, and based on, the traditional periodisation model.
The Westside conjugate system was created by Louie Simmons by combining various training systems, specifically an old Soviet system and a Bulgarian system in which athletes train at near-maximal effort in every workout.
This system was designed using various weightlifting techniques and exercises, as such I believe it is a bit tricky to use it as a more generalised system to use for other sports. It is often confused with the block periodisation model due to name and terminology confusion. When reading up on a periodisation type, make sure you know exactly what they are referring to.
I hope this article has been of some use to you. It has taken me roughly 2 months to collect all this information together, and it is my hope that you can use this to come back and revise whenever needed. I tried to cover all the essential information, although I know there is plenty I still haven’t covered, or gone into enough detail with. If you need to know more you can look up some of the references for extra reading, just be warned, there is a ton of information out there, and it can be tough to sort through it all.
↑ Balyi, I., Way, R., & Higgs, C. (2013). Long-term athlete development. Retrieved November 6, 2015, from http://www.humankinetics.com/excerpts/excerpts/late-specialization-is-recommended-for-most-sports
↑ Gambetta, V. (2007). Athletic development: The art & science of functional sports conditioning. Retrieved November 10, 2015, from http://www.humankinetics.com/excerpts/excerpts/defining-supercompensation-training
↑ Mackinnon, L. (2000). Overtraining effects on immunity and performance in athletes. Immunology and Cell Biology, 78, 502-509. doi:10.1111/j.1440-1711.2000.t01-7-.x
↑ Olbrecht, J. (2000). The science of winning : Planning, periodizing and optimizing swim training. Luton, England: Swimshop.
↑ Zatsiorsky, V., & Kraemer, W. (2006). Science and practice of strength training. (2nd ed.). Champaigne, IL: Human Kinetics.
↑ Godfrey, R. (2006, April 7). Detraining – why a change really is better than a rest. Retrieved November 12, 2015, from http://www.pponline.co.uk/encyc/detraining-1113
↑ Mäestu, J. (2013, April 1). Residual training effect. Retrieved November 15, 2015, from https://academy.sportlyzer.com/wiki/residual-training-effect/
↑ Mantak, M. (2012, November 30). How much down time is too much: The concept of detraining. Retrieved November 15, 2015, from http://home.trainingpeaks.com/blog/article/how-much-down-time-is-too-much-the-concept-of-detr
& ↑ Exercise Prescription. (2013, December 3). Residual training effect. Retrieved November 18, 2015, from http://www.exrx.net/ExInfo/ResidualTraining.html
↑ Fox, A., Keteyian, S., & Foss, M. (1998). Fox’s physiological basis for exercise and sport (6th ed.). Boston, Mass.: McGraw-Hill.
↑ Leyland, T. (2007). Rest and recovery in interval-based exercise. CrossFit Journal, (56). Retrieved December 21, 2015, from http://library.crossfit.com/free/pdf/56_07_Rest_Recovery.pdf
↑ & ↑ Winer, L. (2014, November 10). A simple guide to periodization for strength training. Retrieved November 20, 2015, from http://breakingmuscle.com/strength-conditioning/a-simple-guide-to-periodization-for-strength-training
↑ Hassen, A. (2009, October 22). Periodization: Linear vs. Non‐linear. Retrieved November 22, 2015, from http://www.asdccr.ca/images/library/102109_JBrdZb5gFGfqS96f_153357.pdf
↑ Fleck, S. J. (2011). Non-Linear Periodization for General Fitness & Athletes. Journal of Human Kinetics, 29A, 41–45. http://doi.org/10.2478/v10078-011-0057-2
↑ Kirckof, C. (2012, December 14). Methods of training: Sequencing of programming and organizing training. Retrieved November 25, 2015, from https://d-commons.d.umn.edu/bitstream/10792/374/1/Kirckof, Chris.pdf
↑ Grantham, N. (n.d.). Base endurance: Move forwards with reverse periodisation. Peak Performance, (272), 5-7. Retrieved November 21, 2015, from http://iceskatingresources.org/EnduranceTrainingPlan.pdf
↑ Simmons, L. (2011). The westside conjugate system. CrossFit Journal. Retrieved December 5, 2015, from http://library.crossfit.com/free/pdf/CFJ_Simmons_Conjugate.pdf | <urn:uuid:d83d8855-0cb0-45aa-976d-784e5f065b00> | CC-MAIN-2021-21 | https://acrobaticarts.co.nz/gymnastics/general/periodisation-an-overview/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00614.warc.gz | en | 0.93943 | 4,879 | 2.90625 | 3 |
There has been a revolution, but it snuck up on us so gradually that you'd be forgiven if you missed it. It's called artificial intelligence, and it will have a profound impact on how we design digital products in the near future.
This has been something of an unexpected comeback. In the very early days of computing, many expected that machines would soon be able to complement or even surpass humans in tasks requiring intelligence. But while well-defined undertakings, such as playing chess, have proven to be solvable by using strict rules, more fuzzy problems, such as recognizing a cat in a photo, have turned out to be much more elusive. And so for decades, the idea of artificial intelligence has been considered mostly an unkept promise. While applications of machine learning have been increasingly useful when it comes to processing big-data collections at major Internet companies, the consensus has been that for most practical applications, human intelligence simply cannot be replaced.
But recently, artificial intelligence, or AI for short, has actually begun to deliver. New or revitalized techniques have started to equal or even surpass humans in tasks previously thought out of reach, from speech recognition to playing complex games. The rate of this AI resurgence has taken aback even leaders of the industries being affected the most. Google co-founder Sergey Brin said in a recent interview that he has been surprised by the recent surge in practical applications for artificial intelligence . Approaches such as neural networks and deep learning, coupled with access to massive amounts of data and new computational hardware, have led to significantly better results than traditional methods in areas such as image recognition, machine translation, and speech synthesis.
In one of the more spectacular examples, Google's Deep Mind software was able to beat a grandmaster of the ancient Chinese game Go in March 2016 . Computers had already proven they could beat world masters in chess, but Go was thought to be out of reach because it contains exponentially more possible move combinations—far beyond what can be stored in any computer. In order to achieve this feat, rather than working from a list of possible moves, the software instead taught itself to play the game. First, it got a solid foundation by training on millions of existing human Go games. But that was not enough. To improve its game, the network then played many more matches against itself. In this way, it ended up with a vocabulary of moves that consisted of both human and self-taught strategies. The result was a game-playing software that played from its own experience, not from any strict set of rules.
If playing an old Chinese game sounds too esoteric, consider this: A neural network derived from the same basic techniques was trained to control the cooling processes in Google's data centers, in a way similar to how it learned Go. This time it was no game; it had dramatic financial consequences. The company claims that through the smarter control provided by this software, it has been able to save several hundred million dollars in electricity per year—thus by itself paying for Google's acquisition of the AI startup company whose research laid the foundation for the system.
The right skill for drawing realistic zombies on a teenager's video-game screen also turns out to be exactly what is needed for running a neural network!
And that is just the beginning. In the past year, the collection of AI techniques called deep learning have contributed to significant advances in a whole range of areas, including speech synthesis, speech recognition, machine translation, image recognition, and image compression . And although the results are still largely coming in areas dominated by big data and big Internet companies, it is clear that AI will soon have implications for a whole range of new products. It will eventually make it possible to inject a little bit of intelligence into even the most mundane product, whether a toaster or a car. By extension, this will fundamentally affect HCI research and the practice of interaction design.
But before we go on, let's try to unpack the recent developments that have surprised even people like Google's co-founder.
Algorithms inspired by how the brain works, so-called artificial neural networks, date back to the 1960s; most computer science students still encounter them in introductory AI classes. These networks are formed by connections of artificial "neurons," which are basically just weighted links between nodes in a graph. The actual network itself does not have any inherent meaning or knowledge. But by subjecting the network to stimuli and reinforcing the links that are used when it makes the correct choices, it is possible to train the network to make choices. For instance, by subjecting a network to a sequence of pictures with simple geometrical shapes but reinforcing it only when the network selects those that contain a circle, it would be possible to teach it to pick only images that depict a circle.
The main technology leading the current AI resurgence is neural networks. However, for a long time it looked like neural networks would be limited to simple problems with little practical use. This is because, first of all, to do anything beyond the most trivial tasks, the number of nodes and connections in such a network would have to be very large. This means that it would take a long time to train it, and even when it was fully trained, the time it would take to reply to a query would be too long for any time-critical applications such as automatic translation. Second, in order to learn anything meaningful, the network would also need huge amounts of training data. Such data would need to be in machine-readable form. The data would also have to be coded, meaning it would already contain the answer to the question the network was being designed to answer. For instance, for our fledgling network to learn to recognize circles in pictures, we would have to subject it to a large number of pictures that contained circles and were correctly labeled as such, as well as pictures that did not contain circles, so that it would eventually learn the difference.
But recently, these barriers have all but disappeared. When it comes to size and speed, Moore's law has been helpful, but not sufficient, in reducing the cost of storage and processing time. Instead, a much bigger breakthrough came from an unexpected source: computer games. In 2012, researchers at the University of Toronto showed that the specialized chips that are used to generate fast high-resolution graphics in PCs, so-called graphics processing units or GPUs, just happen to be perfectly provisioned for processing neural networks . This is because they are designed to process massively parallel tasks at a very high speed. In other words, the right skill for drawing realistic zombies on a teenager's video-game screen also turns out to be exactly what is needed for running a neural network! Thus, almost by accident, neural network researchers were handed fast and inexpensive hardware on which to run their experiments, something that is now revolutionizing the entire chip industry . This in turn allowed for new and more effective techniques such as deep neural networks (the layering of several levels of networks) and unsupervised learning (which does away with explicit labels and presents the network with only rough clusters of data). Together, these advances contributed to results like the Go game victory.
And when it comes to data itself, there's a veritable mother lode. Facebook, Google, Amazon, and the other Internet giants have already been patiently Hoovering up every scrap of input generated by their users for decades. They now have access to billions upon billions of photos, emails, videos, and chat messages, not to mention mouse clicks and finger taps on everything from inspirational articles about yoga to diaper advertisements. This manic data collection is also reaching its tendrils out into the real world, for instance through mobile phones, taking in things like the user's geographical location (through GPS) or their physical activity (through motion sensors). And if you hadn't noticed, neural networks are already listening to what you are saying! Companies like Apple and Microsoft are storing every command given to their respective voice assistants for future use, in order to better train their recognition software. In this case, Siri, Cortana, and of course Amazon's Alexa and their ilk, are serving not just as helpful assistants but also as Trojan horses to gather unheard amounts of voice utterances and associated behaviors to feed the neural networks of the future. As if this wasn't enough, emerging technologies such as drones and self-driving cars will soon add ever bigger piles to this data stash.
Of course, this data gold rush has consequences that can be troubling. Most obviously, consider the fact that all this data is in the hand of private companies. They now have literally unlimited access to everything generated by our private and public digital lives but are not governed by any of the rules for transparency or privacy that pertain to public organizations. This leads to another, less obvious, consequence, which is that many of the best minds in the field will no longer be found at universities, where they can freely share their knowledge. Instead, they are being aggressively recruited by well-funded companies, where they not only get better salaries (and free food to boot) but, more important, much more challenging problems to work on. This is because the big data that is necessary to provide truly groundbreaking research resides at these companies, where it is also increasingly well protected, since it constitutes the very essence of the companies' value on the stock market. While once upon a time Flickr set its user agreement to the altruistic Creative Commons license by default, meaning that images could be freely used for noncommercial purposes and released as large training sets for the benefit of science , current services guard their content much more jealously. For instance, Instagram pictures, while free to browse, are bound by agreements that prohibit any application of computer vision, making them in effect inaccessible for any machine-learning approaches.
On the other hand, there are encouraging signs that the tools of this new and efficient AI will become more accessible, often when universities and industry work in concert. Open source software such as Tensorflow is already letting users adapt and train neural networks for new purposes . These services are still far from plug-and-play; they require extensive handholding from experts to achieve any useful results. But they point to a future where neural networks are packaged in such a way that non-experts can use them through well-defined interface mechanisms. Most likely, due to size and speed limitations, this will happen not on individual devices but on remote servers. Thus, just like other data- and processing-intensive tasks such as cloud storage and Web hosting, AI will transform into a service.
And with commercially available AI services bound to arise, it will gradually become easier to obtain and train an artificial intelligence to do your bidding. This means that in the near future, designers will no longer have to be experts in neural networking to use AI, just as they do not need to know the ins and outs of TCP/IP or even HTML to design Web pages. The same services will be available when designing physical artifacts, too, to complement other elements such as sensors and actuators. When this happens, AI will be thought of not as an exotic and complicated technology that can be used only by gurus with Ph.Ds in machine learning, but rather as a resource you can plug into any new product when you need it. Think of it as intelligence on tap.
So what exactly does this intelligence on tap mean for interaction design? First and foremost, it means that intelligence is becoming a new design material. As we know, the options of a designer are to a large extent defined by the materials they have to work with. For instance, a graphic designer working in the medium of print must be familiar with paper sizes and coating types, as well as color blends, printing presses, and other means of achieving their desired results. A product designer would need to be aware of the physical characteristics of materials such as plastic, wood, and metal, as well as how these fit together mechanically, in order to design an aesthetically as well as functionally pleasing experience. As AI becomes a more and more vital part of everyday products, designers will have to figure out how to work with intelligence as a new material, with its own specific quirks and opportunities. This will not be easy, as intelligence on tap could mean a radical departure from previous design practices, as when going from paper to screen in the early days of the Web.
For anyone developing products that contain AI (including but not in any way limited to designers), it will be necessary to form a clear understanding of what AI can and cannot do. Again, this does not mean that everyone has to become a neural networking guru, but it is necessary to understand the underpinning principles of AI. In particular, this means that if someone tries to design a product without a firm understanding of the limitations of AI, the result will most certainly be failure.
Here, the most important limitation to consider is the fact that AI still cannot form an actual understanding of the world. While neural networks can indeed work better than humans on problems that involve large amounts of data, and can seemingly reply in intelligent ways to many queries, they still cannot understand a basic sentence in natural language. This has particular relevance to some of the most hyped AI applications, such as natural-language dialogue systems, aka chatbots. As overly enthusiastic product designers have already discovered, it is currently far beyond the reach of any neural network to carry out an intelligent conversation. For instance, Facebook's recent experiments in chatbots ended in something of a fiasco after it turned out it could correctly fulfill only about 30 percent of the requests .
There is an important lesson to be learned there. Replacing human-to-human interaction in realistic situations is exactly something that AI cannot do yet. This is the kind of problem that requires a real understanding of the world and the intentions of the conversation partner—something that today's neural networks are simply incapable of. Furthermore, it is well known from research that dialogue systems are more efficient when users do not expect the bot to have full, human-like intelligence . Thus, by trying to apply human standards to an automatic system, the constructors of the Facebook chatbot literally set it up for failure and made users even more disappointed and frustrated.
Instead, artificially intelligent systems should concentrate on things that humans cannot do but that AI can do well. In large part, this involves sifting through immense amounts of data and finding patterns. One area where AI is making great progress is image search, in which large amounts of data and new neural-network techniques have produced remarkable results, such as actually being able to find pictures that contain cats. Other areas where AI does well, as long as there is enough data, is matching one dataset against another, for instance in machine translation. It can also be used to extrapolate from existing data and make decisions based on that, as with Google's server-cooling system. But this also means that AI systems are highly dependent on the data they have access to. If the data is lacking in quality or quantity, this will greatly increase the risk of the system making poor decisions.
Thus, anyone constructing an AI-based system needs to tread lightly, manage expectations, and be careful not to overreach when it comes to AI's capabilities. But apart from understanding the overall potential of AI, I believe there are a number of interdependent challenges that pertain more specifically to interaction design. These have to do with how designers can take the behavior of systems that rely on artificial intelligence and make it understandable for the end user. They include:
- Designing for transparency
- Designing for opacity
- Designing for unpredictability
- Designing for learning
- Designing for evolution
- Designing for shared control.
The first challenge means that it is necessary to let the user understand how artificial intelligence is actually affecting the interaction. It must be clear to the user that a system is actually making its own decisions based on incoming information, rather than working from a fixed set of rules. This might require the rethinking of fundamental UI components. For instance, there are interaction cases when users might want to override the intelligence, and others when they might want to cede control. For a device, this could mean that rather than just an on/off button, a device might need an "it depends" button that lets the device decide whether to turn on or off. Similarly, there will also be a need for interface elements that communicate when a system has made a decision, what that decision was based on, and even a mechanism to revert or undo the decision if the user does not agree with it. There could also be a need to communicate more complex concepts and plans to an AI, which might require more flexible interfaces such as natural language. In summary, designing for AI might entail a lot more fuzzy, open-ended user interfaces than we are used to.
The second, somewhat contradictory, challenge has to do with the fact that it is no longer possible to explain exactly why or how an AI does what it does—they are opaque. The way that neural networks are constructed means that their inner workings are hidden even from the person who programmed and trained them. For example, Google's engineers recently made the discovery that a neural network trained for machine translation had created its own intermediary format . This made it possible for it to translate between language pairs on which it had not been trained; for instance, if it had done Japanese to English, and English to Korean, it could also in principle translate between Japanese and Korean. The point here is that this capability was not designed into the system, but rather evolved by itself. How can designers communicate to the user that there are things inside the product whose workings nobody can quite explain? And how does this affect qualities like trust and confidence in the system?
Anyone constructing an AI-based system needs to tread lightly, manage expectations, and be careful not to overreach.
This leads to the third challenge: unpredictability. No matter how well trained a neural network is, it is still to some extent drawing its own conclusions from given data. This is not necessarily a bad thing. For instance, the Go-playing network we mentioned in the beginning had honed its game not just on humans but also in matches against itself, where it devised its own strategies. This led it to make some surprising moves that no human player would make. While some of the choices it made were inexplicable, they were also part of a winning strategy, and despite deviating from the human playbook, in the end the system was able to beat the human opponent. Designers thus must be prepared for and design for systems that behave in unanticipated ways, which can be jarring even when it leads to them solving the problem better than a human would. How can interaction design minimize the damage and maximize the benefits that arise from this unpredictability?
The fourth challenge has to do with improving the AI through constant learning. Ideally, a neural network should never stop learning; it should use all available new input to improve its basic algorithms and make the system even better. However, this cannot be a chore for the user. If the user has to explicitly train the system, that will most likely become a hindrance to efficient use. There are already clever ways of having humans solve problems to aid AI learning, such as the "captchas" that separate humans from bots on the Internet by having them do simple image-recognition tasks. Another example is recommender systems on sites such as Netflix that encourage users to rate the content they have viewed, thereby improving recommendations. But ultimately, the learning has to be built into the interaction itself and completely unobtrusive, so it does not feel like the user is doubling as the AI's training wheels.
The fifth challenge has to do with how these systems will continue to evolve over time. As AI products solve problems in collaboration with their users, they should keep improving. But this could be jarring if the system's behavior starts to get better than it was originally. In fact, we often build behaviors around flaws like squeaky doors or loose tiles in a staircase. If these flaws suddenly disappear without warning, it might be even more disorienting than when they first appeared. Say you have bought an intelligent coffee brewer that is supposed to prepare coffee at the right time and temperature to help you get up in the morning. You set it for a certain time, but you have a hard time getting up, so the coffee is always a little cold. And that's OK; you need your sleep. But imagine then that the brewer observes how you are always late getting up in the morning, and one day it proactively decides to delay the brewing of your coffee by 10 minutes to better fit your schedule. The result is that you scald your mouth—and probably throw the coffee maker out the window! As systems evolve and make new decisions, it will be necessary to communicate this to the user so that they know what to expect, and can benefit while avoiding unpleasant surprises.
The final challenge is one that springs from all the others. It involves how artificially intelligent systems can be designed to allow the sharing of control with the user. This will not be an either/or situation, where one or the other has full control. In systems built on proactive intelligence, there will have to be provisions for a truly mutual responsibility. The interface must give the user access to clear controls, as well as indications as to how the power is distributed in any given moment. This includes how much autonomy a system receives to make its own decisions and how much it is under the control of the user. It also includes how much it is allowed to evolve new functionality, how it collects and evaluates data, how it is to handle unexpected situations, and so on. Again, some of this may be too complex to be fully negotiated by a visual or tangible interface, which may lead to the need for speech or other more nuanced modes of communication. But designing the interaction of an AI system so that it can work truly in concert with the user will be one of the key measures of success.
There will be many other challenges as well—what I've discussed here has just scratched the surface. We did not even get into ethics, which will have a huge impact. Who is responsible if an AI system causes damage or even the loss of life? This could happen if the system made an error or was inaccurately controlled by the user, perhaps due to some flaw in the interface design. This is not a science fiction question; it is already pressingly important for companies developing self-driving vehicles. And who gets sued for libel if an AI runs amok because it is absorbing data without questioning it, like the Microsoft chatbot that became racist by reading Twitter comments ? Another issue is who owns and takes responsibility for material that an AI produces? Ownership was much easier before autonomous systems, because the creation of content was the result of a conscious creative act. Now if an autonomous security robot, or perhaps an outdoor drone, manages to take compromising photographs, who gets to control the results—the subject, the owner of the device, or (most likely) the company that stores the images on its servers?
Full-fledged intelligence on tap might take a long time to arrive, but I have no doubt that it will. And while enthusiasm for AI in its many forms is very high right now (Gartner's hype cycle for 2016 has machine learning at the very top ) and is sure to hit many snags along the way, there is no doubt that the technology is going to fundamentally change interaction design. The sooner designers start to think about intelligence as a design material, the better prepared they will be for the coming shift in how digital systems will work, and in particular how AI can function in concert with their users. Hopefully, this article has provided some first steps toward understanding the future of AI as a new design material.
1. Kharpal, A. Google co-founder Sergey Brin says he's 'surprised' by pace of A.I. and uses a story of a cat to explain it. CNBC.com. Jan. 19, 2017; http://www.cnbc.com/2017/01/19/google-co-founder-sergey-brin-said-he-is-surprised-by-pace-of-ai.html
2. Metz, C. Google's AI wins fifth and final game against Go genius Lee Sedol. Wired. Mar. 3, 2016; https://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/
3. Clark, J. Google cuts its giant electricity bill with deep mind-powered AI. Bloomberg Technology. Jul. 19, 2016; https://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai
4. Metz, C. 2016: The year that deep learning took over the Internet. Wired. Dec. 26, 2016; https://www.wired.com/2016/12/2016-year-deep-learning-took-internet/
8. Facebook scales back AI flagship after chatbots hit 70% f-AI-lure rate. The Register. Mar. 22, 2017; https://www.theregister.co.uk/2017/02/22/facebook_ai_fail/
10. Wong, S. Google Translate AI invents its own language to translate with. New Scientist. Nov. 30, 2016; https://www.newscientist.com/article/2114748-google-translate-ai-invents-its-own-language-to-translate-with/
11. Vincent, J. Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day. The Verge. Mar. 24, 2016; http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
12. Gartner 2016 Hype Cycle. Aug. 16, 2016; http://www.gartner.com/newsroom/id/3412017
Lars Erik Holmquist is professor of innovation at Northumbria University, U.K. Previously, he did research in interaction design and ubiquitous computing in Sweden, Silicon Valley, and Japan. His first book, Grounded Innovation: Strategies for Creating Digital Products, was published in 2012. He just finished his second, a science fiction novel set in Silicon Valley. firstname.lastname@example.org
©2017 ACM 1072-5520/17/07 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc. | <urn:uuid:4d5d0728-a882-46b5-b83b-dca54fb0558c> | CC-MAIN-2021-21 | https://interactions.acm.org/archive/view/july-august-2017/intelligence-on-tap | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00255.warc.gz | en | 0.962966 | 5,565 | 2.875 | 3 |
Citation: Stress may explain vocal mimicry in Bowerbirds (2011, May 11) retrieved 18 August 2019 from https://phys.org/news/2011-05-stress-vocal-mimicry-bowerbirds.html (PhysOrg.com) — Spotted Bowerbirds (Ptilonorhynchus maculatus) are best known for their nests, but these birds are also capable of mimicking the vocalizations of many different species of birds. It was believed bowerbirds were mimicking the sounds of predatory birds as a way of defense, but a new study in Naturwissenschaften determined that is not the case, but rather that stress and stressful situations account for the vocalizations they choose to mimic. © 2010 PhysOrg.com Spotted Bowerbird. Image: Tom Tarrant, via Wikipedia. In attracting mates, male bowerbirds appear to rely on special optical effect More information: * The mimetic repertoire of the spotted bowerbird Ptilonorhynchus maculatus, Laura A. Kelley and Susan D. Healy, Naturwissenschaften, DOI: 10.1007/s00114-011-0794-zAbstractAlthough vocal mimicry in songbirds is well documented, little is known about the function of such mimicry. One possibility is that the mimic produces the vocalisations of predatory or aggressive species to deter potential predators or competitors. Alternatively, these sounds may be learned in error as a result of their acoustic properties such as structural simplicity. We determined the mimetic repertoires of a population of male spotted bowerbirds Ptilonorhynchus maculatus, a species that mimics predatory and aggressive species. Although male mimetic repertoires contained an overabundance of vocalisations produced by species that were generally aggressive, there was also a marked prevalence of mimicry of sounds that are associated with alarm such as predator calls, alarm calls and mobbing calls, irrespective of whether the species being mimicked was aggressive or not. 
We propose that it may be the alarming context in which these sounds are first heard that may lead both to their acquisition and to their later reproduction. We suggest that enhanced learning capability during acute stress may explain vocal mimicry in many species that mimic sounds associated with alarm.

Vocal mimicry in male bowerbirds: who learns from whom? Laura A. Kelley and Susan D. Healy, Biol. Lett. 23 October 2010, vol. 6, no. 5, 626-629, doi: 10.1098/rsbl.2010.0093

Abstract: Vocal mimicry is one of the more striking aspects of avian vocalization and is widespread across songbirds. However, little is known about how mimics acquire heterospecific and environmental sounds. We investigated geographical and individual variation in the mimetic repertoires of males of a proficient mimic, the spotted bowerbird Ptilonorhynchus maculatus. Male bower owners shared more of their mimetic repertoires with neighbouring bower owners than with more distant males. However, interbower distance did not explain variation in the highly repeatable renditions given by bower owners of two commonly mimicked species. From the similarity between model and mimic vocalizations and the patterns of repertoire sharing among males, we suggest that the bowerbirds are learning their mimetic repertoire from heterospecifics and not from each other.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Dr. Laura Kelly from the University of St. Andrews led the team of researchers. 
Kelly has been studying the bowerbirds for some time, and just last year published a study in Biology Letters, "Vocal mimicry in male bowerbirds: who learns from whom?" In that previous study, the team looked at different male bowerbirds to determine whether the males were learning their vocalizations from other male bowerbirds or directly from the species they mimic. In studying 19 different male bowerbirds, they found that the males were not learning from other males, but rather directly from other bird species, as each bird mimicked the sounds in slightly different ways. At the time, Kelly believed that finding evidence that the bowerbirds learned from their environment was only the start, and planned to find out why they mimic certain birds.

This brings us to the recent study published in Naturwissenschaften. While it was believed that these bowerbirds mimicked predators, Kelly and her team found that predator calls accounted for only 20% of the calls the birds had learned. They found the birds were mimicking sounds from "bully" species and aggressive birds, as well as alarm calls from other species.

From what the researchers determined, these birds mimic alarm and mobbing calls (sounds birds make when their areas are violated by predators) of the different species in their local environment. They believe the bowerbirds learn these vocalizations under stressful circumstances and later reproduce the sounds when they themselves are stressed. Kelly believes that this is the first study to suggest a possible link between stress and vocal mimicry.
Journal information: arXiv

The team's idea is based on work being done by other scientists who are actively engaged in trying to create simulations of our universe, at least as we understand it. Thus far, such work has shown that to create a simulation of reality, there has to be a three-dimensional framework to represent real-world objects and processes. With computerized simulations, it's necessary to create a lattice to account for the distances between virtual objects and to simulate the progression of time. The German team suggests such a lattice could be created based on quantum chromodynamics—theories that describe the nuclear forces that bind subatomic particles. To find evidence that we exist in a simulated world would mean discovering the existence of an underlying lattice construct by finding its end points or edges. In a simulated universe a lattice would, by its nature, impose a limit on the amount of energy that could be represented by energy particles. This means that if our universe is indeed simulated, there ought to be a means of finding that limit. In the observable universe there is a way to measure the energy of quantum particles and to calculate their cutoff point as energy is dispersed due to interactions with microwaves, and this could be calculated using current technology. Calculating the cutoff, the researchers suggest, could give credence to the idea that the universe is actually a simulation. Of course, any conclusions resulting from such work would be limited by the possibility that everything we think we understand about quantum chromodynamics, or simulations for that matter, could be flawed. Citation: Is it real? 
Physicists propose method to determine if the universe is a simulation (2012, October 12) retrieved 18 August 2019 from https://phys.org/news/2012-10-real-physicists-method-universe-simulation.html
Journal information: arXiv

A domino can knock over another domino about 1.5x larger than itself. A chain of dominoes of increasing size makes a kind of mechanical chain reaction that starts with a tiny push and knocks down an impressively large domino. Original idea by Lorne Whitehead, American Journal of Physics, Vol. 51, page 182 (1983). See http://arxiv.org/abs/physics/0401018 for a sophisticated discussion of the physics.

More information: Domino Magnification, arXiv:1301.0615 [physics.pop-ph] arxiv.org/abs/1301.0615

Abstract: The conditions are investigated under which a row of increasing dominoes is able to keep tumbling over. The analysis is restricted to the simplest case of frictionless dominoes that can only topple, not slide. The model is scale invariant, i.e. dominoes and distance grow in size at a fixed rate, while keeping the aspect ratios of the dominoes constant. The maximal growth rate for which a domino effect exists is determined as a function of the mutual separation.

via Arxiv Blog

Most everyone has seen dominoes in action. Small pitted black planks with white dots on them are placed on their ends next to one another, and at some point the first is knocked over onto the second. The force of the first falling onto the second causes it to fall, knocking it down onto the third, and so on. This continues until all the dominoes have been knocked over without any other outside interference. Most domino exhibitions feature planks that are all of the same size, though most people intuitively understand that different sizes could be used, which means a smaller domino can knock over one that is larger. But how much larger? That's the question Leeuwen posed to himself. He turned to math to find the answer and in so doing created a model that predicts not only how much larger a domino can be, but also the chain length patterns that would occur using different growth factors.

Dominoes fall the way they do because when one is stood on end, it possesses potential energy. 
That energy is released when it is pushed over. But because the force necessary to push the domino over is less than the amount of potential energy stored, it is able to knock over a nearby domino that is larger than it is, a phenomenon known as force amplification. To create a mathematical model, Leeuwen had to remove some real-world factors that affect chain reactions of falling dominoes. Real dominoes tend to slide at the bottom as they are knocked over, for example, and sometimes when one strikes another the result is an elastic collision that prevents the second domino from falling over. Also, dominoes sometimes slide against one another as one strikes the next. The result was a model suggesting that the largest growth factor in a perfect world is 2, meaning one domino can knock over another that is twice its size. The model also showed how quickly plank size can grow and still allow for a complete chain reaction: starting with a plank just 10 millimeters high and assuming a growth factor of just 1.7, the planks reach the height of the Empire State Building in only about 22 planks.

[Figure: Successive dominoes. The tilt angle θ is taken with respect to the vertical. Domino 1 hits 0 at the point A. The rotation axis of 1 is the point B and E is that of 0. The normal force f that domino 1 exerts on domino 0 is also indicated. Credit: arXiv:1301.0615 [physics.pop-ph], arxiv.org/abs/1301.0615]

(Phys.org)—J. M. J. van Leeuwen, a physicist at Leiden University in The Netherlands, has created a mathematical model that predicts the maximum incremental size of falling dominoes. He has found, as he describes in a paper uploaded to the preprint server arXiv, that in a perfect world the maximum growth factor is approximately 2. 
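The exponential scaling described above is easy to play with numerically. The sketch below is not van Leeuwen's mechanical model; it only tracks plank height under a fixed growth factor, and the starting height, growth factor, and target height are illustrative assumptions taken from the figures quoted in the text.

```python
# Illustrative sketch of an exponentially growing domino chain.
# Not van Leeuwen's mechanical model: it ignores toppling dynamics and
# only tracks plank height. All parameter values are assumptions.

def domino_heights(start_mm: float, growth: float, count: int) -> list[float]:
    """Heights (mm) of a chain of `count` planks, each `growth` times the last."""
    return [start_mm * growth**i for i in range(count)]

def planks_to_reach(start_mm: float, growth: float, target_mm: float) -> int:
    """Number of planks until one plank meets or exceeds target_mm."""
    n, h = 1, start_mm
    while h < target_mm:
        h *= growth
        n += 1
    return n

chain = domino_heights(10.0, 1.7, 5)
print([round(h, 1) for h in chain])  # → [10.0, 17.0, 28.9, 49.1, 83.5]

# From a 10 mm plank to the ~443 m Empire State Building at factor 1.7:
print(planks_to_reach(10.0, 1.7, 443_000.0))  # → 22
```

Note how sensitive the count is to the growth factor: at a factor of 2 the same target is reached in even fewer planks, while factors only slightly above 1 require hundreds.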
Citation: Physicist creates math model to predict maximum incremental domino size (2013, January 11), retrieved 18 August 2019 from https://phys.org/news/2013-01-physicist-math-maximum-incremental-domino.html
[Figure: Illustration of the reconfigurable device with three buried gates, which can be used to create n- or p-type regions in a single semiconductor flake. Credit: Dhakras et al. ©2017 IOP Publishing Ltd]

In the semiconductor industry, there is currently one main strategy for improving the speed and efficiency of devices: scale down the device dimensions in order to fit more transistors onto a computer chip, in accordance with Moore's law. However, the number of transistors on a computer chip cannot increase exponentially forever, and this is motivating researchers to look for other ways to improve semiconductor technologies.

Journal information: Nanotechnology

More information: Prathamesh Dhakras, Pratik Agnihotri, and Ji Ung Lee. "Three fundamental devices in one: a reconfigurable multifunctional device in two-dimensional WSe2." Nanotechnology. DOI: 10.1088/1361-6528/aa7350

Citation: 3-in-1 device offers alternative to Moore's law (2017, June 14), retrieved 18 August 2019 from https://phys.org/news/2017-06-in-device-alternative-law.html

In a new study published in Nanotechnology, a team of researchers at SUNY-Polytechnic Institute in Albany, New York, has suggested that combining multiple functions in a single semiconductor device can improve device functionality and reduce fabrication complexity, thereby providing an alternative to scaling down the device's dimensions as the only method to improve functionality. 
To demonstrate, the researchers designed and fabricated a reconfigurable device that can morph into three fundamental semiconductor devices: a p-n diode (which functions as a rectifier, for converting alternating current to direct current), a MOSFET (for switching), and a bipolar junction transistor (or BJT, for current amplification). "We are able to demonstrate the three most important semiconductor devices (p-n diode, MOSFET, and BJT) using a single reconfigurable device," coauthor Ji Ung Lee at the SUNY-Polytechnic Institute told Phys.org. "While these devices can be fabricated individually in modern semiconductor fabrication facilities, often requiring complex integration schemes if they are to be combined, we can form a single device that can perform the functions of all three devices."

The multifunctional device is made of two-dimensional tungsten diselenide (WSe2), a recently discovered transition metal dichalcogenide semiconductor. This class of materials is promising for electronics applications because the bandgap is tunable by controlling the thickness, and it is a direct bandgap in single-layer form. The bandgap is one of the advantages of 2D transition metal dichalcogenides over graphene, which has zero bandgap.

In order to integrate multiple functions into a single device, the researchers developed a new doping technique. Since WSe2 is such a new material, until now there has been a lack of doping techniques. Through doping, the researchers could realize properties such as ambipolar conduction, which is the ability to conduct both electrons and holes under different conditions. The doping technique also means that all three of the functionalities are surface-conducting devices, which offers a single, straightforward way of evaluating their performance. "Instead of using traditional semiconductor fabrication techniques that can only form fixed devices, we use gates to dope," Lee said. 
"These gates can dynamically change which carriers (electrons or holes) flow through the semiconductor. This ability to change allows the reconfigurable device to perform multiple functions. In addition to implementing these devices, the reconfigurable device can potentially implement certain logic functions more compactly and efficiently. This is because adding gates, as we have done, can save overall area and enable more efficient computing."

In the future, the researchers plan to further investigate the applications of these multifunctional devices. "We hope to build complex computer circuits with fewer device elements than those using the current semiconductor fabrication process," Lee said. "This will demonstrate the scalability of our device for the post-CMOS era."
[Photo: EcoHealth Alliance PREDICT field technician in Bangladesh holding up a Rousettus leschenaultii fruit bat after sampling for viral discovery. Credit: EcoHealth Alliance]

Scientists know that many of the viral threats we humans will face in the future are likely to come from viruses that already exist but reside in other species, particularly other mammals. The animal hosts have built up some degree of immunity to them, but we have not. Thus, if they jump to us, the result can be devastating. In this new effort, the researchers sought to catalogue all of the known viruses that infect mammals around the globe and identify which are most likely to jump to humans. To create such a catalogue, the researchers built a database that held information on 754 mammal species, representing 14 percent of all known mammals. They also added approximately 600 known viruses that infect mammals (of which a third were known to jump to humans) and which animals they infect. Next, they created mathematical models that use the information in the database to estimate the likelihood of a virus jumping to humans.

The researchers report that their models suggest the likelihood of a virus jumping from a mammal species to humans depends heavily on species and geography. Bats were found to carry the largest number of viruses likely to jump to humans, and the areas where this was most likely to occur were South and Central America. Primates posed the second largest risk, particularly in Central America, Africa and Southwest Asia. Rodents came in third, with the risk most pronounced in North and South America and Central Africa.

Citation: Researchers identify mammals that are most likely to harbor viruses risky to humans (2017, June 22), retrieved 18 August 2019 from https://phys.org/news/2017-06-mammals-harbor-viruses-risky-humans.html

This document is subject to copyright. 
Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

Information gleaned from the system created by the researchers could prove more important over time as more data is added and the risk of a virus jumping rises. The hope is that it can be used to predict the next jump, allowing health officials time to prepare, or perhaps even to prevent it from happening.

[Photo: Mother and baby macaque (Macaca fascicularis) at an EcoHealth Alliance PREDICT field site in Thailand. A local person (blurred in the background) and human food on the ground represent human-wildlife contact, a factor found to be significant for zoonotic diseases in the global models. Credit: EcoHealth Alliance]

Journal information: Nature

More information: Kevin J. Olival et al. Host and viral traits predict zoonotic spillover from mammals, Nature (2017). DOI: 10.1038/nature22975

Abstract: The majority of human emerging infectious diseases are zoonotic, with viruses that originate in wild mammals of particular concern (for example, HIV, Ebola and SARS). Understanding patterns of viral diversity in wildlife and determinants of successful cross-species transmission, or spillover, are therefore key goals for pandemic surveillance programs. However, few analytical tools exist to identify which host species are likely to harbour the next human virus, or which viruses can cross species boundaries. Here we conduct a comprehensive analysis of mammalian host–virus relationships and show that both the total number of viruses that infect a given species and the proportion likely to be zoonotic are predictable. 
After controlling for research effort, the proportion of zoonotic viruses per species is predicted by phylogenetic relatedness to humans, host taxonomy and human population within a species range—which may reflect human–wildlife contact. We demonstrate that bats harbour a significantly higher proportion of zoonotic viruses than all other mammalian orders. We also identify the taxa and geographic regions with the largest estimated number of 'missing viruses' and 'missing zoonoses' and therefore of highest value for future surveillance. We then show that phylogenetic host breadth and other viral traits are significant predictors of zoonotic potential, providing a novel framework to assess if a newly discovered mammalian virus could infect people.

(Phys.org)—A team of researchers with the EcoHealth Alliance has narrowed down the list of animal species that may harbor viruses likely to jump to humans. In their paper published in the journal Nature, the group outlines the process they used to collect viral data on mammals around the globe, sort the viruses into groups, and list where their hosts live. James Lloyd-Smith of the University of California offers a News & Views piece on the work in the same journal issue.
Inspired by the architecture and design of the city, Shaher-e-Dilli is a series of renderings on Delhi. Through this collection of work the artist is trying to bring to the limelight the referential sketch that serves as a visual diary, a record of an architect's discovery. There is a certain joy in their creation, which comes from the interaction between the mind and the hand. Our physical and mental interactions with drawings are formative acts. A sketch may serve a number of purposes: it might record something that the artist sees, it might record or develop an idea for later use, or it might be used as a quick way of graphically demonstrating an image, idea or principle.

What one finds in these sketches is the magic hidden in the ruins and monuments spread all over the city. From quaint corners to tombs, every historical landmark of the city is captured in these sketches, each with its own fascinating story to share.
Delhi can't resist its watering mouth as the most cherished food fantasy comes alive. Paatra, the Indian cuisine restaurant at Jaypee Vasant Continental, announces a special kebab promotion: An Affair with Kebab. The fare is a culinary delight for both vegetarians and non-vegetarians, with an array of succulent kebabs cooked perfectly in the tandoor. Relish unlimited quantities of five vegetarian and non-vegetarian kebabs along with biryani, dal, breads and desserts, accompanied by a bucket of beer. The scrumptious kebabs can also be paired with the finest selection of liquor available at Paatra. So head on and order some!

Where: Paatra, Jaypee Vasant Continental
When: 23 May to 8 June
Price: 1699 plus taxes per person
Timings: Both lunch and dinner
Kolkata: State Urban Development and Municipal Affairs minister Firhad Hakim stressed the need for holding more and more trade fairs across the state, for developing the spirit of entrepreneurship among people. "Earlier, there was not much interest in doing business in the state, but people like Chandra Shekhar Ghosh of Bandhan Bank have been great initiators in starting micro industries in the state. If we have more and more such trade fairs, there will be a rise in people-to-people interaction and the spirit of entrepreneurship will develop," Hakim said at the inaugural ceremony of the India International Kolkata Trade Fair (IIKTF) at Karunamoyee Ground in Central Park, Salt Lake.

The minister, who was the chief guest at the ceremony, maintained that such platforms are great for initiating links between buyers and sellers. "I request Bengal Chamber to initiate classes in entrepreneurship, where young people can come and learn not just about manufacturing products, but also how and where to sell them. Our Chief Minister is very supportive of economic activity and we need to develop entrepreneurial skills so that the per capita income of Bengal rises with economic growth," Hakim added.

It may be mentioned that this is the first edition of IIKTF. The total number of stalls is 305 and the fair is on from June 1 to 11. "The Bengal Chamber of Commerce & Industry is the oldest and one of the most respected institutions of its kind in India. It is a powerful enabler, lobbying for the development of the economy and infrastructure in India. The fair was jointly organised by The Bengal Chamber of Commerce & Industry and G S Marketing Associates. The purpose of the trade fair is to promote business and trade around the region. The partner country of IIKTF is Bangladesh and the focus countries are Thailand and Sri Lanka this year. 
The other countries which are participating are Turkey, Egypt, China, Pakistan, Afghanistan, Myanmar and the Netherlands," stated Chandra Shekhar Ghosh, president of the Bengal Chamber and chairman and MD of Bandhan Bank.

The industry segments whose products are on display and sale include international companies, government departments, furniture and interiors, processed food, lifestyle, health and fitness, electronics, children's products, an auto show, etc.
The Union Cabinet on Wednesday gave approval to three mega social security initiatives — one pension and two insurance schemes — to be launched by Prime Minister Narendra Modi on May 9. The schemes — Pradhan Mantri Jeevan Jyoti Bima Yojana (PMJJBY) and Pradhan Mantri Suraksha Bima Yojana (PMSBY) and Atal Pension Yojana (APY)– will be launched in Kolkata, the capital of West Bengal where assembly elections are due next year. “Cabinet approves operationalisation of APY, PMJJBY & PMSBY in all states and UTs,” said a tweet by the PIB. An official release said the decision on APY will benefit 2 crore subscribers in the first year, and that on PMSBY and PMJJBY will provide affordable personal accident and life cover to vast population.
Kolkata: A study by the West Bengal Board of Secondary Education (WBBSE) has revealed that only a few students from Bengali-medium government and government-aided schools in Kolkata appeared for the Madhyamik examination this year. As per WBBSE reports, there are around 44 schools in the city itself from which five or fewer students sat for the Madhyamik examination this year. There are 152 schools in which the number of students who appeared was 20 or fewer. The total number of state and state-aided schools in the city presently stands at 458.

"These statistics are a clear pointer that students are shifting to English-medium schools. We have already started English medium in some schools in the city to address this issue," a senior official of WBBSE said.

There was a time when schools like Brahmo Boys School and Oriental Seminary were among the top schools in terms of student enrollment. Rabindranath Tagore received his primary education at Brahmo Boys. However, this year only six students from this school appeared for Madhyamik. "We are not getting enough students even though we have made efforts to bring in students," said Amit Chandra, principal of Brahmo Boys.

The number of students who appeared from Oriental Seminary stands at 14. A solitary student sat for the Madhyamik examination from each of four schools: Kumar Ashutosh Institution in Paikpara, Bangabasi Collegiate School, Ahiritola School and Sri Vidyamandir Girls' School. Taltala High School had two students appear for the secondary examination, as did Hindu Academy. A board official alleged that recruitment of teachers on political grounds during the Left Front rule has contributed to this trend. 
A number of primary schools run by the Kolkata Municipal Corporation are also facing similar problems. "We have found out that in some schools the number of students is relatively high, while in others it is distressingly low. We will be forwarding the report to the state Education department, so that necessary steps can be taken to address the issue," a senior WBBSE official said.
Anyone involved in plant pathology can tell you that the genus Fusarium is one of the most damaging groups of fungi to crops. Many species of Fusarium cause disease on different crops: some cause devastating root rots and vascular diseases, some cause cankers on branches and stems, and some can even infect foliage. Yield losses can be dramatic in some circumstances. For instance, in 1999, Fusarium head blight of winter wheat alone caused $2.7 billion in losses in the northern Great Plains and central USA. In tomato, when disease is severe, crop losses can reach 80%.
In Cannabis, two formae speciales of F. oxysporum have been described as causing Fusarium wilt: Fusarium oxysporum f. sp. vasinfectum (FOV) and Fusarium oxysporum f. sp. cannabis (FOC) [2, 21]. Furthermore, Fusarium solani has been found to be prevalent in hydroponically grown Cannabis in Canada. In addition, F. brachygibbosum and F. equiseti (along with F. oxysporum and F. solani) have been isolated from symptomatic field-grown Cannabis plants in northern California. F. oxysporum isolates that match neither of the formae speciales cannabis or vasinfectum have also been recovered from wilted Cannabis plants.
Fusarium oxysporum species complex
Fusarium oxysporum is a diverse species: some isolates are harmless soil inhabitants, while others are plant pathogens. Based on DNA sequencing, arguments have been presented that there may be at least two phylogenetically distinct species within the complex, and that most plant pathogens belong to one of these groups (PS2).
Forma specialis is not a phylogenetically recognized category; it is a way for plant pathologists to refer to particular isolates of a Fusarium species that attack a particular plant species. A forma specialis is generally named after the diseased plant it was isolated from, which is why one of the isolate groups that infect Cannabis is called F. oxysporum f. sp. cannabis. The two formae speciales that infect Cannabis can be distinguished by their host range: FOC only infects Cannabis, whereas FOV has a wider host range and can infect cotton, coffee, and other plants. At least 100 different host-specific formae speciales of F. oxysporum have been described. Sexual reproduction has not been observed in this species (all observed spores of F. oxysporum are asexual), but horizontal gene transfer likely played an important role in the evolution of this organism.
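Because formae speciales are defined operationally by host range rather than by phylogeny, one can think of them as entries in a simple lookup table. The toy sketch below (purely illustrative, not a diagnostic tool) encodes the host ranges stated above (FOC: Cannabis only; FOV: Cannabis plus cotton, coffee, and others) and asks which formae speciales are consistent with an infection observed on a given host.

```python
# Toy sketch: formae speciales distinguished operationally by host range.
# Host lists are simplified from the text; real host ranges are broader
# and determined by inoculation trials, not lookup tables.

HOST_RANGES = {
    "F. oxysporum f. sp. cannabis": {"cannabis"},
    "F. oxysporum f. sp. vasinfectum": {"cannabis", "cotton", "coffee"},
}

def candidate_formae(host: str) -> list[str]:
    """Formae speciales whose known host range includes the given host plant."""
    return [fs for fs, hosts in HOST_RANGES.items() if host.lower() in hosts]

print(candidate_formae("cotton"))    # only FOV is known to infect cotton
print(candidate_formae("cannabis"))  # both formae speciales are candidates
```

The point of the sketch is that an isolate from cotton immediately narrows the candidates, while an isolate from Cannabis cannot be assigned to a forma specialis without further host-range testing.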
Infections begin with the germination of spores or the growth of mycelium into plant roots through injured areas or sites of lateral root emergence. The filamentous mycelium penetrates the xylem vessels and begins colonizing the plant's vasculature. The infection becomes systemic, and the fungus can form sporulating structures known as sporodochia on aerial parts of the plant. The sporodochia produce two types of conidia: microconidia and macroconidia (macroconidia are larger, multicellular, and multiseptate). Conidia are carried by air currents. When it comes to overwintering, Fusarium can survive in infected crop residues, but it can also produce overwintering asexual spores known as chlamydospores. Chlamydospores don't require special structures to form; they can develop at the terminal ends of fungal hyphae or within the hyphae (intercalary). They are generally thick-walled, melanized spores.
This disease of Cannabis has been amplified through human activity. Fusarium oxysporum is a deadly pathogen, and it has been foolishly used as a mycoherbicide all over the world in attempts to kill 'illicit' Cannabis plants [2, 3]. All cultivars that have been tested are susceptible to the disease. In native ecosystems, F. oxysporum is not known to be a major disease risk. It seems that through intensive agriculture and monocropping, more pathogenic and virulent isolates have been able to evolve and amplify their populations clonally.
Fusarium solani species complex
F. solani, much like F. oxysporum, was previously divided into formae speciales based on host range. However, recent phylogenetic studies have determined that the different formae speciales are really distinct species, and the F. solani species complex (FSSC) is divided into at least 60 unique species. Some of these species have been renamed, but many are still unnamed and are referred to by 'haplotype number', which is essentially a numeric label for a particular genotype. The FSSC has a wide host range, and even individual species within the FSSC can have broad host ranges. Unlike in the FOSC, sexual reproduction has been observed in some species in the FSSC.
The life cycles, infection strategies, and symptoms are very similar between the FOSC and FSSC, so when I mention Fusarium from here on out, I will be referring to all Fusarium species capable of causing root rots of Cannabis.
Chlamydospores require a conducive environment to germinate and cause disease. In soil, this generally means the presence of root exudates. Because of this, only spores in very close proximity to roots actually pose a disease risk. The rhizosphere (which I will define as the area of soil directly affected by root exudates) generally extends less than 1 mm from the plant root [9, 10]. The volume of soil that falls within this distance of roots is typically under 35%, even for plants with extensive roots and highly active exudation. Furthermore, evidence of targeted growth of germinated spores towards roots (i.e., chemotaxis) is lacking. Infections may fail to establish, especially on a rapidly growing root: a spore germinates in response to root exudate, but by the time the germ tube reaches the root, the root tip has often advanced past, so the fungus encounters more differentiated tissue that is better able to mount defensive responses.
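The claim that only near-root spores matter can be made concrete with a little cylinder geometry: treating roots as non-overlapping cylinders of radius r, with total root length L per unit soil volume (root length density), the fraction of soil within distance d of a root surface is roughly L times the annulus cross-section pi*((r+d)^2 - r^2). The sketch below is a back-of-the-envelope illustration; the parameter values are assumptions for demonstration, not measurements from the cited studies.

```python
import math

# Back-of-the-envelope rhizosphere volume fraction, assuming roots are
# non-overlapping cylinders. All parameter values are illustrative only.

def rhizosphere_fraction(root_length_density_cm_per_cm3: float,
                         root_radius_cm: float,
                         rhizosphere_depth_cm: float) -> float:
    """Fraction of bulk soil lying within `rhizosphere_depth_cm` of a root surface."""
    r, d = root_radius_cm, rhizosphere_depth_cm
    shell_area = math.pi * ((r + d) ** 2 - r ** 2)  # annulus cross-section, cm^2
    # Cap at 1.0: with very dense roots the idealized shells would overlap.
    return min(1.0, root_length_density_cm_per_cm3 * shell_area)

# A fairly dense root system: 8 cm of root per cm^3 of soil, 0.02 cm root
# radius, and a 0.1 cm (1 mm) rhizosphere depth as described above.
frac = rhizosphere_fraction(8.0, 0.02, 0.1)
print(f"{frac:.0%}")  # → 35%
```

With these assumed values the estimate lands right around the "under 35%" figure quoted above, which is the intuition the paragraph is after: even a vigorous root system leaves most of the soil volume, and hence most spores, outside the rhizosphere.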
Flower and Seedling Infection
Along with Pythium, Fusarium can cause damping off of seedlings. Infection can begin in roots or the hypocotyl and can quickly invade the vasculature .
Fusarium usually begins its infection cycle from chlamydospores in the soil. However, as the infection progresses, sporodochia form on the crown and lower stem that produce conidia. Conidia can become airborne and can infect aerial portions of the plant. In particular, it can readily form flower infections and cause bud rot. Flower-infecting species include F. solani, F. oxysporum and F. equiseti . F. solani appears to be the most aggressive species. It appears that the F. oxysporum that has been isolated from flowers is the same type involved in root infections . These Fusarium species can directly infect the bracts and pistils of flowers.
In hydroponics, Fusarium can be particularly aggressive. In fact, researchers generally use aqueous spore suspensions in experiments to guarantee that the plant is inoculated. First of all, the spores are essentially in suspension and circulate around the water, almost guaranteeing that they will come in direct contact with the roots. Furthermore, the spores can directly adhere to the root tips, foregoing the need for germ tubes to find their way through soil to the plant roots and decreasing the chance that the root can outgrow the advancing fungal mycelium. This allows the fungus easy access to susceptible meristematic tissue. Root tip infection does not occur for all plant species, but spore adhesion to roots certainly does raise disease risk for any plant species.
Once the fungal mycelium contacts a root, the fungus proliferates into a hyphal network to maximize points of contact. They likely utilize cell wall degrading enzymes to form an opening, and the mycelium can then penetrate directly through epidermal cells or may grow in between cells . Either way, growth advances towards the root cortex.
Necrotroph or Biotroph?
For those who have read my articles on two other major Cannabis pathogens, bud rot and powdery mildew, you may be aware that pathogenic fungi employ a variety of strategies. A biotroph requires the host cells to remain alive and extracts nutrients from them — a true parasite, such as powdery mildew, which manipulates host immune responses. A necrotroph, such as the bud rot pathogen Botrytis cinerea, instead induces or causes cell death to overcome plant resistance responses and to have dead organic matter to feed on (it may be argued that bud rot has a brief biotrophic phase, but in general it can be viewed as necrotrophic).
Strictly speaking, Fusarium is necrotrophic, because even isolates that do not cause any visible damage to a given plant (nonpathogenic isolates) are observed to grow intracellularly and cause cell death on a microscopic level. However, as mentioned, there are many cases of F. oxysporum isolates causing no visible disease or crop losses, and even examples of isolates that cause disease symptoms on some plant species but can colonize the roots of other plant species without causing visible disease. In these cases, though necrotrophy is visible on a microscopic level, F. oxysporum may be considered an endophyte, and the complexity of the relationship between endophytic Fusarium isolates and their plant hosts is not fully understood [1, 15, 16, 17, 18]. In fact, F. oxysporum can even be isolated as an endophyte from nonsymptomatic Cannabis plants.
There have been some conflicting reports as to how the wilt disease progresses (this will mostly focus on studies done with F. oxysporum in flax), but the differences might be attributable to environmental differences between studies, differences in how microscopic images were interpreted, or it may even be evidence that different isolates within a given forma specialis may span a spectrum of necrotrophic and biotrophic lifestyles.
Disease Cycle Proposition 1: The extended biotrophic phase
- The fungus has an extended biotrophic phase in which infected cells remain viable and the fungus can continually be isolated from seemingly disease-free root tips.
- After entering the xylem vessels, the fungus grows in the vessels and feeds on the nutrients carried within the xylem. It continues to grow until the vessels become occluded (blocked), either through the accumulation of fungal biomass or through plant responses such as forming tyloses.
- After xylem occlusion and plant wilting/death, the fungus then grows out of the vasculature and begins a necrotrophic phase in which it begins killing and feeding on all other plant tissues.
Disease cycle proposition 2: The true necrotroph
- No biotrophic phase observed, cell death is common among all cells the fungus comes in contact with.
- Infection of roots leads to root cell death and necrosis before the fungus even reaches xylem vessels (i.e. root rot can precede systemic vascular infection)
- Fungus aggressively colonizes both vasculature and other tissues
- Initial symptoms can look similar to nitrogen deficiency. Chlorosis of lower leaves and slight wilting becomes evident. Plant stunting is common, especially in the case of F. solani infection.
- The crown region of the plant becomes darkly discolored and sunken. Discoloration of the vasculature can extend up to 15 cm from the soil surface.
- In hydroponics, roots become discolored
- When xylem vessels become occluded, whole plants can begin to wilt.
- In this hydroponic system, the Fusarium wilt ended up killing the plant.
- Sporodochia form on the necrotic stem and the spores can become airborne, infecting surrounding plants. In humid conditions, mycelium can grow out of the stem
- Fusarium can cause damping off in seedlings and clones as seen in the following tray of clones:
- Fusarium oxysporum can cause bud rot! When inoculated on flowers, they can cause necrosis of the buds very similar to Botrytis cinerea. The mycelium is usually much more white than the mycelium from B. cinerea.
- Depending on where the infection occurs, wilting can be evident on some branches/colas but not others.
What Factors Favor Fusarium Development?
For F. oxysporum f. sp. lycopersici (the forma specialis that infects tomato), the following factors favor wilt development (28):
- Soil and air temperatures of 28°C (Too warm (34°C) or too cool (17-20°C) will inhibit development)
- Low nitrogen and phosphorus, high potassium
- Low soil pH
- Short day length
- Low light
- Use of ammonium nitrogen
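As a quick way to reason about these factors together, they can be encoded as a simple checklist. This is purely an illustrative sketch of the tomato-wilt factor list above — the pH and day-length cutoffs are my own assumed values, and this is not a validated disease model.

```python
def wilt_risk_factors(soil_temp_c, soil_ph, day_length_h,
                      low_n_p=False, high_k=False,
                      low_light=False, ammonium_n=False):
    """Return which of the listed wilt-favoring conditions are present.

    Temperature thresholds follow the F. oxysporum f. sp. lycopersici
    list above: ~28 C favors disease; <20 C or >34 C inhibits it.
    """
    risks = []
    if 20 < soil_temp_c < 34:
        risks.append("temperature near optimum for wilt")
    if soil_ph < 6.0:        # "low soil pH" -- 6.0 is an assumed cutoff
        risks.append("acidic soil")
    if day_length_h < 12:    # "short day length" -- assumed cutoff
        risks.append("short days")
    if low_n_p:
        risks.append("low N and P")
    if high_k:
        risks.append("high K")
    if low_light:
        risks.append("low light")
    if ammonium_n:
        risks.append("ammonium nitrogen source")
    return risks

# Hypothetical example: warm, acidic soil under short, dim days --
# four of the listed factors are present.
print(wilt_risk_factors(28, 5.5, 10, low_light=True))
```

The point is not the score itself but that several of these levers (pH, nitrogen form, light schedule during veg) are under a grower's control.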
Root exudates appear to stimulate spore germination and drive plant infection. However, certain techniques based on manipulating the soil microbiome may be beneficial in controlling the severity of Fusarium wilt.
- Amending soil with organic matter to promote microbial activity may make soils more disease-resistant .
- Soil treatments aimed at reducing the number of viable fungal propagules in the soil such as anaerobic soil disinfestation (ASD) and solarization .
- ASD is a process of flooding a field and covering with a plastic ‘mulch’. Anaerobic bacteria multiply and gasses from these bacteria accumulate under the plastic mulch.
- Solarization is the process of putting a black plastic over a field during hot seasons in direct sun to raise soil temperatures.
- Certain bacteria or microbial groups may contribute to how conducive a soil is to disease development
- For instance, a species of Arthrobacter in suppressive soils was associated with greater levels of lysis of fungal germ tubes from soil chlamydospores .
Resistant strains can undoubtedly be bred for. In hemp, the SF and CF cultivars appear to be more resistant than the cultivar Iran. I am not sure which marijuana cultivars are most resistant to Fusarium, and I am struggling to find information on this. Comments with relevant information would be appreciated.
I will list some approved spray/soil drench control methods, but I cannot promise the effectiveness of any method, as much cannot be found in the literature.
- In Canada, possible biocontrol agents for Fusarium infections in foliage and flowers include Prestop WP (Gliocladium catenulatum strain J1446) and Rootshield WP (Trichoderma harzianum Rifai strain RRL-AG2) .
- These microbes will be counted on CFU testing, so should not be applied late in locations that test using this method.
- In Canada, approved biocontrol agents for root-infecting pathogens are Rootshield WP (Trichoderma harzianum Rifai strain RRL-AG2) and Prestop WP (Gliocladium catenulatum strain J1446) .
- In California, Gliocladium virens, Trichoderma harzianum, and Bacillus amyloliquefaciens strain D747 are approved biofungicides .
- Other possible biocontrol biocontrol agents include Rhapsody (Bacillus subtilis strain QST 713) and Mycostop (Streptomyces griseoviridis strain K61) .
In California, extract of giant knotweed (Reynoutria sachalinensis), sold as REGALIA® Rx Biofungicide, is an approved fungicide.
Kelp extracts (which contain arachidonic acid) and crab meal/insect frass (which contain chitin) may be useful soil amendments for priming plant resistance to soil-borne fungal pathogens.
In Oregon, potassium phosphite products such as Agri-Fos (which also happen to be a good source of potassium and phosphorus in flower) are approved as plant protectants and fungicides.
- Control and prevention should include efforts to reduce inoculum loads. For growers using hydroponics (including coco) and/or indoor grows: ultraviolet light in ducting and even the grow area (which may also increase cannabinoid production if used correctly), ozonation of the grow area (too high a level may have negative effects on plant and human health), chlorination of water used in hydroponics, hydrogen peroxide flushes of the root zone (or products such as ZeroTol, which also contains peroxyacetic acid), and heat pasteurization and/or mechanical filtration of water.
- It is a good idea to remove wilted plants to prevent aerial spore transfer and quickly remove any infected flowers or branches, especially in environments of high humidity.
- In hydroponics, keeping nutrient solution at temperatures between 17℃ and 22℃ is ideal for preventing pathogens, promoting water oxygenation, and preventing growth retardation of the plants.
- Always sterilize your tools in between cuts, and wear proper PPE to avoid introducing inoculum.
- Despite common conceptions that Fusarium grows best in flooded soils, many Fusarium species actually grow best in aerobic, well-draining soil. Another study similarly found F. oxysporum f. sp. lycopersici not to grow in saturated soils. However, plants were actually resistant to infection at soil moisture contents of 13%–19%.
- In short, it is good to let your soil dry between waterings (not to the point of plant wilting, though). Fusarium grows best in aerobic (well-draining), moist but not flooded media (i.e. most coir- or peat-based media).
- Anaerobic soil disinfestation (ASD) may be a good way to reduce soil inoculum levels between grows in no-till systems.
The most important factor in preventing flower infections from Fusarium is probably humidity. Flower infection relies on airborne conidia released from the sporodochia (spore-bearing structures) on aerial tissue of the plant. Humidity needs to be high in order for these sporodochia to form successfully. A different Fusarium species, F. graminearum, requires humidity of over 85% RH to form perithecia (the sexual spore-bearing structure of that species).
For Fusarium oxysporum f. sp. erythroxyli (this paper unfortunately discusses the possible use of this Fusarium species to kill the ‘illicit narcotic’ coca plant), the isolate was found to sporulate at relative humidities (RHs) between 75% and 100%. Fusarium's primary route of infection is through the roots in soil, and it is a bit easier to control for aerial infections than soil infections in Cannabis.
General humidity control aimed at appropriate vapor pressure deficit conditions, or slightly lower for IPM reasons (around 60% RH in veg, 50% in flower, down to 40% in the last couple weeks of flower), should be enough to prevent most aerial sporulation. Good airflow and circulation is definitely recommended to reduce high-humidity microclimates.
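The RH targets above can be related to vapor pressure deficit using the standard Tetens approximation for saturation vapor pressure. This sketch just shows the conversion at an assumed 25°C canopy temperature; the RH values are the ones suggested above, not hard rules.

```python
import math

def vpd_kpa(temp_c, rh_percent):
    """Vapor pressure deficit (kPa) from air temperature and relative humidity.

    Uses the Tetens approximation for saturation vapor pressure over water.
    """
    svp = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    return svp * (1 - rh_percent / 100.0)

# The RH targets suggested above, at an assumed 25 C canopy temperature:
for stage, rh in [("veg", 60), ("flower", 50), ("late flower", 40)]:
    print(f"{stage}: {rh}% RH -> VPD {vpd_kpa(25, rh):.2f} kPa")
# veg ~1.27 kPa, flower ~1.58 kPa, late flower ~1.90 kPa
```

Tracking VPD rather than raw RH accounts for temperature: the same 85% RH sporulation threshold is reached at a much lower absolute moisture load in a cool room than a warm one, which is why cool, humid microclimates in the canopy are the danger zone.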
- Gordon, T. R. (2017). Fusarium oxysporum and the Fusarium Wilt Syndrome. Annual Review of Phytopathology, 55(1), 23–39. https://doi.org/10.1146/annurev-phyto-080615-095919
- McPartland, J. M., & Hillig, K. W. (2004). CANNABIS CLINIC Fusarium Wilt. Journal of Industrial Hemp, 9(2), 67–77. https://doi.org/10.1300/J237v09n02_07
- Council, N. R. (2011). Feasibility of Using Mycoherbicides for Controlling Illicit Drug Crops. The National Academies Press. https://doi.org/10.17226/13278
- Bonanomi G, Antignani V, Capodilupo M, Scala F. 2010. Identifying the characteristics of organic soil amendments that suppress soilborne plant diseases. Soil Biol. Biochem. 42:136–44
- Hewavitharana SS, Mazzola M. 2016. Carbon source–dependent effects of anaerobic soil disinfestation on soil microbiome and suppression of Rhizoctonia solani AG-5 and Pratylenchus penetrans. Phytopathology 106:1015–28
- Greenberger A, Yogev A, Katan J. 1987. Induced suppressiveness in solarized soils. Phytopathology 77:1663–67
- Smith SN. 1977. Comparison of germination of pathogenic Fusarium oxysporum chlamydospores in host rhizosphere soils conducive and suppressive to wilts. Phytopathology 67:502–10
- Mazzola M. 2004. Assessment and management of soil microbial community structure for disease suppression. Annu. Rev. Phytopathol. 42:35–59
- Huisman OC. 1982. Interrelations of root growth dynamics to epidemiology of root-invading fungi. Annu. Rev. Phytopathol. 20:303–27
- Rovira AD. 1969. Plant root exudates. Bot. Rev. 35:35–57
- Olivain C, Humbert C, Nahalkova J, Fatehi J, L’Haridon F, et al. 2006. Colonization of tomato root by pathogenic and nonpathogenic Fusarium oxysporum strains inoculated together and separately into the soil. Appl. Environ. Microbiol. 72(2):1523–31
- Beckman CH. 1987. The Nature of Wilt Diseases of Plants. St. Paul, MN: Am. Phytopathol. Soc. 175 pp
- Recorbet G, Alabouvette C. 1997. Adhesion of Fusarium oxysporum conidia to tomato roots. Lett. Appl. Microbiol. 25:375–79
- Olivain C, Alabouvette C. 1997. Colonization of tomato root by a non-pathogenic strain of Fusarium oxysporum. New Phytol. 137:481–94
- Correll JC, Puhalla JE, Schneider RW. 1986. Vegetative compatibility groups among nonpathogenic root-colonizing strains of Fusarium oxysporum. Can. J. Bot. 64:2358–61
- Gordon TR, Okamoto D, Jacobson DJ. 1989. Colonization of muskmelon and nonsusceptible crops by Fusarium oxysporum f. sp. melonis and other species of Fusarium. Phytopathology 79:1095–100
- Katan J. 1971. Symptomless carriers of the tomato Fusarium wilt pathogen. Phytopathology 61:1213–17
- Scott JC, McRoberts DN, Gordon TR. 2014. Colonization of lettuce cultivars and rotation crops by Fusarium oxysporum f. sp. lactucae, the cause of Fusarium wilt of lettuce. Plant Pathol. 63:548–53
- Turlier M-F, Eparvier A, Alabouvette C. 1994. Early dynamic interactions between Fusarium oxysporum f. sp. lini and the roots of Linum usitatissimum as revealed by transgenic GUS-marked hyphae. Can. J. Bot. 72:1605–12
- Kroes GMLW, Baayen RP, Lange W. 1998. Histology of root rot of flax seedlings (Linum usitatissimum) infected by Fusarium oxysporum f. sp. lini. Eur. J. Plant Pathol. 104:725–36
- Punja, Z. K., & Rodriguez, G. (2018). Fusarium and Pythium species infecting roots of hydroponically grown marijuana (Cannabis sativa L.) plants. Canadian Journal of Plant Pathology, 40(4), 498–513. https://doi.org/10.1080/07060661.2018.1535466
- Coleman, J. J. (2016). The Fusarium solani species complex: ubiquitous pathogens of agricultural importance. Molecular Plant Pathology, 17(2), 146–158. https://doi.org/10.1111/mpp.12289
- Punja, Z., Scott, C., & Chen, S. (2018). Root and crown rot pathogens causing wilt symptoms on field-grown marijuana ( Cannabis sativa L.) plants. Canadian Journal of Plant Pathology, 40. https://doi.org/10.1080/07060661.2018.1535470
- Punja, Z. K., Collyer, D., Scott, C., Lung, S., Holmes, J., & Sutton, D. (2019). Pathogens and Molds Affecting Production and Quality of Cannabis sativa L. Frontiers in Plant Science, 10, 1120. https://doi.org/10.3389/fpls.2019.01120
- Punja, Z. K. (2018). Flower and foliage-infecting pathogens of marijuana (Cannabis sativa L.) plants. Canadian Journal of Plant Pathology, 40(4), 514–527. https://doi.org/10.1080/07060661.2018.1535467
- Department of Pesticide Regulation, C. (n.d.). CANNABIS PESTICIDES THAT ARE LEGAL TO USE. Retrieved March 28, 2020, from http://www.cdpr.ca.gov/cannabis
- Manstretta, V., & Rossi, V. (2015). Effects of Temperature and Moisture on Development of Fusarium graminearum Perithecia in Maize Stalk Residues. Applied and Environmental Microbiology, 82(1), 184–191. https://doi.org/10.1128/AEM.02436-15
- Fusarium oxysporum f. sp. lycopersici. (n.d.). Retrieved March 28, 2020, from https://projects.ncsu.edu/cals/course/pp728/Fusarium/Fusarium_oxysporum.htm
- Gracia-Garza, J. A., & Fravel, D. R. (1998). Effect of Relative Humidity on Sporulation of Fusarium oxysporum in Various Formulations and Effect of Water on Spore Movement Through Soil. Phytopathology, 88(6), 544–549. https://doi.org/10.1094/PHYTO.1998.88.6.544
- Stover, R. H. (1953). THE EFFECT OF SOIL MOISTURE ON FUSARIUM SPECIES. Canadian Journal of Botany, 31(5), 693–697. https://doi.org/10.1139/b53-050
- Clayton, E. (1923). The Relation of Soil Moisture to the Fusarium Wilt of the Tomato. American Journal of Botany, 10(3), 133-147. Retrieved March 29, 2020, from http://www.jstor.org/stable/2435361
- Goswami, R. S., & Kistler, H. C. (2004). Heading for disaster: Fusarium graminearum on cereal crops. Molecular Plant Pathology, 5(6), 515–525. https://doi.org/10.1111/j.1364-3703.2004.00252.x
- Fusarium Wilt in Processing Tomatoes – Seminis. (n.d.). Retrieved March 29, 2020, from https://seminis-us.com/resources/agronomic-spotlights/fusarium-wilt-in-processing-tomatoes/
- Akhter, A., Hage-Ahmed, K., Soja, G., & Steinkellner, S. (2016). Potential of Fusarium wilt-inducing chlamydospores, in vitro behaviour in root exudates and physiology of tomato in biochar and compost amended soil. Plant and Soil, 406(1), 425–440. https://doi.org/10.1007/s11104-016-2948-4
The Hand of Providence
10 Mar 2007, Dr Barry Wright
(Barry is Thornleigh's Church Pastor)
Behind all the events of history on planet earth, we see the God of heaven silently at work in the protection of His people. While at times we may see the enemy triumph for a short time, we need to understand that God holds the key to ultimate victory.
God's overruling providence and sovereignty could not be better seen than through the life of a young Jewish girl born at a time when her people were scattered throughout the ancient world about 475 BC.
Approximately one hundred years before her birth, the leading citizens of the Jewish nation had been taken captive by the Babylonian forces of King Nebuchadnezzar and were forced into exile.
History then records the eventual overthrow of the Babylonian empire by the powerful Persian armies during the time of this Jewish captivity. Persia now becomes a major world empire, which at its height, extended from the northern boundary of Ethiopia to the north-western frontier of India covering 127 separate provinces (Esther 1:1). We are told in Ezra 5: 13 that under the rule of Cyrus the Great an edict was originally issued allowing the captive Jews to return to their homeland in 536 BC in order to rebuild their temple. Years later, a second decree given by King Darius during his reign, confirmed this original decree and saw the completion of the Jewish temple in 515 BC (Nichols, 1954: 459).
While these proclamations provided the opportunity for the Jews to return home, only a comparatively small number of less than 50,000 were to eventually take up the offer (Ibid).
We need to recognise that many of these people had been born in Babylon and had established themselves in profitable businesses and, as such, were less inclined to cross the desert and begin all over again. If the Jewish people had all gone home at this time many of the events that followed would never have happened (Mears, 1983: 164).
After the death of King Darius approximately thirty years later in 486 BC, his son Ahasuerus was to take the throne. The Greek historian Herodotus aptly describes him as a cruel, capricious and very sensual man (Alexander, 1999: 340). This king, also identified as Xerxes by secular historians, was to continue his father's work in holding the massive Persian Empire together. However, he was to suffer a major defeat by the Greeks at a place called Salamis in 480 BC. Historians record that this was one of the most significant battles of that era, allowing the Greeks to maintain their lands and, in turn, forcing the Persians to return to Asia. As such, the Persian armies were to leave Europe forever, never to return (Ibid: 163).
Prior to this infamous battle, King Ahasuerus was to make a great feast bringing all his political and military personnel together to discuss the planning of this important campaign. It is this event that introduces our story in the book of Esther found in Ch. 1: 1-3, 9-12, 19 (Paraphrased) Let's commence reading with Chap 1: 1-3.
V 1-3 'When Xerxes (also known as Ahasuerus) became king of Persia, his empire encompassed one hundred and twenty-seven provinces and stretched from the borders of India to Ethiopia. He ruled His Empire from the city of Susa. In the third year of his reign, he gave a huge banquet for all his officials and administrators. He also invited the heads of the armies of Persia and the Governors and nobles from all across his huge country.'
Let's stop here for a moment
Susa or Shushan was in the province of Elam about 85 miles or 142 km north of the ancient shoreline of the Persian Gulf and a little more than 200 miles or 330 km east of Babylon. This capital city, which was the seat of government at that time, was situated at the eastern edge of the Tigris Valley where it rises to meet the Iranian hills. The spacious Shushan palace now lies amid the three square miles of ruins that are found in that area today. Among the glories of this former palace were walls that were draped with gold, marble pillars and rich material hangings of white and violet cotton, the colours of Persian royalty (Nichols, 1954: 463).
Let's continue our reading in V 9-12
'Meanwhile inside the palace, Queen Vashti was giving a royal banquet for the women. On the last day of the king's banquet when he was in high spirits [or merry with wine], he ordered Mehuman, Biztha, Harbona, Bigtha, Abagtha, Zethar and Carcas, his seven personal eunuchs, to bring Queen Vashti in before his guests. The king wanted her to wear her royal crown and yet dress so scantily that everyone would see what an exceptionally attractive woman she was. When the eunuchs told Queen Vashti the king's request, she refused to go and be put on display. The king became furious. By refusing his request, Vashti was challenging his authority and making him lose face in front of all his guests.
Let us stop here again for a moment.
We need to understand that in wanting to show off Vashti's beauty, the drunken king was to outrage the most sacred rules of Oriental etiquette. Most Persian women would never permit this to happen. The seclusion of the harem was about to be violated for the amusement of a dissolute king and his companions. It was no wonder Vashti refused. However, in order to defend his authority in front of his guests, the king, heeding the advice of his counsellors, deposed the queen.
Let's read v 19
V 19 'Therefore if it please the king, let his majesty issue a proclamation according to the laws of the Medes and the Persians which cannot be changed, that Vashti may never again appear in the king's presence. Then let her royal position be given to someone better than she.'
This decree now opens the way for God's hand of Providence to begin its work. It was to see the rise of an unknown Jewish orphan girl to become the queen of the mightiest empire on earth at that time. The story of this little girl called Hadassah was to illustrate how God could use events and people as select instruments to fulfil His promises to His chosen people (Lockyer, 1986: 355).
Just as in the story of Ruth, we see the important role that women were to play in God's great plan for the salvation of His people (Nichols, 1954: 457). Ruth becomes the ancestress of the Deliverer of Israel and Hadassah saves the people so the Deliverer might come. (Repeat) God had protected the Jewish nation through the centuries for the purpose of blessing the whole world and He was not about to allow them to be wiped from the face of the earth before the Deliverer could come. This was done according to His promise to Abraham (Mears, 1983: 164).
Although God's name is not mentioned throughout the writings of the book of Esther, every page is full of a God who we find behind every word and every deed (Mears, 1983: 163). Matthew Henry, the great Bible commentator confirms this belief by suggesting that if the name of God is not there, His hand surely is (Ibid). God is seen to be involved in directing the many minute events bringing about His people's deliverance from the hand of the enemy (Church, 1971:505). Author Dr. Pierson calls it, 'The Romance of Providence' showing that God has a part in all the events of human life (Mears, 1983: 163). It shows how God used a courageous young woman of surpassing beauty to save her people at a time of crisis when all of them could have been exterminated or wiped from the earth (Nichols, 1954: 457)
All through history God has never let His people go and this should give us a wonderful assurance of His protection in the future. God was to follow the Jewish people in their captivity into Babylon and when the prophets were silent and the temple closed, He was still to be found standing guard (Mears, 1983: 163). When the kings of the earth feasted and forgot, God remembered and it was with His hand that he was to write their doom or, in many cases, moved their hand to work out His glory (Ibid).
'The book of Esther is a major chapter in the struggle of the people of God to survive in a hostile world. Beginning with the book of Genesis, God had made it clear in chapter 12: 1, 3 that He would bless His Covenant People and bring a curse on those who tried to do them harm' (Lockyer, 1986: 357). The historical book of Esther shows how God was able to keep His promise at every stage of history and gives us the faith to trust God to protect from those who continually oppose us (Ibid).
Hadassah was to be like Joseph and David whom God had hidden away for His future purpose. When the day was to arrive He was to bring them to the fore to work out His plan. David was taken from being a shepherd to become a king. Joseph, sold as a slave, was hidden away in a dungeon in Egypt until God was ready to place him in the position of prime minister of that country. We need to recognise that God always has someone in reserve to fulfil His purposes (Mears, 1983: 164). Even those considered to be 'the weakest of the weak' were given the opportunity to 'come to the kingdom for such a time as this'.
We are now to see a little Jewish girl become a Persian queen.
What do we know about her?
The Scriptures in Esther 2: 7 make clear that Hadassah, which in Hebrew means 'myrtle', was a strikingly beautiful Jewish girl whose family had been carried into captivity and who later chose to remain in Persia rather than return to Jerusalem. It was after the death of her parents that she was to be raised by her cousin Mordecai as his own daughter. Their home was in Susa or Shushan, which at the time was the capital city of Persia during the reign of Ahasuerus.
Apart from her beauty, the narrative tells us that Hadassah was recognised as a woman of clear judgement, noble self-sacrifice, and remarkable self-control. As such, Esther 2: 15 says she was respected and admired by all who knew her.
Esther 2: 2-4 tells us that after Queen Vashti was deposed from her royal throne, the King orders that a search be made throughout all the provinces for beautiful young girls and that they be brought into the harem at the citadel at Susa. The one who pleased the King would then be made queen.
Hadassah was one of the girls brought into the harem and it is believed that before being presented to the king her Jewish name was to be changed to Esther, a Persian word meaning 'star'. In this way, her Jewish origins were kept secret as part of the instructions given by Mordecai in Esther 2: 10.
The minute Ahasuerus saw Esther he made up his mind that she would be his queen. This little girl was now to be lifted to the Persian throne at a time when the empire comprised over half the then known world (Mears, 1983: 166). This special event was to take place two years after Ahasuerus' defeat at Salamis, and she was to remain his queen for thirteen years (Ibid).
To mark her coronation, the king not only remitted to all the provinces their usual tribute, but also gave her an allowance made up of one tenth of all the fines collected by his treasury officials (Ibid).
However, it is not long before a dark shadow is cast across this idealistic picture. It comes with the elevation of a man called Haman to become the King's chief minister, his most trusted advisor. We are told in Esther 3-5 that he was an egotistical and ambitious man who was to demand that all the people bow to him when and where ever he passed. This was something that no really devout Jew could ever do in good conscience and Mordecai was to be no exception to this rule. Inflated with pride, Haman could not endure the indifference of even the smallest of his subjects. The fault of Mordecai was suddenly to be magnified into a capital offence and was to include the wholesale massacre of the entire Jewish population. This event, if carried out, would be a precursor to the later Jewish holocausts of history that reach down to modern day.
This ethnic cleansing to rid Persia of the entire Jewish race was to eventually receive the King's assent and was passed into law. However, while Haman had promised a huge bribe into the royal treasury through the eventual seizure of Jewish goods and lands, it seems the King was to decline this offer (as noted in Esther 3: 9-11).
Little did the King realize the far-reaching results that would have accompanied the complete carrying out of this decree, which was designed to take place eleven months from its issue.
The Persian postal system, which was famous throughout the ancient world, was now put into full effect. Horses and riders, similar to the operation of the US pony express, were dispatched to all the Persian provinces. Fresh horses and riders stationed along the postal route would carry the dispatches day and night until they reached their destination. Devised by Cyrus the Great, this system was to be the most efficient postal service ever used. Within a period of two months, a copy of the decree had been issued to all the Persian provinces.
However, in God's great scheme of things, Haman's day of triumph was to be short lived and his joy was only to be endured for a moment.
The crisis facing the Jewish people demanded quick and earnest action. Both Esther and Mordecai came to realise that unless God was to work mightily in their behalf, their own efforts would be futile. Their source of strength was to be found in their communion with God (White, 1943: 601). Instructing the Jews in the city of Susa to fast and pray for three days, Esther prepared to enter before the king. This was a course of action that placed her life in jeopardy for the sake of her people. Esther knew this when she uttered those fateful words in Esther 4:16: 'And if I perish, I perish'.
To enter unsummoned before this cruel and fickle king required courage, tact and resourcefulness and it seems Esther had all three.
The entire fate of the Jews was now to depend on her. She alone had access to the king.
Mordecai's words in Esther 4: 14 were now to ring true when he suggested, 'And who knoweth whether thou art come to the kingdom for such a time as this?'
You know, this is an important question that we could all readily ask of ourselves in relation to the time in which we live. We need to understand that failure is not sin but faithlessness is. We need to act when God speaks. We need to do what is right and learn to leave the rest to God (Mears, 1983: 168).
God's providence was now being shown as the king favourably accepts her audience by holding out his sceptre and with tact and skill Esther is able to expose Haman's plot and his true character to the king. In the King's initial response in Esther 5: 3 he asks Esther for her request and, at the time, is prepared to give the queen 'even up to half of his kingdom'.
For two days Esther was to keep the King in suspense while preparing him for the real shock. However, before this takes place, the God of Heaven begins His work. Esther chapter 6 records that the king is unable to sleep, so he calls for the book of records to be read to him. It is here, written in the court records, that he is reminded that Mordecai the Jew had discovered a plot on his life and had prevented it from taking place. The King now wants to reward this faithful servant.
When Satan put it into the heart of Haman to devise Mordecai's death, God put it into the heart of the King to arrange for Mordecai's honour.
From the lips of this timid, retiring young woman came the denunciation of Haman's monstrous plan as she pleaded not only for her own life, but for the lives of her people. As a result, the King granted Esther's wish and, while the first decree could not be undone, a second decree was issued to allow the Jews the opportunity to defend themselves and their properties from their enemies. This was to bring about their miraculous deliverance, which has been celebrated by the Jewish people down through history to our modern day in what is known as the feast of Purim.
In a dramatic twist of plot, Haman is hanged on the gallows he built for Mordecai's execution, while Mordecai is promoted to prime minister (Lockyer, 1986: 356, 357).
Esther stands out as God's chosen one who came to the kingdom for such a time as this.
All through time God has used men and women to fulfil His purposes. Many have come from obscure backgrounds, but when the time was right God was to bring them forward to change the very course of history.
One such man to be found in more modern times was born in a log cabin on February 12, 1809 near Hodgenville, Kentucky in the United States of America. His parents, Thomas and Nancy Hanks Lincoln were members of a Baptist congregation that had recently separated from another church due to their opposition to slavery (http://home.att.net/~rjnorton/Lincoln77.html).
The historical records show that Abraham Lincoln was to grow up in a poor dirt-farming family in the upper South and lower Midwest without privilege, position, or much formal education (Miller, 1992: 15). Overall, his formal schooling amounted to less than twelve months throughout his entire life (Nault, 1988, Vol 12: 312). It would seem that the world of his upbringing was closer to Puritanism than anything else, and as such, he, like the common people, was deeply religious, believing without question in a God and the unseen world (Ibid).
As with many families, his home had very few books, but it did have a Bible. Lincoln evidently read this with very great care. Throughout the early period of his life he was also constantly borrowing books from his neighbours, preferring to take the time to read than to work in the fields (Ibid).
It was also during his growing up years that he found the harsh infighting between the various denominations and with the village atheists to be nothing short of repulsive. As such, Lincoln never belonged to a particular church organization for very long (Ibid).
One of his greatest instructors throughout his life was to be found in the reality of death, the coldest of all masters (Ibid). The death of his mother when he was nine, the death of his beloved sister shortly after her marriage, the death of two of his own sons and many of his close friends in the early days of the Civil War gave him no escape from the mysteries of God and the universe (Ibid).
The importance of mothers can never be overestimated. Lincoln remembered little of his own biological mother, but years later, in reference to his stepmother, he made the following statement: 'God bless my mother; all that I am or ever hope to be I owe to her' (Nault, 1988: 312). It was from his mother that he learnt many important lessons of patience, honesty and kindness.
Lincoln reached his full height of 193 cm (6' 4") long before he was 20. He was thin and awkward, big-boned and strong, with a homely face, dark skin and coarse black hair that stood on end (Ibid: 313). Much of his strength came from splitting logs for fence rails and ploughing fields, not only for his dad but also for neighbours when his father could spare him. However, his greatest asset was to be his ability as a speaker. Even as a boy, Lincoln amused himself and others by imitating well-known preachers and politicians who had recently spoken in the area (Nault, 1988: 313). It was this gift that God was later to use in his rise to eventually become the 16th President of the United States of America and the first from the Republican Party.
At the age of 33, he married a girl called Mary Todd and became a successful attorney at law. One of his greatest sources of strength was to be seen in his iron will (Ibid: 310). This characteristic was to be honed well by his strong determination to overcome the many failures that had taken place in his life following on from his failure in business and farming to his many attempts to obtain political office. Over a period of 38 years Lincoln placed his trust in God and was not prepared to give up. His final success was to be seen in 1861 when he was finally elected the president of the United States of America.
There is no question that the hand of God's providence had been at work and there was no doubt that Lincoln, like Esther, had been called by God to the kingdom for such a time as this.
At the time of his appointment the American people knew little about him. There was nothing that they could see in his past history that showed any form of preparation for the greatest crisis ever to be faced in the nation's history. With less than 40 per cent of the popular vote, and seen as a careless and inefficient administrator, Lincoln was to be faced with the greatest test of his life. Dominating his presidency were the American Civil War and the issue of slavery. The war itself was a tragic conflict that resulted in more casualties than any other in US history. More than 525,000 men died during the four-year conflict and, interestingly enough, this was mainly from disease. The total cost to both sides was in the order of $15 billion (Ibid: 311).
Lincoln's two great assets were firstly, his ability to express his convictions clearly and forcefully so that millions of Americans were to take his beliefs as their own, and secondly his insight (Ibid: 310). Lincoln realized at the beginning of the war, that the Union must be saved. He determined that America, as the only important democracy in the world at that time, could not be proved a failure in the eyes of the world and, as such, it must not be destroyed (Ibid). If the Union had been lost, the United States would have become two nations, neither of which would have attained the prosperity and importance that it has today (Ibid). It could be said that Lincoln influenced the course of world history through his leadership of the North during the Civil War (Ibid).
When we understand the role of the USA in Bible prophecy, there can be no question about God's providential leading through the life of this great man of history. The dawn of truth came to Lincoln as he himself realized that the God of heaven was not at the nation's beck and call, but the nation was at His (Noll, 1992: 12). He also believed that it was because of the issue of slavery that both North and South had brought this terrible war upon themselves (Ibid). This concept is also supported by author Ellen White in Testimonies Vol 1, page 254, where she was shown in vision that the accursed system of slavery lay at the very foundation of the nation's ills.
Lincoln's beliefs eventually led to the Emancipation Proclamation of 1862, proclaiming the freedom of all those slaves who were found in the states that were in rebellion.
Three years later, on the evening of April 14, 1865, Abraham Lincoln was assassinated at Ford's Theatre by an out-of-work actor by the name of John Wilkes Booth. A racist and Southern sympathiser, Booth was believed to be mentally unbalanced and hated everything the President stood for. At this time, Lincoln was only 56 years of age. Buried in Springfield, Illinois, Lincoln is remembered today by a beautiful monument in Washington DC commemorating his vital role in preserving the Union and beginning the process that led to the end of slavery in the United States.
God always has someone in reserve to fulfil His purposes and His hand of providence has always been at work. Like Esther, Joseph and David of old, God is preparing men and women to take their place in His great plan and He will have them ready at a time only known to Him.
We need to remember that the trying experiences that have come to God's people in the past were not to be peculiar to that age alone. Today, their enemies still see them as a Mordecai at the Gate who refuses to bow and give allegiance to them. On this battlefield will be fought the last great conflict in the controversy between truth and error. God's hand of providence will use those remaining faithful to Him to vindicate His truth and His people. There is always one thing we can do - do what is right and leave the rest to God.
Who knows whether you have come to the kingdom for such a time as this?
Alexander, P. & D. (Eds) (1999) The New Lion Handbook to the Bible. Oxford, England: Lion Publishing House.
Church, L. F. (Ed) (1971) Matthew Henry's Commentary. Grand Rapids, Michigan: Zondervan Publishing House.
Lockyer, H. Sr. (Ed) (1986) Nelson's Illustrated Bible Dictionary. Nashville, Tennessee: Thomas Nelson Publishers.
Mears, H. C. (1983) What the Bible is all about. Ventura, California: Regal Books.
Nault, W. H. (Ed) (1988) World Book Encyclopedia. Chicago, Illinois: World Book Inc.
Nichol, F. D. (1954) The Seventh-day Adventist Bible Commentary Vol 3. Washington DC: Review and Herald Publishing Association.
Noll, M. A. (1992) 'The Puzzling Faith of Abraham Lincoln', in Miller, K. A. (Ed), Christian History, Issue 33 (Vol. XI, No. 1).
Copyright © 2018 Thornleigh Seventh-day Adventist Church
- Research article
- Open Access
Is there a threshold level of maternal education sufficient to reduce child undernutrition? Evidence from Malawi, Tanzania and Zimbabwe
BMC Pediatrics volume 15, Article number: 96 (2015)
Maternal education is strongly associated with young child nutrition outcomes. However, the threshold level of maternal education that reduces undernutrition in children is not well established. This paper investigates the threshold level of maternal education that influences child nutrition outcomes using Demographic and Health Survey data from Malawi (2010), Tanzania (2009–10) and Zimbabwe (2005–06).
The total number of children (weighted sample) was 4,563 in Malawi, 4,821 in Tanzania and 3,473 in Zimbabwe. Using three measures of child nutritional status (stunting, wasting and underweight), we employ survey logistic regression to analyse the influence of various levels of maternal education on child nutrition outcomes.
In Malawi, 45 % of the children were stunted, compared with 42 % in Tanzania and 33 % in Zimbabwe. Twelve per cent of children were underweight in Malawi and Zimbabwe, and 16 % in Tanzania. The level of wasting was 6 % in Malawi, 5 % in Tanzania and 4 % in Zimbabwe. Stunting was significantly (p values < 0.0001) associated with mother's educational level in all three countries. Higher levels of maternal education reduced the odds of child stunting, underweight and wasting in the three countries. The maternal education threshold for stunting is more than ten years of schooling; wasting and underweight have lower threshold levels.
These results imply that the free primary education in the three African countries may not be sufficient and policies to keep girls in school beyond primary school hold more promise of addressing child undernutrition.
Child undernutrition is a persistent health challenge worldwide, and especially in developing countries, where one child in three is stunted. Undernutrition accounts for 35 % of annual deaths among children less than 5 years of age [1, 2]. Children who survive are more vulnerable to infection, do not reach their full height potential and experience impaired cognitive development, among other complications. Without intervention, undernutrition can continue throughout the life cycle.
The importance of a mother's education for child health and nutrition has been well demonstrated in a number of studies [4–8]. Mother's education is associated with better child health and nutritional outcomes through improving the socioeconomic status of mothers. In turn, higher socioeconomic status operates on a set of proximate determinants of health that directly influence the health and nutritional outcomes of children. The proximate determinants include fertility factors, feeding practices and the utilization of health services. It is argued that maternal education improves the mother's knowledge about child health, including the causes, prevention and treatment of diseases.
This paper assesses the relationship between maternal education and child undernutrition in Malawi, Tanzania and Zimbabwe. The three countries share a number of common characteristics in terms of geographical location and socioeconomic factors. However, while Malawi and Zimbabwe have educational systems that were modeled under the British educational system, Tanzania’s educational system uses Kiswahili language as the medium of instruction throughout primary school. The three countries therefore provide useful case studies on the relationship between maternal education and child nutritional outcomes.
Since the 1980s, there has been a drive to promote free primary education in Africa. For example, free primary education was introduced in Zimbabwe in 1980, in Malawi in 1994 and in Tanzania in 2002 to promote literacy. One of the implicit assumptions in the promotion of free primary education is that improved literacy would lead to improved health-seeking behaviour and improved nutrition for the population. In particular, it has been argued that literate women are more likely to be aware of the importance of immunizing children against diseases, feeding the child at the appropriate time and in the right quantities, and taking early action against diarrhea and other infant diseases. Studies in various settings have shown an association between child nutrition outcomes and maternal education [9, 11, 13]. Literature on modeling the association of maternal education with child nutrition outcomes has mainly focused on four non-mutually-exclusive pathways: socio-economic status; women's empowerment and autonomy; health knowledge and attitudes; and health and reproductive behaviour. Studies that use maternal education as a proxy for socio-economic status, both at the individual and household levels, argue that more educated women tend to have better work opportunities and are more likely to marry more educated husbands. More educated women also tend to live in urban areas, where they have access to better health and sanitation services. Despite the available evidence on the influence of maternal education on child nutrition outcomes, the threshold of maternal education required to produce a positive outcome on child nutritional status is not clear. This raises the question of whether the introduction of free primary education to increase literacy levels in sub-Saharan Africa has any influence on improved child nutrition.
It is against this background that we analyzed nationally representative data from three sub-Saharan African countries to explore the minimum level of maternal education that is sufficient to promote child nutrition.
We analyzed data from the Demographic and Health Surveys (DHS) collected in three southern African countries: Malawi (2010 dataset), Tanzania (2009–10 dataset) and Zimbabwe (2005–06 dataset). The most recent dataset for Zimbabwe was not available at the time this analysis was done. The three countries were selected conveniently, based on the authors' prior knowledge of the countries. Permission to use the DHS data from the three countries was granted by ICF International through the DHS Junior Faculty Fellowship Programme in which both authors participated.
The DHS are nationally representative surveys of women aged 15 to 49 and their family members. Among other things, the DHS surveys collect data on anthropometric indicators to provide outcome measures of the nutritional status of under-five children and women. The study considered only children aged 0–59 months of interviewed, de facto women, whose weights and heights were measured. There was a weighted sample of 4,563 children in Malawi, 4,821 children in Tanzania and 3,473 children in Zimbabwe. The response rates for women aged 15–49 years were 96.9 % in Malawi, 96 % in Tanzania and 90.2 % in Zimbabwe.
The dependent variable in this study was child nutritional status, measured as stunting, wasting and underweight, defined as height/length-for-age, weight-for-height and weight-for-age z-scores below −2 standard deviations (−2 SD) from the median of the World Health Organization (WHO) reference population. Anthropometric measurements in the DHS are taken with children wearing light clothing and without shoes, using bathroom-type scales for weight and a length board or mat for height (or recumbent length for children aged <24 months).
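The −2 SD rule above can be expressed as a small classifier. This is an illustrative sketch, not the authors' Stata code; the function and argument names are invented.

```python
# Hedged sketch: classify a child's nutritional status from WHO z-scores.
# The -2 SD cut-offs follow the definitions in the text.

def classify(haz, whz, waz, cutoff=-2.0):
    """Return indicator flags from height-for-age (haz),
    weight-for-height (whz) and weight-for-age (waz) z-scores."""
    return {
        "stunted": haz < cutoff,      # chronic undernutrition
        "wasted": whz < cutoff,       # acute undernutrition
        "underweight": waz < cutoff,  # composite indicator
    }

# Example: a child more than 2 SD below the reference median height-for-age
print(classify(haz=-2.3, whz=-1.1, waz=-2.5))
# {'stunted': True, 'wasted': False, 'underweight': True}
```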
The background variables considered for this study were residence (rural or urban), sex of the household head, household wealth index, source of drinking water and access to sanitation facilities. The wealth index is a socio-economic index used as an indicator of household wealth, based on ownership of assets and consumer goods such as source of drinking water, type of toilet facility, type of fuel used for cooking and ownership of various goods, among other household characteristics. A factor score generated through principal components analysis is allocated to each asset, and the resulting asset scores are standardized in relation to a normal distribution. Households were grouped into five categories based on the wealth index: poorest, poorer, middle, richer and richest.
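A minimal sketch of this PCA-based index follows, using invented binary asset data rather than the actual DHS asset battery (which also includes categorical and continuous indicators):

```python
import numpy as np

# Hedged sketch of a DHS-style wealth index: first principal component of
# household asset indicators, cut into quintiles. The data are invented
# for illustration.
rng = np.random.default_rng(0)
assets = rng.integers(0, 2, size=(200, 6)).astype(float)  # 200 households, 6 binary assets

X = (assets - assets.mean(0)) / assets.std(0)        # standardize each asset column
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = eigvecs[:, np.argmax(eigvals)]                 # loadings of the first component
score = X @ pc1                                      # household wealth score
quintile = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))
labels = np.array(["poorest", "poorer", "middle", "richer", "richest"])
print(labels[quintile][:5])
```

DHS additionally applies sampling weights when forming the quintile cut-points, which this sketch omits.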
Independent variables included in the analysis were child demographics (age, sex), child birth order, whether the child had diarrhea in the two weeks prior to the survey and whether the child was a multiple birth. Maternal characteristics included current age, education, and the number of children under the age of five the mother had. The choice of the background and explanatory variables was informed by the conceptual framework linking maternal education and child nutrition proposed by UNICEF (1998). The framework stipulates that a possible pathway through which maternal education can influence the nutritional outcomes of children is skills acquisition that leads to improved knowledge about health care and nutrition.
Maternal education was a key factor of consideration and was categorized based on the total number of years of schooling of the mother. Five categories were used in the analysis, namely: no schooling; junior primary (1–4 years of schooling); senior primary (5–7 years for Tanzania and Zimbabwe; 5–8 years for Malawi); junior secondary (8–10 years for Tanzania and Zimbabwe; 9–10 years for Malawi); and senior secondary and above (>10 years).
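The banding scheme can be captured in a small helper. The country-specific cut-off (an 8-year senior-primary band in Malawi versus 7 years in Tanzania and Zimbabwe) is taken from the categories above; the function itself is illustrative:

```python
# Hedged sketch of the paper's schooling bands, derived from the five
# categories listed in the text.

def education_band(years, country):
    senior_primary_end = 8 if country == "Malawi" else 7
    if years == 0:
        return "no schooling"
    if years <= 4:
        return "junior primary"
    if years <= senior_primary_end:
        return "senior primary"
    if years <= 10:
        return "junior secondary"
    return "senior secondary and above"

print(education_band(8, "Malawi"))    # senior primary
print(education_band(8, "Tanzania"))  # junior secondary
```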
Data were analyzed using Stata software version 13.0 at the descriptive, bivariate and multivariate levels. Pearson's χ2 test was applied to test for association between the child nutritional outcomes (stunted, wasted or underweight) and the independent variables.
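As an illustration of the bivariate step, a Pearson chi-square test on an invented 2×2 table of stunting by maternal education (note that the paper's DHS analyses account for the complex survey design, which a plain chi-square test does not):

```python
from scipy.stats import chi2_contingency

# Hedged sketch: chi-square test of association between stunting and an
# education band. The counts are invented for illustration.
table = [[310, 290],   # stunted: no schooling, secondary+
         [290, 510]]   # not stunted
chi2, p, dof, expected = chi2_contingency(table)
print(p < 0.05)  # True
```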
Binary logistic regression models were used to assess the relationship between the three measures of child nutritional status and maternal education, as well as other background and child variables. Using survey logistic regression analysis, the threshold was determined as the lowest level of maternal education at which a statistically significant association with reduced undernutrition was observed; below this level the results are not statistically significant. Data from each of the three countries were analyzed separately.
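The threshold rule described here (the lowest education band at which the adjusted odds ratio falls below 1 and is statistically significant) can be sketched as follows; the odds ratios and p-values are invented for illustration, not the paper's estimates:

```python
# Hedged sketch of the threshold-finding rule applied to regression output.

BANDS = ["junior primary", "senior primary", "junior secondary",
         "senior secondary and above"]

def education_threshold(results, alpha=0.05):
    """results maps band -> (odds_ratio, p_value), reference = no schooling.
    Returns the lowest band with a significant protective association."""
    for band in BANDS:
        odds_ratio, p = results[band]
        if odds_ratio < 1 and p < alpha:
            return band
    return None

stunting = {  # invented illustration
    "junior primary": (0.95, 0.60),
    "senior primary": (0.88, 0.21),
    "junior secondary": (0.80, 0.09),
    "senior secondary and above": (0.55, 0.001),
}
print(education_threshold(stunting))  # senior secondary and above
```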
In Malawi, 45 % of the children were stunted, while in Tanzania 42 % of the children were stunted, and in Zimbabwe 33 % were stunted. The level of wasting was 6 % in Malawi, 5 % in Tanzania, and 4 % in Zimbabwe (Table 1). The proportion of underweight children was highest in Tanzania (16 %), 12 % in Malawi and 12 % in Zimbabwe.
The average age of the sampled children in all the three countries was 29 months. The average age of mothers was 28 years, living in households with average household size of 6. The proportion of female household heads was highest for Zimbabwe (33 %) and lowest in Malawi (8 %). In all the three countries, the sample was predominantly rural, with 85 % of children in Malawi, and 74 % of the children in Zimbabwe living in the rural areas. Access to safe water was highest in Malawi (78 %) and lowest in Tanzania (53 %) while the proportion of households with improved toilet facilities was highest in Zimbabwe (54 %) and lowest in Tanzania (14 %) (Table 1).
The distribution of maternal education was varied in the three countries. There was a higher percentage of women with no education in Tanzania (26 %), followed by Malawi (18 %), and 4 % in Zimbabwe. In Zimbabwe, 53 % of the women had junior secondary school education and above. The proportion of women with similar educational attainment in Malawi was 15 % and in Tanzania 7 % (Table 1).
Stunting was significantly (p values < 0.0001) associated with mother’s educational level in all the three countries. Half of the children whose mothers had no education were stunted in Malawi, 45 % in Tanzania and 34 % in Zimbabwe. Child stunting was lowest among children whose mothers had senior secondary education and above in the three countries (Table 2).
Similar to stunting levels, child wasting levels reduced with increasing maternal education in the three countries. Child wasting was statistically significantly associated with maternal education in Tanzania and Zimbabwe (p value = 0.02 and 0.07, respectively).
The prevalence of underweight among under-five children in Malawi, Tanzania and Zimbabwe was also significantly and negatively associated with the mother's educational attainment (p values < 0.05).
Logistic regression results
The likelihood ratio chi-square test and pseudo R-square tests were used to test the goodness of fit of the models. As Tables 3, 4 and 5 show, the LR chi-square test is significant at one percent across all the models, implying that the models fit the data well. While the reference category for maternal education in the multivariate analyses was "no education" in Malawi and Tanzania, it was changed to "senior primary" in Zimbabwe because there were few women (only 4 percent) with no education in Zimbabwe, so using "no education" as the reference category would not have been suitable. The multivariate results, presented in Table 3, show that while maternal education is inversely related to child stunting, the results are only significant at high levels of education (secondary education and above) in all three countries. The odds of stunting were reduced in Malawi and Tanzania at the highest level of maternal education compared to no education. In Zimbabwe, the odds of child stunting reduced with increasing levels of maternal education compared to senior primary level, with significant results obtained only at the highest category of maternal education. Other variables with a significant relationship with child stunting were household wealth and child age (Table 3). The interaction between maternal education and other variables such as wealth was tested in our model but did not yield any significant result and was hence dropped from the final model.
Similar to the results on child stunting, maternal education was inversely related to wasting, but the results are significant only at high levels of education (Table 4). In particular, while higher levels of maternal education are associated with reduced wasting compared to no education, the results are only significant at senior secondary level and above in Malawi, at senior primary level and above in Tanzania, and at junior secondary level and above in Zimbabwe. The interaction between maternal education and wealth did not yield a statistically significant result and was dropped from the final model.
Consistent with results from the previous two models, the odds of child underweight were reduced at higher levels of maternal education compared to no education in Malawi and Tanzania, but the results were significant at junior secondary level and above in Malawi and at senior secondary level and above in Tanzania (Table 5). In Zimbabwe, the odds of being underweight reduced with increased levels of maternal education compared to the senior primary level. It is important to note that, as was the case with stunting and wasting, there was no significant interaction between maternal education and wealth.
Maternal education threshold level
The multivariate results for the three measures of child nutritional status show that while the education of the mother is an important determinant of the nutritional status of children, the relationship is statistically significant only at high levels of education. Table 6 summarizes the associations of the different levels of mother's education with the three anthropometric indicators and depicts the estimated threshold levels of maternal education for the three measures of child nutritional status. For stunting, the influence of increasing maternal education is only seen at senior secondary level and above in all three countries. In the case of wasting, maternal education shows a significant impact at senior primary level and above in Tanzania, at junior secondary level and above in Zimbabwe, and at senior secondary level and above in Malawi. These results imply that lower levels of maternal education do not have any significant association with reduced odds of child wasting. Similarly, maternal education is significantly associated with reduced odds of being underweight at junior secondary level and above in Malawi and Tanzania, and at the highest category of maternal education in Zimbabwe (Table 6). The multivariate results show that the threshold level of maternal education necessary to make a significant reduction in child malnutrition is at least junior secondary school level (at least 9 years of schooling) in Malawi, senior primary school level (at least 5 years of schooling) in Tanzania, and junior secondary school level (at least 8 years of schooling) in Zimbabwe (Table 6). These minimum threshold levels are derived from the wasting and underweight models, which showed a significant relationship between maternal education and child nutrition at relatively low levels of maternal education.
If we consider stunting as the most commonly used measure of child nutritional status, the minimum threshold level rises to senior secondary level and above (at least 11 years of schooling) in all three countries. Below these threshold levels, maternal education has no significant positive influence on child stunting.
This paper investigates whether there is a threshold level of maternal education necessary to reduce child undernutrition. The study shows that maternal education is important in addressing child undernutrition in the three countries. Bivariate results show a negative and significant association between maternal education and the three measures of child nutritional status in all three countries. Multivariate analysis also shows that higher levels of maternal education are required for a positive influence on child nutritional status to accrue. Using stunting as the most widely accepted measure of child growth faltering, the minimum threshold level of maternal education necessary to reduce stunting is senior secondary and above (i.e. more than 10 years of schooling) in all three countries. Contrary to the findings of Hobcraft and Mensch, our findings point to the fact that there is a threshold level of maternal education below which the education of the mother does not have any significant influence on child nutrition.
These results are consistent with the literature on the association between maternal education and child health and nutrition [19–22]. At relatively high levels of maternal education, mothers tend to have acquired the necessary health knowledge and are more able to practice recommended feeding practices for their children [11, 23]. Furthermore, relatively educated women tend to have fewer children and are able to provide better care and support to their children, all of which positively impact children's nutritional outcomes. Cleland argued that maternal education has a strong influence on early childhood health and survival outcomes mainly through the economic advantages associated with education. In the countries covered by this study, access to health information, as well as to health services, is limited. Mothers who are educated beyond junior secondary school level are therefore more likely to have a higher diagnostic ability regarding child growth performance and are able to take corrective action to address any case of child undernutrition.
In the context of southern and eastern Africa, women who have studied beyond primary education tend to have increased command over household resources, enabling them to make significant contributions towards the promotion of their children's nutrition and health status. Such women are more able to participate in income-generating activities that improve household incomes and their ability to provide better nutrition for their children. In the rural areas of countries like Malawi, Tanzania and Zimbabwe, informal group lending and village savings and loan groups are common among women, and most of these are patronized by relatively educated women. Through these groupings, women are able to raise income to promote household food security and nutrition.
This analysis contributes to a better understanding of the specific threshold of maternal education that has a positive outcome on childhood undernutrition, using nationally representative data from three sub-Saharan African countries. Based on the findings of this study, the following are implications for policy. In all three countries, if maternal education is to play a significant role in reducing child malnutrition, women need to be educated beyond the primary school level. While there is free primary school education in all three countries, introduced in 1994 in Malawi, 2002 in Tanzania, and 1980 in Zimbabwe, this alone may not be sufficient to make a positive contribution towards reducing child undernutrition. The threshold level of maternal education in all three countries is beyond the primary school level. The level of maternal education is known to be an important predictor of child stunting even in informal settlements. Besides the link between maternal education and the socio-economic pathways that influence child nutrition, access to health care and access to money are other predictors of child nutrition. Policies to ensure that girls remain in school beyond the primary school level therefore hold more promise in addressing child nutritional problems in the three countries. According to the Malawi Ministry of Education, Science and Technology, girls’ secondary school enrolment in 2011 was only 5.6% compared to 6.9% for boys, signifying the large number of female primary school leavers who do not make it into secondary school in Malawi. Therefore, policies to improve the enrolment of girls at secondary school level, especially in Malawi and Tanzania, would improve the maternal education of future mothers and contribute towards promoting child nutrition in future. A limitation of this study is the use of cross-sectional data, which is limited in demonstrating the direction of relationships among variables.
However, the national coverage of the data supports the generalizability of the findings. The study does not investigate women’s autonomy in control over money and household decision making, which may have implications for child health and nutrition outcomes.
In all three countries, maternal education has a significant influence on child stunting when the mother has at least a senior secondary level of education. Since low height-for-age is associated with poor socioeconomic conditions, frequent illnesses and poor feeding practices, women’s education beyond the junior primary school level in the three countries has the potential to reduce the suboptimal health and nutritional conditions of their children. Increased investment in women’s education beyond the primary school level is a promising intervention in countries like Malawi and Tanzania, which have a high burden of child undernutrition. We recommend a further analysis of the data using a stepped regression model to demonstrate how the inclusion of other important variables would affect the impact of maternal education on child nutrition at various levels in each country.
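The stepped (hierarchical) regression recommended above adds covariate blocks one at a time and compares the maternal-education coefficient across models. The sketch below illustrates the idea only: it uses synthetic data, a hand-rolled Newton-Raphson logistic fit, and illustrative variable names (`schooling`, `wealth`, `stunted`), none of which come from the DHS datasets used in the paper.

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson; returns [intercept, coefs...]."""
    X = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted probabilities
        grad = X.T @ (y - p)                       # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)        # Newton step
    return beta

# Synthetic data: stunting risk falls with schooling and wealth (assumed process).
rng = np.random.default_rng(0)
n = 5000
schooling = rng.integers(0, 14, n).astype(float)   # years of maternal schooling
wealth = rng.normal(0.0, 1.0, n)                   # household wealth index
true_logit = 0.5 - 0.15 * schooling - 0.4 * wealth
stunted = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Step 1: maternal education only.
b1 = logit_fit(schooling[:, None], stunted)
# Step 2: add household wealth; compare how the education coefficient moves.
b2 = logit_fit(np.column_stack([schooling, wealth]), stunted)
print(round(b1[1], 3), round(b2[1], 3))
```

With real DHS data the education variable would be categorical (none, primary, junior secondary, senior secondary and above), so the comparison would be made on dummy-coded levels rather than a single slope.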
The DHS protocol is approved by the ICF Macro ethical review board and country-specific review boards. Approval for data use was provided by ICF Macro. Informed consent was obtained from each of the respondents interviewed in the DHS surveys.
Arifeen SE, Black RE, Caulfield LE, Antelman G, Baqui AH. Determinants of infant growth in the slums of Dhaka: size and maturity at birth, breastfeeding and morbidity. Eur J Clin Nutr. 2001;55(3):167–78.
WHO/UNICEF/ICCIDD. Assessment of iodine deficiency disorders and monitoring their elimination: A guide for programme managers. Geneva: World Health Organization; 2007.
Blössner M, de Onis M. Malnutrition: Quantifying the health impact at national and local levels. Environmental Burden of Disease Series. Geneva: WHO; 2005.
Christian P, Abbi R, Gujral S, Gopaldas T. The role of maternal literacy and nutrition knowledge in determining children’s nutritional status. Food Nutr Bull. 1988;10(4):35–40.
Ruel MT, Habicht J-P, Pinstrup-Andersen P, Gröhn Y. The mediating effect of maternal nutrition knowledge on the association between maternal schooling and child nutritional status in Lesotho. Am J Epidemiol. 1992;135(8):904–14.
Wachs TD, Creed-Kanashiro H, Cueto S, Jacoby E. Maternal education and intelligence predict offspring diet and nutritional status. J Nutr. 2005;135(9):2179–86.
Boyle MH, Racine Y, Georgiades K, Snelling D, Hong S, Omariba W, et al. The influence of economic development level, household wealth and maternal education on child health in the developing world. Soc Sci Med. 2006;63(8):2242–54.
Abuya BA, Ciera J, Kimani-Murage E. Effect of mother’s education on child’s nutritional status in the slums of Nairobi. BMC Pediatr. 2012;12(1):80.
Kabubo-Mariara J, Ndenge GK, Mwabu DK. Determinants of children’s nutritional status in Kenya: Evidence from demographic and health surveys. J Afr Econ. 2009;18(3):363–87.
World Health Organization. Promoting optimal fetal development: report of a technical consultation. 2006 [cited 2015 Jun 2]; Available from: http://apps.who.int/iris/handle/10665/43409.
Frost MB, Forste R, Haas DW. Maternal education and child nutritional status in Bolivia: finding the links. Soc Sci Med. 2005;60(2):395–407.
Joshi AR. Maternal schooling and child health: preliminary analysis of the intervening mechanisms in rural Nepal. Health Transit Rev. 1994;1–28.
Gwatkin DR, Rutstein S, Johnson K, Suliman E, Wagstaff A, Amouzou A. Socio-economic differences in health, nutrition, and population. Washington, DC: World Bank; 2007.
Emina JB, Kandala N, Inungu J, Ye Y. The effect of maternal education on child nutritional status in the Democratic Republic of Congo. Nairobi, Kenya: African Population and Health Research Center; 2009.
Cleland JG, Van Ginneken JK. Maternal education and child survival in developing countries: the search for pathways of influence. Soc Sci Med. 1988;27(12):1357–68.
UNICEF. The State of the World’s Children 1998. New York: UNICEF; 1998.
Hobcraft JN, McDonald JW, Rutstein SO. Socio-economic factors in infant and child mortality: a cross-national comparison. Popul Stud. 1984;38(2):193–223.
Mensch B, Lentzner H, Preston S. Socioeconomic differentials in child mortality in developing countries. New York: United Nations; 1985.
Desai S, Alva S. Maternal education and child health: Is there a strong causal relationship? Demography. 1998;35(1):71–81.
Engebretsen IM, Tylleskär T, Wamani H, Karamagi C, Tumwine JK. Determinants of infant growth in Eastern Uganda: a community-based cross-sectional study. BMC Public Health. 2008;8(1):418.
Miller JE, Rodgers YV. Mother’s education and children’s nutritional status: New evidence from Cambodia. Asian Dev Rev. 2009;26(1):131–65.
Gewa CA, Yandell N. Undernutrition among Kenyan children: contribution of child, maternal and household factors. Public Health Nutr. 2011;15(10):1029–38.
Gewa CA. Childhood overweight and obesity among Kenyan pre-school children: association with maternal and early child nutritional factors. Public Health Nutr. 2010;13(4):496–503.
Abuya BA, Onsomu EO, Kimani JK, Moore D. Influence of maternal education on child immunization and stunting in Kenya. Matern Child Health J. 2011;15(8):1389–99.
Shroff M, Griffiths P, Adair L, Suchindran C, Bentley M. Maternal autonomy is inversely related to child stunting in Andhra Pradesh, India. Matern Child Nutr. 2009;5(1):64–74.
Malawi Government. Education Sector Performance Report 2010-11. Ministry of Education, Science and Technology, Lilongwe; 2012.
The authors would like to thank ICF international for providing the data used in this analysis. We would like to thank Ms Triza Njoki for reviewing the manuscript and the editorial support.
The authors declare that they have no competing interests.
DM Conceptualized the idea and conducted background check on all the countries data, conducted data analysis, results presentation and writing of the manuscript. PKM: contributed to the methods, interpretation of results, discussions and finalization of the manuscript in readiness for submission. All authors read and approved the final manuscript for submission.
Donald Makoka is a Research Fellow at the Centre for Agricultural Research and Development (CARD) of the Lilongwe University of Agriculture and Natural Resources in Malawi where he conducts development policy research. He holds a Bachelor of Social Science in Economics from the University of Malawi, an M.A. in Economics from the University of Malawi, and a Ph.D. in Economics from the University of Hannover. His area of specialization is development economics. His research interests include poverty and vulnerability, agriculture value chains, child protection, and social protection.
Peninah Kinya Masibo is the Training Coordinator at the African Population and Health Research Center (APHRC) under the Research Capacity Strengthening Division and a faculty at the School of Public Health, Moi University, Kenya. She holds a Ph.D in Nutritional Sciences (Stellenbosch University), Master of Public Health Nutrition (Moi University) and a Bachelors’ of Science in Food Nutrition and Dietetics (Egerton University). She has Public Health research interests in maternal, child and adolescents nutrition, particularly obesity and the dual burden of Nutrition in developing countries combined with interests in developing sustainable models for research capacity strengthening for African Scholars and Graduate students.
Cite this article
Makoka, D., Masibo, P.K. Is there a threshold level of maternal education sufficient to reduce child undernutrition? Evidence from Malawi, Tanzania and Zimbabwe. BMC Pediatr 15, 96 (2015). https://doi.org/10.1186/s12887-015-0406-8
- Maternal education
- Threshold level
- Child undernutrition
- Demographic health survey | <urn:uuid:74a96ed7-31cb-48ed-91c7-370cb0eca547> | CC-MAIN-2021-21 | https://bmcpediatr.biomedcentral.com/articles/10.1186/s12887-015-0406-8 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00416.warc.gz | en | 0.933774 | 6,024 | 3.4375 | 3 |
The Importance of Codes of Ethics
Examination of the Need of Business Ethics and the Efficient Usage of Codes of Ethics for Good Corporate Governance
Master’s Thesis, 2010, 53 pages
Table of Contents
II. Business ethics and codes of ethics
II.1 The philosophy of ethics
II.2 The evolution of business ethics
II.3 History of business ethics
II.4 Business ethics and codes of ethics
II.5 Business ethics and its forms and definitions
II.6 Business ethics and globalisation
III. Examination of the need of business ethics and the efficient usage of codes of ethics
III.1 Hypothesis I: There is a need for business ethics
III.1.1 Evidence in reference to philosophy
III.1.2 Evidence in reference to Adam Smith and modern capitalism
III.1.3 Evidence in reference to the industrialisation
III.1.4 Evidence in reference to the principal-agent-theory and opportunistic behaviour
III.1.5 Evidence in reference to the globalisation and increase of MNEs
III.1.6 Evidence in reference to cultural and legislative influence
III.1.7 Evidence in reference to customer power
III.1.8 Evidence in reference to public media and NGOs
III.1.9 Critique of business ethics
III.1.10 Economical and reputational gain of business ethics
III.1.11 Measurement of impact of business ethics
III.1.12 Conclusion of hypothesis I
III.2 Hypothesis II: The efficient usage of codes of ethics can improve corporate governance
III.2.1 Definition and development of codes of ethics
III.2.2 Various types of codes of ethics
III.2.3 Motives for the implementation of codes of ethics
III.2.4 Implementation and effectiveness of codes of ethics
III.2.5 Added value of codes of ethics
III.2.6 Critique of codes of ethics
III.2.7 Future outlook for codes of ethics
III.2.8 Conclusion for hypothesis II
During my years studying business and economics, I have been fascinated by the fact that nearly everything in our world is influenced by the global economy. Every simple trade transaction or exchange of services nowadays involves many people and affects several countries. The constant advance of globalisation has produced multinational enterprises with a great deal of power and control over large parts of the world’s resources. The decay of human moral understanding and the recent scandals caused by unethical business practices sparked my interest in multicultural and ethical business. The change in the business ethos, and the grey zones that emerged from differences between countries, have supported unethical business behaviour. Ethics and morality, as defined thousands of years ago by the first philosophers, need to be taken seriously again, especially by institutions that influence many people and our environment, as businesses do today. My goal is to illustrate this importance of business ethics and of their main instrument, codes of ethics.
Is there really a need for business ethics? If everybody acted morally, why is everybody talking about ethics in the business context?
Following the thoughts of Aristotle’s virtue ethics and Kant’s categorical imperative, there would be no need for business ethics, since everybody would be trustworthy and would respect society and nature. Recent scandals, on the other hand, have illustrated that ethics and morality are not well established in enterprises whose main goal is profit maximization, and that managers tend to live against the categorical imperative. The debate about the connection between business and ethics started with the birth of modern capitalism and intensified with industrialisation and globalisation. Capitalist thought, the increase in corporations and the individualization of humans created opportunistic behaviour, which is incompatible with the morality of virtues according to Aristotle. Globalization and the impact of a growing number of stakeholders aggravate the state of society’s moral understanding. Through pressure from NGOs and the media, and a change in customers’ attitudes towards corporate responsibilities, awareness of this missing morality arose. Multinational enterprises have to face various dilemmas caused by differences in cultures and national laws. These diversities and gaps at the global level create grey zones, of which corporations can take advantage and of which some already have. To face the challenges of uncontrolled managerial misbehaviour, codes of ethics as moral guiding principles can be very useful, and some managers have started to realize their advantages. Voluntary codes of ethics can be a guideline for managers, employees and even suppliers. They also strengthen the company’s corporate identity and vision, which leads to motivation and cooperation. In this paper, the importance of ethics in business and of codes of ethics will be shown in the context of the changing economic world, along with the codes’ positive impact on Corporate Governance if the ethical guidelines are implemented and lived in the right manner.
The paper starts with a description of ethics and morality in a philosophical context and then passes on to the growing awareness of ethics in business. The history of business ethics gives a first overview of the significant change in the economy and the increasing need for morality in business. This part of the paper also covers the corporate challenges arising from globalisation and their impact on business ethics. The next section defines business ethics and its related concepts, such as codes of ethics, Corporate Social Responsibility, Corporate Governance, Corporate Responsibility, Corporate Citizenship and Corporate Compliance. These fundamentals are the foundation for the two hypotheses.
The next chapter examines the two hypotheses of this thesis. The first hypothesis deals with the need for business ethics, and the second hypothesis seeks to establish the importance of codes of ethics. The hypothesis about the need for business ethics is debated from several different angles. Drawing on philosophy, the need is discussed with the help of Aristotle and Kant. Afterwards, the question of whether ethics is necessary in the economy is explored through the history of modern capitalism and Adam Smith’s theories. The debate about corporate social and ethical responsibilities has existed since the birth of modern capitalism and the changes brought by industrialisation. In this period, the first problems of opportunistic behaviour occurred because of the separation of ownership and control. Companies started to grow and to extend their business relations and practices to a global level. This brought along various multicultural stakeholders and a variety of interests. The differences in cultures and national laws disclosed several moral dilemmas for business people. This part of the exploration of the first hypothesis briefly illustrates a need for ethical consideration in business practices. The next section deals with the influence of consumer power on businesses and customers’ changing attitude towards more responsible enterprises. Several other sources of pressure towards more moral business practices exist. After exploring the critique, the economic gain of ethical behaviour will be illustrated, along with ways of measuring it. The first hypothesis ends with a conclusion of the findings.
The second hypothesis examines the importance of codes of ethics and their influence on Corporate Governance. This part of the thesis begins with a definition of codes and traces the path of their development. The differences between codes of ethics will be explained and some examples named. Furthermore, the correct implementation of codes will be illustrated in detail, as that is the prerequisite for a code to be effective. Proper implementation and the effectiveness of codes of ethics are then taken as the basis for assessing the influence of codes and their economic gain. After naming some criticisms and giving a future outlook, the second hypothesis will be concluded.
At the end of the paper, in the fourth chapter, the findings will be brought together and the importance of ethics in business and of codes of ethics will be illustrated.
The thesis is based on secondary literature published between the 1970s and 2010.
II. Business ethics and codes of ethics
II.1 The philosophy of ethics
Ethics is a philosophical phenomenon which humans have examined for several thousand years. Ethics, as a part of philosophy, deals mainly with morality. Behind the term ethics lie the central questions about right and wrong behaviour in a society and the moral understanding of human beings. The field of ethics comprises mainly three parts: applied ethics, normative ethics and metaethics. Whereas metaethics deals with the theoretical approach to morality, normative ethics addresses more practical issues in order to create moral standards in different societies. Applied ethics involves controversial situations and the moral way of acting in them. This part of ethics is the science of giving a moral guideline for the ‘right’ behaviour in troubling situations in society and also in the business world. Accordingly, business ethics is a part of applied ethics. The study of morality is the main part of ethics and provides the basis of human behaviour rooted in character. A person’s morality is based on rules which help that person to live and react in society. The value system strengthens ethical behaviour and leads to principles that guide the specific manner in which people act and orient them in controversial cases. Human beings have the ability to think objectively about situations and their role in them, to choose rationally between given options and to commit to their behaviour. Most of these actions are based on intuition and impartiality, which come from basic moral beliefs and the inclusion of other persons, given that humans are social and have an innate need to live in groups. Part of these moral fundamentals is to respect the lives of the persons around oneself and to treat everybody in the manner in which one wants to be treated oneself.
Ethics, as a science, covers the main questions about how to live life in a good manner. The human sense for ethics is based on a moral understanding of the things happening in the world. The basis of this understanding is built on experiences, traditions and parental teaching, and it forms people’s beliefs. The freedom of choice, which is given to every living creature on this planet, can lead to problematic situations that may overwhelm those involved. This freedom of choice can be tempered by the moral understanding of humans, which gives a guideline for proper behaviour.
The ancient Greeks already argued that people who live in an amoral manner and are merely self-absorbed are living against the essence of what human beings represent and are hurting themselves deep in their souls. Plato and Socrates argued in a similar manner. Although morality is subjective, it is a very important part of human existence. As is visible in history, people debate and protest, and even used and still use violence, to stick to their moral beliefs. Over the last thousands of years, different patterns and theories of ethics have evolved. According to the consequentialist view, the morally best action is the one that brings the best consequences to all who are influenced by it. Immanuel Kant, one of the most influential ethical theorists, explained morality and ethics through his theory of the categorical imperative. His theory is regarded as the basis of rational ethics and points out that all humans should arrange their own behaviour according to the welfare of others. Kant thought of moral behaviour as a set of maxims which every single person creates according to his or her moral rationality. The categorical imperative stresses that everybody should act so that their personal maxim could become a universal law. Kant’s theory is categorical because it is absolutely tied to human moral behaviour. The imperative expresses the binding demand of how one must act. Kant always stressed respect for all humans and the existence of humans’ moral dignity. In his view, people make the right decisions for the right reasons. In the corporate context, this idea has sometimes been confused, and managers have tended to do the right thing merely for profit or image reasons. This perspective is not moral; it is merely prudential. A moral action has to have a good will according to the Kantian view. Another ethical foundation is utilitarianism.
This approach claims that a good human condition is one which brings the greatest happiness to everything that can be happy, and an action can be seen as morally right when the good it produces outweighs the bad. The principle of utility commits to maximizing the net good outcome. Placed in a business perspective, utilitarianism yields the principle of maximizing productivity through efficiency, because efficiency is the source of maximization. And the good and efficient organisation of a corporation leads to higher profits, which is an economic goal.
II.2 The evolution of business ethics
One of the oldest connections between ethics and economics was drawn by Aristotle, a Greek philosopher, in the fourth century BC. Aristotle pointed out that a human life is good if it is lived according to human nature. He meant that humans often make their lives worse because they mainly think about short-term pleasure and are not aware of the long-term effects of their decisions. Aristotle was against the economic drive because he believed that everything business behaviour represents is against the nature of human beings and ‘bad’ for the character. Business practices spoil the inner nature of people and show a lack of self-control. According to Aristotle, ‘moral virtue’ gives humans the habit of making the morally right decisions. The primary purpose of morality is the development of a virtuous character. People doing business tend to lack virtues and also moral instinct. Based on Aristotle, the neo-Aristotelian perspective defines ethics as a structure of guiding principles. In line with these thoughts, business ethics is an effort to create practical rules for moral business behaviour. By specifying ethical norms and rules, concrete guidelines were created. In the times of Aristotle, these guiding parameters were based on the trust, prudence and honesty of the moral businessman. Nowadays, such guidelines appear under the term ‘code of ethics’.
II.3 History of business ethics
The influence, reach and importance of business ethics can best be shown by tracing the historical development of ethics and morality in the business context. Ethics has been a part of business far longer than many business people may believe. In 1907, the first moral guidelines for business behaviour were developed, and even managers of big corporations started to claim: ‘The greater the corporation, the greater its responsibility.’ In the 1930s, Berle and Dodd discussed the responsibilities of multinational enterprises (MNEs); their opinion was that big corporations are very powerful and will misuse this power if they are not managed in the interest of the public. In the 1960s, due to environmental decline and the Vietnam War, there was a change in consumers’ position. Customers started to realize that the superior position of corporations needed to be used for positive change and that MNEs needed to be controlled more tightly. Especially due to globalisation, corporations started to operate all over the world and became harder to control through governmental legislation. On the flip side, claims grew loud that the only responsibility of corporations is to maximize shareholder value and that the only social act in business is to increase profits. The most important adherent of this viewpoint was Milton Friedman. In the 1980s, a series of takeovers and acquisitions started because of the overall increase in Foreign Direct Investment (FDI) at that time. Most of these took place without any social consideration or moral behaviour. The most significant change facing business ethics took place at the end of the 1990s and the beginning of the twenty-first century, with the end of the Cold War. New technologies emerged, which led to higher productivity worldwide, but also to improvements in communication technologies and therefore better informed consumers.
Another major driver of the increasing involvement of ethics in business topics was the signing of the General Agreement on Tariffs and Trade (GATT) after World War II and the development of other important trade zones. Capitalism started to grow and became a worldwide trend. In the second half of the twentieth century, the number of multinational and global corporations increased. This brought change, because the activities of such big firms, which operate in different countries, exceeded regular economic actions. John D. Rockefeller also noted that the factors importantly connected to industry and corporations are capital, management, labour and the community. As one of the first, he realized that the social responsibility of corporations towards the community is just as important as profit making. The ‘evolution of expectations’ showed that the social responsibility of MNEs is a moving target. The change in expectations of what business is or should be about did not just make consumers and non-governmental organisations sit up, but also the managers. In the course of the 1970s, more and more managers started to make public commitments concerning the social impact their companies have on the community and the ways to show more responsibility towards stakeholders. During this period, the importance of ethics in business and of codes of ethics became more visible than ever. Due to the changes triggered by globalisation and the scandals about unethical behaviour of multinational enterprises in the United States, international business ethics became more and more important.
II.4 Business ethics and codes of ethics
The above-mentioned public pressure is one reason, albeit a voluntary one, for the invention of codes of ethics. In the United States, due to the scandals of the last decade, public corporations have been legislatively required to disclose a code of ethics. This development took place in 2002 with the Sarbanes-Oxley Act. In the same year, the Nasdaq stock market and the New York Stock Exchange obliged all listed companies to disclose codes of ethics. Codes of corporate conduct have existed for nearly a century, but received their main attention in the 1960s and 1970s. At that time, corporations, international organisations and national governments started to invent and adopt codes of conduct. The most considerable codes of the 1970s were invented by the OECD (Organisation for Economic Co-operation and Development) in 1976 and the ILO (International Labour Office) in 1977. The codes of these two organisations are the pioneers of ethical code development. They represent the basis for most of the individual corporate codes, whose development exploded in the 1980s. The first formal code of ethics ever was authored in 1937 by the International Chamber of Commerce (ICC). These standards were invented to eliminate competition between the ICC members and to avoid damage to the environment and society in the member countries. According to the ILO, analysts distinguish four generations of issues in codes. The content of the first generation deals with conflicts of interest and is focused more on the interests of the corporation. The second, third and fourth generations concern the public interest and are based on commercial conduct, the rights of employees, and the rights of humans and the community. Corporate codes of ethics belong respectively to the first generation.
Modern codes are mainly based on these pioneers and vary according to the different industries and the nature of each corporation.
II.5 Business ethics and its forms and definitions
Codes are part of the last century’s movement towards business ethics. Based on this development, terms such as Corporate Social Responsibility (CSR), Corporate Governance, Corporate Responsibility (CR), Corporate Citizenship (CC) and Corporate Compliance were created. They all deal with ethics and morality in business. This development is mainly based on a new understanding of business relationships towards employees, the community and the environment. In the last decades, some companies started to think about the interests of their stakeholders and tried to increase overall economic welfare. Several managers also started to promote their support of social issues and their role as good corporate citizens.
Business ethics, as the generic term, has no single clear definition; rather the opposite is the case. The existence of too many different definitions of business ethics can cause confusion. In business terms, business ethics is a mixture of moral principles connected to daily business behaviour and its impact on the stakeholders and the community involved. According to another author, business ethics combines values, Corporate Governance and codes of ethics. Business ethics focuses more on morality, whereas Corporate Social Responsibility (CSR) spotlights the social, sustainability and environmental issues of business.
The rising tendency towards Corporate Social Responsibility mirrors the change in modern business. The reality of daily business shows that business and society are interwoven and that the expectations on both sides have changed. Corporations realize the impact of society and change their attitude towards it. Society, on the other hand, nowadays expects more from a company than making profits. The pyramid of CSR reflects this point of view. The base of the pyramid, with its economic and legal responsibilities, meets the basic requirements of a corporation. The top of the pyramid, which includes the ethical and philanthropic responsibilities, is coming more into focus at the present time. Especially the crisis caused by corporate scandals underlines the significance of CSR, along with Corporate Governance.
The social responsibility of corporations and the Corporate Governance have some parts in common. But more interestingly is their main significant difference. CSR is voluntary and based on ethical rules, whereas Corporate Governance is mainly a binding law. In the United States of America is the Corporate Governance statue and policy based since the introduction of the Sarbanes-Oxley-Act (SOA) in 2002. In Europe, the governance of corporations is to some parts still a voluntary soft law. Corporate Governance is defined by the OECD as a system to control corporations and the accountability enterprises have towards all their stakeholders. Based on that context and the increase in complexity in the globalized world, it would make sense to ensure Corporate Governance as a binding law on a global scale. One example for the contrary of the SOA is the German Code of Corporate Governance (GCCG), which was invented in 2000 and represents a guideline for the management and control of German corporations.
As mentioned above, there are several different terms combined under the generic term business ethics. Next to CSR and Corporate Governance, there are also terms which define different issues of ethics in business. Corporate Responsibility (CR) for example is a part of Corporate Social Responsibility. This branch deals mainly with the interests of the stakeholders. The issue of profit maximization as the only corporate goal and the responsibilities towards the community and the environment are of main importance in this field. Corporate Responsibility also comprises parts of Corporate Governance and Corporate Citizenship.
Corporate Citizenship spells out the way, in which a company means to act in a community as a good citizen. Corporations commit to behave as one of the neighbourhood and not just to offer workplaces, but also to give support within the area and its environmental system.
Corporate Compliance gives a guideline of regulations and rules for employees and managers. The invention of Corporate Compliance and in some companies even a compliance manager, was mainly done to defend the enterprise against violating behaviour of managers and employees and for this reason arising expensive lawsuits. The innovation of Corporate Compliance started in the 1980s in relation to the first big corporation scandals.
II.6 Business ethics and globalisation
Business ethics on an international level faces way more challenges as bringing ethical behaviour in country-specific business transactions. Multinational enterprises have to face the differences in cultures and due to that the different traditions and also various religions, which all influence the moral understanding and per se the behaviour of the people. These differences can lead to controversial opinions of ethically ‘right’ behaviour.
The increase in advertence in business ethics was mainly dependent on the challenges the globalisation brought along. In the last century, barriers of trade were removed and the worldwide production, flow of trade and capital increased and more and more strategically alliances were adopted. Due to this development, a maceration of national borders occurred and the denationalisation started to grow. The invention of the North Atlantic Treaty Organisation (NATO) and the European Union (EU) are good examples for that shift of geographical and also political borders. That led among other things to legislative problems. It was generated a deferral of the competency and differences in the laws around the globe. That fact gave the multinational enterprises the opportunity to use the outcome of this, namely the grey zones for amoral profit maximisation. It is not everything illegal worldwide, what is seen as unethical. These facts brought up critique concerning globalisation. Reviewers argue that the principles of globalisation with its protection for free trade and an open marketplace are not appropriate for less developed countries and leaves too much room for corporations to profit from the non-industrial countries. Although these critiques, Peter Senge is talking about a revolution, which happens right now caused by the globalisation. He mentions that an interrelation exists due to the disappearance of the national boarders and that it is dependent on business and non-business organisations to realize that the whole world is interconnected. Humans tend to think in a moral manner, but also tend to act not that way as are corporations and their managers. The global businesses have nowadays the chance to change something and close the grey zones by acting ethical. Non-governmental organisations (NGOs) have an increasing awareness of this ‘greenwashing’ of corporations’ image and started to make such behaviour public. 
Even though, multinational enterprises invented social responsibility programs, the amoral behaviour for seeking profit did not stop eventually. Nowadays, corporations are mainly forced to behave in the right manner due to the invention and interconnection of the globalisation. New technologies in the communication area give the consumer the opportunity to inform themselves about companies and their habits before purchasing. Also non-governmental organisations invented websites, which offers facts and news about the real social behaviour of MNEs.
Richard. 1989. There is ethics in business ethics; but there’s more as well, pp. 337-339.
Newton. 2005. Business ethics and the natural environment, pp. 11-19.
Machan/Chesher. 2002. A primer on Business Ethics, pp. xiii-xv.
Adams/Maine. 1998. Business ethics for the 21st century, pp. 1-13.
Albach. 2005. Unternehmensethik: ein subjektiver Überblick, pp. 1-5.
Beauchamp/Bowie. 2004. Ethical theory and business, pp. 16-27.
Adams/Maine. 1998. Business ethics for the 21st century, pp. 25-27.
Machan/Chester. 2002. A Primer on business ethics, pp. 42-43.
Beauchamp/Bowie. 2004. Ethical theory and business, pp. 28-38.
Böhm. 1979. Gesellschaftliche verantwortliche Unternehmensführung, p. 53.
Shestack. 2005. CSR in a changing corporate world, pp. 98-100.
Adams/Maine. 1998. Business ethics for the 21st century, pp. 3-4.
Böhm. 1979. Gesellschaftliche verantwortliche Unternehmensführung, p. 54.
Ibid, p. 11.
Schwartz. 2004. Effective Corporate Codes of ethics: Perceptions of Code Users, pp. 323–343.
Mamic. 2004. Implementing codes of conduct, pp. 36-37.
International Labour Office. 2001. Codes of conduct and multinational enterprises, chapter I.
Böhm. 1979. Gesellschaftliche verantwortliche Unternehmensführung, p. 10.
Thomas. 2005. Business ethics, pp. 31-36.
Luo. 2007. Global dimensions of corporate governance, pp. 197-199.
Mullerat. 2005. The global responsibility of business, pp. 3-5.
Walsh/Cowry. 2005. CSR and corporate governance, pp. 38-53.
Pfitzer/Oser. 2003. Deutscher Corporate Governance Kodex, pp. VI-VII.
Schmeisser, et al. 2009. Shareholder Value approach versus Social Responsibility, pp. 85-88.
Geißler. 2004. Was ist compliance management?
Karmasin/Litschka. 2008. Wirtschaftsethik – Theorien, Strategien, pp. 176-182.
Beisheim, et al. 1999. Im Zeitalter der Globalisierung? pp. 16, 266-320.
Machan/Chester. 2002. A Primer on business ethics, pp. 159-169.
Senge, et al. 2010. The necessary revolution, pp. 5-22.
Spitzeck. 2008. Moralische Organisationsentwicklung.
- ISBN (eBook)
- 410 KB
- Institution / Hochschule
- Hochschule für Technik und Wirtschaft Berlin – Wirtschaftswissenschaften I, International Business
- business ethics codes corporate social responsibility | <urn:uuid:041a555f-19e1-4f6a-b080-da07df0bc293> | CC-MAIN-2021-21 | https://m.diplom.de/document/228516 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00614.warc.gz | en | 0.935353 | 5,918 | 2.53125 | 3 |
11. Brassicaceae Burnett
Cruciferae Jussieu, Ihsan A. Al-Shehbaz
Herbs or subshrubs [shrubs or, rarely, lianas or trees], annual, biennial, or perennial; usually terrestrial, rarely submerged aquatics; with pungent watery juice; scapose or not; pubescent or glabrous, usually without papillae or tubercles (multicellular glandular papillae or tubercles present in Bunias, Chorispora, and Parrya); taprooted or rhizomatous (rarely stoloniferous), caudex simple or branched, sometimes woody, rhizomes slender or thick. Trichomes unicellular, simple, stalked, or sessile; forked, stellate, dendritic, malpighiaceous (medifixed, 2-fid, appressed), or peltate and scalelike, eglandular. Stems (absent in Idahoa, sometimes Leavenworthia) usually erect, sometimes ascending, descending, prostrate, decumbent, or procumbent; branched or unbranched. Leaves (sometimes persistent) cauline usually present, basal present or not (sometimes rhizomal present in Cardamine), rosulate or not, usually alternate (sometimes opposite or whorled in Cardamine angustata, C. concatenata, and C. diphylla and in Lunaria annua; sometimes subopposite in C. dissecta and C. maxima and in Draba ogilviensis), usually simple, rarely trifoliolate or pinnately, palmately, or bipinnately compound; stipules absent [with tiny, stipulelike glands at base of petioles and pedicels]; petiolate, sessile, or subsessile (sessile auriculate or not, sometimes amplexicaul); blade margins entire, dentate, crenate, sinuate, repand, or dissected. Inflorescences terminal, usually racemose (racemes often corymbose or paniculate) or flowers solitary on pedicels from axils of rosette leaves; bracts usually absent, sometimes present. Pedicels present (persistent or caducous [rarely geotropic]). 
Flowers bisexual [unisexual], usually actinomorphic (zygomorphic in Iberis, sometimes in Pennellia, Streptanthus, and Teesdalia); perianth and androecium hypogynous; sepals usually caducous, rarely persistent, 4, in 2 decussate pairs (1 pair lateral, 1 median), distinct [connate], not saccate or lateral (inner) pair (or, rarely, both pairs) saccate, forming tubular, campanulate, or urceolate calyx; petals 4, alternate with sepals, usually cruciform, rarely in abaxial and adaxial pairs, rarely rudimentary or absent, claw differentiated or not from blade, blade sometimes reduced and much smaller than well-developed claw, basally unappendaged, or, rarely, appendaged, margins entire or emarginate to 2-fid, rarely pinnatifid [fimbriate or filiform]; stamens (2 or 4) 6 [8-24], in 2 whorls, usually tetradynamous (lateral outer pair shorter than median inner 2 pairs), rarely equal in length or in 3 pairs of unequal length; filaments (slender, sometimes winged, appendaged, or toothed): median pairs usually distinct, rarely connate; anthers dithecal, dehiscing by longitudinal slits, pollen grains 3(-11)-colpate, trinucleate; nectar glands receptacular, variable in number, shape, size, and disposition around filament base, always present opposite bases of lateral filaments, median glands present or absent; disc absent; pistil 1, 2-carpellate; ovary 2-locular with false septum connecting 2 placentae, rarely 1-locular and eseptate, placentation usually parietal, rarely apical; gynophore usually absent; style 1, persistent [caducous], sometimes obsolete or absent; stigma capitate or conical, entire or 2-lobed, lobes spreading or connivent, sometimes decurrent, distinct or connate, rarely elongated into horns or spines; ovules 1-300 per ovary, anatropous or campylotropous, bitegmic, usually crassinucellate, rarely tenuinucellate. 
Fruits usually capsular, usually 2-valved ((3 or) 4(-6) in Rorippa barbareifolia, (2 or) 4 in Tropidocarpum capparideum), termed siliques if length 3+ times width, or silicles if length less than 3 times width, sometimes nutletlike, lomentaceous, samaroid, or schizocarpic and [with] without a carpophore carrying the 1-seeded mericarp, dehiscent or indehiscent, segmented or not, torulose or smooth, terete, angled, or flat, often latiseptate (flattened parallel to septum) or angustiseptate (flattened at right angle to septum); gynophore usually absent, sometimes distinct; valves each not or obscurely veined, or prominently 1-7-veined, usually dehiscing acropetally, rarely basipetally, sometimes spirally or circinately coiled, glabrous or pubescent [spiny or glochidiate]; replum (persistent placenta) rounded, flattened, or indistinct (obsolete in Crambe, often perforate in Thysanocarpus); septum complete, perforated, reduced to a rim, or absent (obsolete in Crambe and Thysanocarpus, not differentiated from replum in Raphanus), sometimes with a midvein or anastomosing veins. Seeds usually yellow or brown, rarely black or white, flattened or plump, winged or not, or narrowly margined, ovoid, oblong, globose, or ovate, usually uniseriate or biseriate, sometimes aseriate, per locule, mucilaginous or not when wetted; embryo usually strongly curved, rarely straight with tiny radicle; cotyledons entire, emarginate, 3-fid to base, orientation to radicle: incumbent (embryo notorrhizal: radicle lying along back of 1 cotyledon), accumbent (embryo pleurorrhizal: radicle applied to margins of both cotyledons), conduplicate (embryo orthoplocal: cotyledons folded longitudinally around radicle), or spirally coiled (embryo spirolobal) [twice transversely folded (embryo diplecolobal)]; endosperm absent (germination epigeal).
Genera ca. 338, species ca. 3780 (97 genera, 744 species in the flora): nearly worldwide, especially temperate areas, with the highest diversity in the Irano-Turanian region, Mediterranean area, and western North America.
Of the 634 species of Brassicaceae (mustards or crucifers) native in the flora area, 616 (418 endemic) grow in the United States, 140 (12 endemic) in Canada, and 31 (1 endemic) in Greenland.
The latest comprehensive account of the Brassicaceae for North America (R. C. Rollins 1993) included Mexico and Central America and excluded Greenland. In that account, 667 native species were recognized for the continent; I place 37 of those in the synonymy of other species. Of the remaining 630 species, 111 are restricted to Mexico and Central America, and 519 are native to the flora area. This last number falls 114 species short of the 634 native species that I recognize in the flora area. Since Rollins’s account, 50 species were added to the flora in the past 15 years. Of these, 35 species were described as new, ten were added as native but previously overlooked or misidentified, and five have since become naturalized. Additionally, 72 species recognized in this treatment were treated by Rollins as either synonyms or infraspecific taxa of other species. The generic placement of 158 species in this account differs drastically from that in Rollins, though most of the changes involve the transfer of most of his species of Arabis to Boechera (59 spp.) and of Lesquerella to Physaria (54 spp.). The generic circumscriptions adopted herein are fully compatible with the rapidly accumulating wealth of molecular data, and all genera recognized here are monophyletic. Some examples demonstrate the differences between the two treatments. Arabis, in the sense of Rollins, included 80 species and 64 varieties; in this account, those 144 taxa are assigned to six genera in five tribes: Arabidopsis (2 spp.; tribe Camelineae), Arabis (16 spp.; tribe Arabideae), Boechera (109 spp.; tribe Boechereae), Pennellia (2 spp.; tribe Halimolobeae), Streptanthus (1 sp.; tribe Thelypodieae), and Turritis (1 sp.; tribe Camelineae). 
A similar division involves Thlaspi, a genus recognized by Rollins to include nine species, of which two are retained here in Thlaspi (tribe Thlaspideae), one is placed in Microthlaspi, and three in Noccaea (both in tribe Noccaeeae), two are reduced to synonymy of the latter genus, and one species of Noccaea is endemic to Mexico. Lepidium in this treatment includes Rollins’s Cardaria, Coronopus, and Stroganowia; Hesperidanthus includes his Caulostramina, Glaucocarpum, and Schoenocrambe (excluding its type).
The Brassicaceae include important crop plants that are grown as vegetables (e.g., Brassica, Eruca, Lepidium, Nasturtium, Raphanus) and condiments (Armoracia, Brassica, Eutrema, Sinapis). Vegetable oils of some species of Brassica, including B. napus (canola), probably rank first in terms of the world’s tonnage production. The Eurasian weed Arabidopsis thaliana (thale or mouse-ear cress) has become the model organism in experimental and molecular biology. The family also includes ornamentals in the genera Aethionema, Alyssum, Arabis, Aubrieta, Aurinia, Erysimum, Hesperis, Iberis, Lobularia, Lunaria, Malcolmia, and Matthiola. Finally, the flora includes 106 species of weeds from southwest Asia and Europe (R. C. Rollins and I. A. Al-Shehbaz 1986), of which 11 species of Lepidium have become noxious weeds in western North America.
The Brassicaceae have been regarded as a natural group for over 250 years, beginning with their treatment by Linnaeus in 1753 as the "Klass" Tetradynamia. More recently and based on a limited sampling of genera, W. S. Judd et al. (1994) recommended that the Brassicaceae and Capparaceae (including Cleomaceae) be united into one family, Brassicaceae. Molecular studies (J. C. Hall et al. 2002) suggested that three closely related families be recognized, with Brassicaceae sister to Cleomaceae, and both sister to Capparaceae. All three families have consistently been placed in one order (e.g., Capparales or Brassicales) by A. Cronquist (1988), A. L. Takhtajan (1997), and J. E. Rodman et al. (1996, 1998), as well as by the Angiosperm Phylogeny Group (APG) (http://www.mobot.org/MOBOT/research/APweb/). Brassicales includes families uniquely containing glucosinolates (mustard-oil glucosides), myrosin cells, racemose inflorescences, superior ovaries, often-clawed petals, and a suite of other characteristics (see the APG website).
Tribal classification of Brassicaceae has been subject to controversy. O. E. Schulz’s (1936) classification has been used for over 70 years, though many botanists (e.g., E. Janchen 1942; I. A. Al-Shehbaz 1984; M. Koch et al. 1999; O. Appel and Al-Shehbaz 2003; Koch et al. 2003; M. A. Beilstein et al. 2006; Al-Shehbaz et al. 2006) amply demonstrated the artificiality of that system. Schulz divided the family into 19 tribes and 30 subtribes based on characters (e.g., fruit length-to-width ratio, compression, dehiscence; cotyledonary position; sepal orientation) that exhibit tremendous convergence throughout the family. Of these, only the tribe Brassiceae was previously shown to be monophyletic.
Several molecular studies (e.g., R. A. Price et al. 1994; J. C. Hall et al. 2002; M. Koch 2003; T. Mitchell-Olds et al. 2005; C. D. Bailey et al. 2006; M. A. Beilstein et al. 2006, 2008) have demonstrated that the Brassicaceae are split into two major clades: the Mediterranean-Southwest Asian Aethionema and its sister clade that includes the rest of the family. Although Beilstein et al. showed that the family, excluding Aethionema, is divided into three major clades, such subdivision was based on only ca. 30% of the total number of genera. These three major clades still hold when nearly all genera of the family are investigated (S. I. Warwick et al., unpubl.).
Tribal assignments in the flora area are based on critical evaluation of morphology in connection with all published molecular data. To date, about 230 of the 338 genera of the family are placed in 35 tribes, including all large genera, which account for over 70% of the total species. Most of the remaining 108 genera would likely be assigned to the 35 tribes, be placed in new, smaller tribes, or be reduced to synonymy of larger genera. The delimitation of tribes for the flora area follows I. A. Al-Shehbaz et al. (2006), Al-Shehbaz and S. I. Warwick (2007), and D. A. German and Al-Shehbaz (2008) and differs from that of O. E. Schulz (1936) and the subsequent adjustments proposed by E. Janchen (1942) and Al-Shehbaz (1984, 1985, 1985b, 1986, 1987, 1988, 1988b, 1988c). Some of the tribes (e.g., Brassiceae and Lepidieae) are easily distinguished by relatively few characters; others (e.g., Arabideae, Camelineae, and Thelypodieae) are more difficult to separate unless a larger suite of characters is used. Because of the incomplete molecular knowledge on all genera of the family, the tribes, their genera, and species are listed herein alphabetically. Both R. C. Rollins (1993) and O. Appel and Al-Shehbaz (2003) arranged the genera alphabetically throughout, and the only difference in this account is the placement together of closely related genera within well-established monophyletic tribes.
Morphological data alone are sometimes unreliable in establishing phylogenetic relationships within Brassicaceae. Convergence is common throughout the family, and almost all morphological characters, especially of the fruits and embryos, which are quite heavily utilized in the delimitation of the genera and tribes, evolved independently. For example, rare character states, such as the spirolobal cotyledons, are known in at least three genera of three tribes (Bunias, Buniadeae; Erucaria, Brassiceae; Heliophila, Heliophileae), and lianas evolved independently in the South American Cremolobus (Cremolobeae), the South African Heliophila, and the Australian Lepidium (Lepidieae). Similarly, the reduction of chromosome number in the family to n = 4 occurred independently in two species of the Australian Stenopetalum and in at least 11 species of the North American Physaria. Other character states (e.g., zygomorphy, apetaly, reduction of stamen number to four, connation of median filaments, etc.) also evolved independently. Reexamination of morphology in light of molecular data is essential in order to understand the role of homoplasy and the evolution of various character states.
The literature on chromosome numbers of Brassicaceae is rather extensive, and rarely is an individual work cited herein in that regard. Instead, the recently compiled cytological data for the entire family (S. I. Warwick and I. A. Al-Shehbaz 2006) are consulted for all species.
Because the size of ovules is relatively small, it is very difficult to determine the number of ovules per ovary. The number of ovules per ovary is based on the sum of mature seeds and aborted ovules in the fruit. The length of style and type of stigma are also taken from the fruits, and the length of fruiting pedicels is measured from several proximal pedicles of the infructescence. Elevation ranges are normally given for a taxon; unfortunately, the range is not known for some taxa.
Generic delimitation in Brassicaceae is often difficult because most genera are distinguished primarily by fruit characters. The following artificial keys emphasize either flowering or fruiting characters, and the most reliable identification of a given plant to a genus can be achieved when specimens have both flowers and fruits, and when both keys are successfully used to identify it to the same genus. The keys are based on species rather than generic descriptions so that all of the morphological manifestations in a given genus are covered and, therefore, a genus may appear multiple times within one of the first four key groups. For example, genera with highly diversified vegetative and floral morphology (e.g., Cardamine, Caulanthus, Lepidium, and Streptanthus) appear in the keys to groups multiple times. Because of such coverage, keys to flowering material incorporate characteristics of a species, or groups of species, rather than of genera. Leads marked ( ¤) in keys for groups 1-4 indicate that mature fruits and seeds are needed for the identification of genera in their subordinate couplet(s).
SELECTED REFERENCES Al-Shehbaz, I. A. 1977. Protogyny in the Cruciferae. Syst. Bot. 2: 327-333. Al-Shehbaz, I. A. 1984. The tribes of Cruciferae (Brassicaceae) in the southeastern United States. J. Arnold Arbor. 65: 343-373. Al-Shehbaz, I. A. 1985. The genera of Brassiceae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 66: 279-351. Al-Shehbaz, I. A. 1985b. The genera of Thelypodieae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 66: 95-111. Al-Shehbaz, I. A. 1986. The genera of Lepidieae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 67: 265-311. Al-Shehbaz, I. A. 1987. The genera of Alysseae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 68: 185-240. Al-Shehbaz, I. A. 1988. The genera of Arabideae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 69: 85-166. Al-Shehbaz, I. A. 1988b. The genera of Anchonieae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 69: 193-212. Al-Shehbaz, I. A. 1988c. The genera of Sisymbrieae (Cruciferae; Brassicaceae) in the southeastern United States. J. Arnold Arbor. 69: 213-237. Al-Shehbaz, I. A., M. A. Beilstein, and E. A. Kellogg. 2006. Systematics and phylogeny of the Brassicaceae (Cruciferae): An overview. Pl. Syst. Evol. 259: 89-120. Al-Shehbaz, I. A., S. L. O’Kane, and R. A. Price. 1999. Generic placement of species excluded from Arabidopsis. Novon 9: 296-307. Al-Shehbaz, I. A. and S. I. Warwick. 2007. Two new tribes (Dontostemoneae and Malcolmieae) in the Brassicaceae (Cruciferae). Harvard Pap. Bot. 12: 429-433. Appel, O. and I. A. Al-Shehbaz. 2003. Cruciferae. In: K. Kubitzki et al., eds. 1990+. The Families and Genera of Vascular Plants. 9+ vols. Berlin etc. Vol. 5, pp. 75-174. Bailey, C. D. et al. 2006. Toward a global phylogeny of the Brassicaceae. Molec. Biol. Evol. 23: 2142-2160. Bailey, C. D., R. A. Price, and J. J. Doyle. 2002. 
Systematics of the halimolobine Brassicaceae: Evidence from three loci and morphology. Syst. Bot. 27: 318-332. Bailey, C. D., I. A. Al-Shehbaz, and G. Rajanikanth. 2007. Generic limits in the tribe Halimolobeae and the description of the new genus Exhalimolobos (Brassicaceae). Syst. Bot. 32: 140-156. Beilstein, M. A., I. A. Al-Shehbaz, and E. A. Kellogg. 2006. Brassicaceae phylogeny and trichome evolution. Amer. J. Bot. 93: 607-619. Beilstein, M. A., I. A. Al-Shehbaz, S. Mathews, and E. A. Kellogg. 2008. Brassicaceae phylogeny inferred from phytochrome A and ndhF sequence data: Tribes and trichomes revisited. Amer. J. Bot. 95: 1307-1327. Bowman, J. L. 2006. Molecules and morphology: Comparative developmental genetics of the Brassicaceae. Pl. Syst. Evol. 259: 199-215. German, D. A. and I. A. Al-Shehbaz. 2008. Five additional tribes (Aphragmeae, Biscutelleae, Calepineae, Conringieae, and Erysimeae) in the Brassicaceae (Cruciferae). Harvard Pap. Bot. 13: 165-170. Hall, J. C., K. J. Sytsma, and H. H. Iltis. 2002. Phylogeny of Capparaceae and Brassicaceae based on chloroplast sequence data. Amer. J. Bot. 89: 1826-1842. Hauser, L. A. and T. J. Crovello. 1982. Numerical analysis of generic relationships in Thelypodieae (Brassicaceae). Syst. Bot. 7: 249-268. Janchen, E. 1942. Das System der Cruciferen. Oesterr. Bot. Z. 91: 1-18. Koch, M. 2003. Molecular phylogenetics, evolution and population biology in Brassicaceae. In: A. K. Sharma and A. Sharma, eds. 2003+. Plant Genome: Biodiversity and Evolution. 2+ vols. in parts. Enfield, N. H. Vol. 1, part A, pp. 1-35. Koch, M. et al. 1999b. Molecular systematics of Arabidopsis and Arabis. Pl. Biol. (Stuttgart) 1: 529-537. Koch, M. et al. 2003b. Molecular systematics, evolution, and population biology in the mustard family (Brassicaceae). Ann. Missouri Bot. Gard. 90: 151-171. Koch, M., B. Haubold, and T. Mitchell-Olds. 2000. 
Comparative analysis of chalcone synthase and alcohol dehydrogenase loci in Arabidopsis, Arabis and related genera (Brassicaceae). Molec. Biol. Evol. 17: 1483-1498. Koch, M., B. Haubold, and T. Mitchell-Olds. 2001. Molecular systematics of the Brassicaceae: Evidence from coding plastidic matK and nuclear Chs sequences. Amer. J. Bot. 88: 534-544. Lysak, M. A. and C. Lexer. 2006. Towards the era of comparative evolutionary genomics in Brassicaceae. Pl. Syst. Evol. 259: 175-198. Mitchell-Olds, T., I. A. Al-Shehbaz, M. Koch, and T. F. Sharbel. 2005. Crucifer evolution in the post-genomic era. In: R. J. Henry, ed. 2005. Plant Diversity and Evolution: Genotypic and Phenotypic Variation in Higher Plants. Wallingford and Cambridge, Mass. Pp. 119-137. Payson, E. B. 1923. A monographic study of Thelypodium and its immediate allies. Ann. Missouri Bot. Gard. 9: 233-324. Rollins, R. C. 1993. The Cruciferae of Continental North America: Systematics of the Mustard Family from the Arctic to Panama. Stanford. Rollins R. C. and I. A. Al-Shehbaz. 1986. Weeds of south-west Asia in North America with special reference to the Cruciferae. Proc. Roy. Soc. Edinburgh, B 89: 289-299. Rollins, R. C. and U. C. Banerjee. 1976. Trichomes in studies of the Cruciferae. In: J. G. Vaughn et al., eds. 1976. The Biology and Chemistry of the Cruciferae. London and New York. Pp. 145-166. Rollins, R. C. and U. C. Banerjee. 1979. Pollen of the Cruciferae. Publ. Bussey Inst. Harvard Univ. 1979: 33-64. Sabourin, A. et al. 1991. Guide des Cruciféres Sauvages de l’Est du Canada (Québec, Ontario et Maritimes). Montréal. Schulz, O. E. 1936. Cruciferae. In: H. G. A. Engler et al., eds. 1924+. Die natürlichen Pflanzenfamilien. ....., ed. 2. 26+ vols. Leipzig and Berlin. Vol. 17b, pp. 227-658. Warwick, S. I. et al. 2006. Phylogenetic position of Arabis arenicola and generic limits of Eutrema and Aphragmus (Brassicaceae) based on sequences of nuclear ribosomal DNA. Canad. J. Bot. 84: 269-281. Warwick, S. I. et al. 
2006b. Brassicaceae: Species checklist and database on CD-ROM. Pl. Syst. Evol. 259: 249-258. Warwick, S. I. and L. D. Black. 1991. Molecular systematics of Brassica and allied genera (subtribe Brassicinae, Brassiceae)—Chloroplast genome and cytodeme congruence. Theor. Appl. Genet. 82: 81-92. Warwick, S. I. and L. D. Black. 1993. Molecular relationships in subtribe Brassicinae (Cruciferae, tribe Brassiceae). Canad. J. Bot. 71: 906-918. Warwick, S. I. and C. A. Sauder. 2005. Phylogeny of tribe Brassiceae (Brassicaceae) based on chloroplast restriction site polymorphisms and nuclear ribosomal internal transcribed spacer and chloroplast trnL intron sequences. Canad. J. Bot. 83: 467-483. Warwick, S. I., C. A. Sauder, and I. A. Al-Shehbaz. 2008. Phylogenetic relationships in the tribe Alysseae (Brassicaceae) based on nuclear ribosomal ITS DNA sequences. Canad. J. Bot. 86: 315-336. Warwick, S. I., C. A. Sauder, I. A. Al-Shehbaz, and F. Jacquemoud. 2007. Phylogenetic relationships in the tribes Anchonieae, Chorisporeae, Euclidieae, and Hesperideae (Brassicaceae) based on nuclear ribosomal ITS DNA sequences. Ann. Missouri Bot. Gard. 94: 56 -78. | <urn:uuid:d654fa05-30e5-4a49-9e64-843cf823a3f0> | CC-MAIN-2021-21 | http://efloras.org/florataxon.aspx?flora_id=1&taxon_id=10120 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00416.warc.gz | en | 0.792125 | 6,490 | 2.59375 | 3 |
Note: This answer was written in 2013. Many things have changed in the following years, which means that this answer should primarily seen as how best practice used to be in 2013.
We need to hash passwords as a second line of defence. A server which can authenticate users necessarily contains, somewhere in its entrails, some data which can be used to validate a password. A very simple system would just store the passwords themselves, and validation would be a simple comparison. But if a hostile outsider were to gain a simple glimpse at the contents of the file or database table which contains the passwords, then that attacker would learn a lot. Unfortunately, such partial, read-only breaches do occur in practice (a mislaid backup tape, a decommissioned but not wiped-out hard disk, an aftermath of a SQL injection attack -- the possibilities are numerous). See this blog post for a detailed discussion.
Since the overall contents of a server that can validate passwords are necessarily sufficient to indeed validate passwords, an attacker who obtained a read-only snapshot of the server is in a position to mount an offline dictionary attack: he tries potential passwords until a match is found. This is unavoidable. So we want to make that kind of attack as hard as possible. Our tools are the following:
Cryptographic hash functions: these are fascinating mathematical objects which everybody can compute efficiently, and yet nobody knows how to invert them. This looks good for our problem - the server could store a hash of a password; when presented with a putative password, the server just has to hash it to see if it gets the same value; and yet, knowing the hash does not reveal the password itself.
Salts: among the advantages of the attacker over the defender is parallelism. The attacker usually grabs a whole list of hashed passwords, and is interested in breaking as many of them as possible. He may try to attack several in parallel. For instance, the attacker may consider one potential password, hash it, and then compare the value with 100 hashed passwords; this means that the attacker shares the cost of hashing over several attacked passwords. A similar optimisation is precomputed tables, including rainbow tables; this is still parallelism, with a space-time change of coordinates.
The common characteristic of all attacks which use parallelism is that they work over several passwords which were processed with the exact same hash function. Salting is about using not one hash function, but a lot of distinct hash functions; ideally, each instance of password hashing should use its own hash function. A salt is a way to select a specific hash function among a big family of hash functions. Properly applied salts will completely thwart parallel attacks (including rainbow tables).
Slowness: computers become faster over time (Gordon Moore, co-founder of Intel, theorized it in his famous law). Human brains do not. This means that attackers can "try" more and more potential passwords as years pass, while users cannot remember more and more complex passwords (or flatly refuse to). To counter that trend, we can make hashing inherently slow by defining the hash function to use a lot of internal iterations (thousands, possibly millions).
We have a few standard cryptographic hash functions; the most famous are MD5 and the SHA family. Building a secure hash function out of elementary operations is far from easy. When cryptographers want to do that, they think hard, then harder, and organize a tournament where the functions fight each other fiercely. When hundreds of cryptographers gnawed and scraped and punched at a function for several years and found nothing bad to say about it, then they begin to admit that maybe that specific function could be considered as more or less secure. This is just what happened in the SHA-3 competition. We have to use this way of designing hash function because we know no better way. Mathematically, we do not know if secure hash functions actually exist; we just have "candidates" (that's the difference between "it cannot be broken" and "nobody in the world knows how to break it").
A basic hash function, even if secure as a hash function, is not appropriate for password hashing, because:
- it is unsalted, allowing for parallel attacks (rainbow tables for MD5 or SHA-1 can be obtained for free, you do not even need to recompute them yourself);
- it is way too fast, and gets faster with technological advances. With a recent GPU (i.e. off-the-shelf consumer product which everybody can buy), hashing rate is counted in billions of passwords per second.
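To make the salting point concrete, here is a small illustrative Python sketch (a single SHA-256, even salted, is still far too fast for real password storage — this only demonstrates why salts defeat precomputation):

```python
import hashlib
import os

# Unsalted hashing: two users with the same password produce the same
# record, so one precomputed (rainbow) table attacks everyone at once.
unsalted_1 = hashlib.sha256(b"hunter2").hexdigest()
unsalted_2 = hashlib.sha256(b"hunter2").hexdigest()

# Salted hashing: a unique random salt per user effectively selects a
# different hash function per record, so equal passwords look unrelated.
def salted_record(password: bytes):
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + password).hexdigest()

salt_1, salted_1 = salted_record(b"hunter2")
salt_2, salted_2 = salted_record(b"hunter2")
```

The two unsalted digests are identical; the two salted ones differ, which is exactly what thwarts the parallel attacks described above.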
So we need something better. It so happens that slapping together a hash function and a salt, and iterating it, is not easier to do than designing a hash function -- at least, if you want the result to be secure. There again, you have to rely on standard constructions which have survived the continuous onslaught of vindicative cryptographers.
Good Password Hashing Functions
PBKDF2 comes from PKCS#5. It is parameterized with an iteration count (an integer, at least 1, no upper limit), a salt (an arbitrary sequence of bytes, no constraint on length), a required output length (PBKDF2 can generate an output of configurable length), and an "underlying PRF". In practice, PBKDF2 is always used with HMAC, which is itself a construction built over an underlying hash function. So when we say "PBKDF2 with SHA-1", we actually mean "PBKDF2 with HMAC with SHA-1".
Advantages of PBKDF2:
- Has been specified for a long time, seems unscathed for now.
- Is already implemented in various frameworks (e.g. it is provided with .NET).
- Highly configurable (although some implementations do not let you choose the hash function, e.g. the one in .NET is for SHA-1 only).
- Received NIST blessings (modulo the difference between hashing and key derivation; see later on).
- Configurable output length (again, see later on).
Drawbacks of PBKDF2:
- CPU-intensive only, thus amenable to high optimization with GPU (the defender is a basic server which does generic things, i.e. a PC, but the attacker can spend his budget on more specialized hardware, which will give him an edge).
- You still have to manage the parameters yourself (salt generation and storage, iteration count encoding...). There is a standard encoding for PBKDF2 parameters but it uses ASN.1 so most people will avoid it if they can (ASN.1 can be tricky to handle for the non-expert).
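As a sketch of how PBKDF2 is typically wired up, using Python's standard library (the parameters below — a 16-byte random salt, SHA-256, 100,000 iterations — are assumptions for illustration; tune the count to your own hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Return (salt, derived_key); store both, plus the iteration count."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, dk

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 100_000) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(dk, expected)
```

Note that the salt and iteration count are stored in the clear next to the hash; only the password is secret.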
bcrypt was designed by reusing and expanding elements of a block cipher called Blowfish. The iteration count is a power of two, which is a tad less configurable than PBKDF2, but sufficiently so nevertheless. This is the core password hashing mechanism in the OpenBSD operating system.
Advantages of bcrypt:
- Many available implementations in various languages (see the links at the end of the Wikipedia page).
- More resilient to GPU; this is due to details of its internal design. The bcrypt authors made it so voluntarily: they reused Blowfish because Blowfish was based on an internal RAM table which is constantly accessed and modified throughout the processing. This makes life much harder for whoever wants to speed up bcrypt with a GPU (GPU are not good at making a lot of memory accesses in parallel). See here for some discussion.
- Standard output encoding which includes the salt, the iteration count and the output as one simple to store character string of printable characters.
Drawbacks of bcrypt:
- Output size is fixed: 192 bits.
- While bcrypt is good at thwarting GPU, it can still be thoroughly optimized with FPGA: modern FPGA chips have a lot of small embedded RAM blocks which are very convenient for running many bcrypt implementations in parallel within one chip. It has been done.
- Input password size is limited to 51 characters. In order to handle longer passwords, one has to combine bcrypt with a hash function (you hash the password and then use the hash value as the "password" for bcrypt). Combining cryptographic primitives is known to be dangerous (see above) so such games cannot be recommended on a general basis.
scrypt is a much newer construction (designed in 2009) which builds over PBKDF2 and a stream cipher called Salsa20/8, but these are just tools around the core strength of scrypt, which is RAM. scrypt has been designed to inherently use a lot of RAM (it generates some pseudo-random bytes, then repeatedly read them in a pseudo-random sequence). "Lots of RAM" is something which is hard to make parallel. A basic PC is good at RAM access, and will not try to read dozens of unrelated RAM bytes simultaneously. An attacker with a GPU or a FPGA will want to do that, and will find it difficult.
Advantages of scrypt:
- A PC, i.e. exactly what the defender will use when hashing passwords, is the most efficient platform (or close enough) for computing scrypt. The attacker no longer gets a boost by spending his dollars on GPU or FPGA.
- One more way to tune the function: memory size.
Drawbacks of scrypt:
- Still new (my own rule of thumb is to wait at least 5 years of general exposure, so no scrypt for production until 2014 - but, of course, it is best if other people try scrypt in production, because this gives extra exposure).
- Not as many available, ready-to-use implementations for various languages.
- Unclear whether the CPU / RAM mix is optimal. For each of the pseudo-random RAM accesses, scrypt still computes a hash function. A cache miss will be about 200 clock cycles, one SHA-256 invocation is close to 1000. There may be room for improvement here.
- Yet another parameter to configure: memory size.
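Python (3.6+, when built against OpenSSL 1.1 or newer) exposes scrypt through hashlib. A minimal sketch follows; the parameters N=2^14, r=8, p=1 are a commonly cited interactive-login starting point, not a universal recommendation:

```python
import hashlib
import os

def scrypt_hash(password: str, n: int = 2**14, r: int = 8, p: int = 1):
    """Return (salt, key). n is the CPU/memory cost, r the block size,
    p the parallelization factor; RAM use is roughly 128 * r * n bytes."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                         n=n, r=r, p=p, dklen=32)
    return salt, key
```

With these parameters each hash touches about 16 MB of RAM, which is the whole point: that footprint is cheap for the defender's PC and expensive to replicate thousands of times on a GPU or FPGA.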
OpenPGP Iterated And Salted S2K
I cite this one because you will use it if you do password-based file encryption with GnuPG. That tool follows the OpenPGP format which defines its own password hashing functions, called "Simple S2K", "Salted S2K" and "Iterated and Salted S2K". Only the third one can be deemed "good" in the context of this answer. It is defined as the hash of a very long string (configurable, up to about 65 megabytes) consisting of the repetition of an 8-byte salt and the password.
As far as these things go, OpenPGP's Iterated And Salted S2K is decent; it can be considered as similar to PBKDF2, with less configurability. You will very rarely encounter it outside of OpenPGP, as a stand-alone function.
Unix's crypt()
Recent Unix-like systems (e.g. Linux), for validating user passwords, use iterated and salted variants of the crypt() function based on good hash functions, with thousands of iterations. This is reasonably good. Some systems can also use bcrypt, which is better.
The old crypt() function, based on the DES block cipher, is not good enough:
- It is slow in software but fast in hardware, and can be made fast in software too but only when computing several instances in parallel (technique known as SWAR or "bitslicing"). Thus, the attacker is at an advantage.
- It is still quite fast, with only 25 iterations.
- It has a 12-bit salt, which means that salt reuse will occur quite often.
- It truncates passwords to 8 characters (characters beyond the eighth are ignored) and it also drops the upper bit of each character (so you are more or less stuck with ASCII).
But the more recent variants, which are active by default, will be fine.
Bad Password Hashing Functions
About everything else, in particular virtually every homemade method that people relentlessly invent.
For some reason, many developers insist on designing functions themselves, and seem to assume that "secure cryptographic design" means "throw together every kind of cryptographic or non-cryptographic operation that can be thought of". See this question for an example. The underlying principle seems to be that the sheer complexity of the resulting utterly tangled mess of instructions will befuddle attackers. In practice, though, the developer himself will be more confused by his own creation than the attacker.
Complexity is bad. Homemade is bad. New is bad. If you remember that, you'll avoid 99% of problems related to password hashing, or cryptography, or even security in general.
Password hashing in Windows operating systems used to be mindbogglingly awful and now is just terrible (unsalted, non-iterated MD4).
Key Derivation
Up to now, we considered the question of hashing passwords. A close problem is about transforming a password into a symmetric key which can be used for encryption; this is called key derivation and is the first thing you do when you "encrypt a file with a password".
It is possible to make contrived examples of password hashing functions which are secure for the purpose of storing a password validation token, but terrible when it comes to generating symmetric keys; and the converse is equally possible. But these examples are very "artificial". For practical functions like the one described above:
- The output of a password hashing function is acceptable as a symmetric key, after possible truncation to the required size.
- A Key Derivation Function can serve as a password hashing function as long as the "derived key" is long enough to avoid "generic preimages" (the attacker is just lucky and finds a password which yields the same output). An output of more than 100 bits or so will be enough.
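For instance, using PBKDF2 as a key derivation function simply means asking for an output of the key size you need. The 16-byte length below is for a hypothetical 128-bit symmetric key (e.g. for AES-GCM), and the salt and iteration count are illustrative assumptions:

```python
import hashlib
import os

salt = os.urandom(16)
# Derive a 128-bit symmetric key from a password; the same salt and
# iteration count must be stored or transmitted to re-derive it later.
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                          salt, 200_000, dklen=16)
```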
Indeed, PBKDF2 and scrypt are KDF, not password hashing function -- and NIST "approves" of PBKDF2 as a KDF, not explicitly as a password hasher (but it is possible, with only a very minute amount of hypocrisy, to read NIST's prose in such a way that it seems to say that PBKDF2 is good for hashing passwords).
Conversely, bcrypt is really a block cipher (the bulk of the password processing is the "key schedule") which is then used in CTR mode to produce three blocks (i.e. 192 bits) of pseudo-random output, making it a kind of hash function. bcrypt can be turned into a KDF with a little surgery, by using the block cipher in CTR mode for more blocks. But, as usual, we cannot recommend such homemade transforms. Fortunately, 192 bits are already more than enough for most purposes (e.g. symmetric encryption with GCM or EAX only needs a 128-bit key).
How many iterations?
As many as possible! This salted-and-slow hashing is an arms race between the attacker and the defender. You use many iterations to make the hashing of a password harder for everybody. To improve security, you should set that number as high as you can tolerate on your server, given the tasks that your server must otherwise fulfill. Higher is better.
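One pragmatic way to pick the count is to benchmark on the actual production server and take the highest value whose latency you can tolerate. A rough sketch (the doubling search and the default 100 ms target are arbitrary assumptions, not part of any standard):

```python
import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 0.1, start: int = 10_000) -> int:
    """Double the PBKDF2 iteration count until one hash takes ~target_seconds."""
    salt = os.urandom(16)
    n = start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, n)
        if time.perf_counter() - t0 >= target_seconds:
            return n
        n *= 2
```

Re-run the calibration when you upgrade hardware, and store the chosen count with each hash so old entries remain verifiable.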
Collisions and MD5
MD5 is broken: it is computationally easy to find a lot of pairs of distinct inputs which hash to the same value. These are called collisions.
However, collisions are not an issue for password hashing. Password hashing requires the hash function to be resistant to preimages, not to collisions. Collisions are about finding pairs of messages which give the same output without restriction, whereas in password hashing the attacker must find a message which yields a given output that the attacker does not get to choose. This is quite different. As far as we know, MD5 is still (almost) as strong as it has ever been with regard to preimages (there is a theoretical preimage attack, but it remains far too expensive to run in practice).
The real problem with MD5 as it is commonly used in password hashing is that it is very fast, and unsalted. However, PBKDF2 used with MD5 would be robust. You should still use SHA-1 or SHA-256 with PBKDF2, if only for public relations: people get nervous when they hear "MD5".
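Indeed, nothing in Python's hashlib stops you from instantiating PBKDF2 over MD5 — the sketch below works, and it is robust in the sense described above, although SHA-256 remains the better public-relations choice:

```python
import hashlib
import os

salt = os.urandom(16)
# PBKDF2-HMAC-MD5: salting, iterations and preimage resistance are what
# matter here; MD5's collision weakness does not come into play.
dk_md5 = hashlib.pbkdf2_hmac("md5", b"hunter2", salt, 50_000)
# The drop-in SHA-256 variant, preferred in practice.
dk_sha = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 50_000)
```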
The Salt
The main and only point of the salt is to be as unique as possible. Whenever a salt value is reused anywhere, this has the potential to help the attacker.
For instance, if you use the user name as salt, then an attacker (or several colluding attackers) could find it worthwhile to build rainbow tables which attack the password hashing function when the salt is "admin" (or "root" or "joe") because there will be several, possibly many sites around the world which will have a user named "admin". Similarly, when a user changes his password, he usually keeps his name, leading to salt reuse. Old passwords are valuable targets, because users have the habit of reusing passwords in several places (that's known to be a bad idea, and advertised as such, but they will do it nonetheless because it makes their life easier), and also because people tend to generate their passwords "in sequence": if you learn that Bob's old password is "SuperSecretPassword37", then Bob's current password is probably "SuperSecretPassword38" or "SuperSecretPassword39".
The cheap way to obtain uniqueness is to use randomness. If you generate your salt as a sequence of random bytes from the cryptographically secure PRNG that your operating system offers (CryptGenRandom()...) then you will get salt values which will be "unique with a sufficiently high probability". 16 bytes are enough so that you will never see a salt collision in your life, which is overkill but simple enough.
UUIDs are a standard way of generating "unique" values. Note that "version 4" UUIDs just use randomness (122 random bits), as explained above. A lot of programming frameworks offer simple-to-use functions to generate UUIDs on demand, and these can be used as salts.
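Both approaches are one-liners in Python (the choice of 16 bytes matches the overkill-but-simple figure above):

```python
import os
import uuid

salt_from_os = os.urandom(16)        # 128 random bits from the OS CSPRNG
salt_from_uuid = uuid.uuid4().bytes  # 16 bytes; 122 of the bits are random
```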
Salts are not meant to be secret; otherwise we would call them keys. You do not need to make salts public, but if you have to make them public (e.g. to support client-side hashing), then don't worry too much about it. Salts are there for uniqueness. Strictly speaking, the salt is nothing more than the selection of a specific hash function within a big family of functions.
The Pepper
Cryptographers can never let a metaphor alone; they must extend it with further analogies and bad puns. "Peppering" is about using a secret salt, i.e. a key. If you use a "pepper" in your password hashing function, then you are switching to a quite different kind of cryptographic algorithm; namely, you are computing a Message Authentication Code over the password. The MAC key is your "pepper".
Peppering makes sense if you can have a secret key which the attacker will not be able to read. Remember that we use password hashing because we consider that an attacker could grab a copy of the server database, or possibly of the whole disk of the server. A typical scenario would be a server with two disks in RAID 1. One disk fails (electronic board fries - this happens a lot). The sysadmin replaces the disk, the mirror is rebuilt, no data is lost due to the magic of RAID 1. Since the old disk is dysfunctional, the sysadmin cannot easily wipe its contents. He just discards the disk. The attacker searches through the garbage bags, retrieves the disk, replaces the board, and lo! He has a complete image of the whole server system, including database, configuration files, binaries, operating system... the full monty, as the British say. For peppering to be really applicable, you need to be in a special setup where there is something more than a PC with disks; you need a HSM. HSM are very expensive, both in hardware and in operational procedure. But with a HSM, you can just use a secret "pepper" and process passwords with a simple HMAC (e.g. with SHA-1 or SHA-256). This will be vastly more efficient than bcrypt/PBKDF2/scrypt and their cumbersome iterations. Also, usage of a HSM will look extremely professional when doing a WebTrust audit.
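A software-only sketch of the idea — HMAC the password under the secret pepper, then run the usual slow salted hash. In the HSM scenario described above the HMAC alone suffices, and the pepper would live inside the HSM rather than in a process variable as it does in this illustration:

```python
import hashlib
import hmac
import os

PEPPER = os.urandom(32)  # stand-in: in practice fetched from an HSM/secret store

def peppered_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # MAC the password under the secret pepper first...
    mac = hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()
    # ...then feed the result through a normal salted, slow hash.
    return hashlib.pbkdf2_hmac("sha256", mac, salt, iterations)
```

An attacker who steals only the database (salts and outputs) cannot even start a dictionary attack without the pepper.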
Client-Side Hashing
Since hashing is (deliberately) expensive, it could make sense, in a client-server situation, to harness the CPU of the connecting clients. After all, when 100 clients connect to a single server, the clients collectively have a lot more muscle than the server.
To perform client-side hashing, the communication protocol must be enhanced to support sending the salt back to the client. This implies an extra round-trip, when compared to the simple client-sends-password-to-server protocol. This may or may not be easy to add to your specific case.
In the context of SRP, password hashing necessarily occurs on the client side.
What To Use?
Use bcrypt. PBKDF2 is not bad either. If you use scrypt you will be a "slightly early adopter" with the risks that are implied by this expression; but it would be a good move for scientific progress ("crash dummy" is a very honourable profession).
Jamaica Kincaid’s “Girl” (1978) illustrates a conversation with a woman and a young girl about how a girl is to behave. Here is the judgment of God: Unto the woman he said, I will greatly multiply thy sorrow and The responsibilities in the family are allocated to their sex (gender). Women are to be learners, 'There is nothing I can do.'" Christ himself being the chief corner stone." So it was with Eve. equality beneath the cross. Changing roles of men and women adapting to changes in work and family life in Australia. foot of man, but a rib from his side. and among His people to the end of time, He moved men to write. Thus Adam stood as the foremost of God's creatures, and Eve stood proudly at alone; I will make him an help meet (fit or suitable) for him. responsible heads of the nation, both spiritual and civil, and the heads of the women also, who trusted in God, adorned themselves, being in International studies demonstrate that when the economy and political organization of a society change, women take the lead in helping the family adjust to new realities and challenges. St.
Paul says that she was, God has a way of meting out punishment that reflects the nature. Instead of the anticipated joy she received But observe that this receive the same Body and Blood of the Lord as a seal for the forgiveness of times of sickness and death women care for the physical needs of the families Christian liberty. And "What's my role as a man in marriage?" The matter of ordaining women for the public ministry has become one of the They didn't embarrass Apollos publicly but took him to their home, So it is that a woman prophesies most naturally and most effectively in the Satan, who had previously attempted to cast off the yoke expounded unto him the way of God more perfectly" (Acts 18). 1920. I was raised to think that I would have to work at a white-collar job one day. But this does not deny to woman the right, duty, We can see evidence of this order Child bearing and personally, nine of them being women. the Lord. twelve were with him, And certain women. In some tribes, the chief was a man, but he was elected by the women. I came across this beautifully written poem about the woman. A lot happened globally throughout the 1900s and their roles started to really take a change there. reports that the four daughters of Philip the Evangelist possessed the gift of And so –.
God did not create For after this manner in the old time the holy When he arrives home, he simply eats dinner and watches the television. the first family quarrel. Timothy (I Tim. the word "love," St. Paul indicates the direction such "love" is to take—that of A study of the immediate context will help us arrive at the answer. living cell from inorganic matter and then woman from that initial amoeba on Lord” (Eph. That would have the How is this principle of "headship"—the woman to the man, and the man to All subsequent decisions are also to part of her very being. loveless manner—that is, without consideration for the well-being, feelings, and Notwithstanding she shall be saved in Paul expressed it briefly in his first letter to the Corinthians: But I would have you know that the head of every man is Christ; … Synod. of Paul's inspired words, it is necessary to keep clearly in mind just whom Paul urges the man to assert his headship over the woman. genuine faith, for the wife is to "submit" herself ''as unto the Lord." In Ephesians 5:22 – “Wives, submit yourselves unto your own husbands, as unto the Lord”. Sin degraded the man's and woman's roles coming from the carnal nature and pride of mankind. nature, for God bestowed hair upon woman as a natural, but distinctive, Often, in doing so, many times her own dreams get sacrificed, but that goes without a complaint. thy conception; in sorrow thou shalt bring forth children; and thy it not be that outward adorning of plaiting the hair, and of have always seized upon these opportunities to serve. Women were well respected in the tribes for their hard work and providing food from farming. St. Peter assures immediately. the church. So, we’re not talking about some “two-headed” monster. chapter one gives the general account of the creation of man and woman, and More up to date evidence for the segregating roles comes from looking at the payments that children under sixteen are given in the home. 
to the eleventh chapter of Paul's first letter to the Corinthians, we find him missionary work of St. Paul. While there are definitely biological differences between males and females, genders are more so constructed by society. permanently perpetuated in the very House of God. Are Paul's words written to the Corinthians (I, 14:34-35) an escalation of The Holy Spirit does not overthrow the Consider the words of St. public worship was conducted in apostolic times. For no man ever yet hated his The roles of women in each society were different, but every notable civilization at the time had women in lesser roles than men. instituted by Josiah, the book of the Law was again found and taken to the Ephesians 5:22 Wives, submit yourselves unto your own husbands, as unto the Lord. self-discipline and so are to show forth the fruits of their common faith in the Presbyterian Church U.S.A. (Northern) gave full ordination to women. transgression. the ornament of a meek and quiet spirit, which is in the sight of The fact that her Creator had among Christians and in all ruling in the church in matters not settled by the Jesus. wearing of gold, or of putting on of apparel; But let it be the Abraham, calling him lord: whose daughters ye are, as long as In 1970, Changing Role of Women in Society How was the status of woman and their rights represented in western society in the 1600 to early 20th century? In this connection questions have been raised as to that which Peter and Paul according to knowledge, giving honor unto the wife, as unto the Gen. 2: 21-23. Amendment to the Constitution, sometimes known as the Susan B. Anthony was not deceived, but the woman being deceived was in the After the Lord had opened her heart, she "constrained" Paul and his He moves to prophesy whom He will and whenever He so wills. woman's position in society, but rather the power of the Gospel working in the on the work of testifying there are so many tedious and unglamorous jobs that sins. 
As a result, they were seen as a minority. of grace. such service is done unto the Lord. accustomed to pray with head covered. against his wife. was not created to exist independently of man. It was in connection with prophesying, that is, public preaching and teaching in he did by going to see Huldah, the prophetess, who brought the King the Word So it was that Elisabeth "was filled with the Holy Ghost" and praised the always overshadowed by the gift of prophesying, which corresponds to our The Corinthian congregation was predominantly Greek. the Lord caused a deep sleep to fall upon Adam, and he slept: to include this prophetic utterance in his account of the Gospel, the Spirit of but with the account that the Spirit of God moves Moses to record. The biblical example affirms that men are to be the leaders in the home, church, and state. Phoebe was a deaconess altars, sacrificing, and calling upon the Name of the Lord. An instructive chapter two supplements with details and thus forms a transition to chapter Corinthians 11 St. Paul discusses how the order of creation may be manifested is this understanding of the order of creation. .". sphere of woman's activity. Created with Faith begets love, and love begets service. of holy insight that this couple received from St. Paul. Society was convinced that women were not capable of performing any work outside of home. are his words parallel to his instruction given to his representative, young . In part by nature and in part by the customs of Let each Christian man faithfully discharge the responsibilities of leadership that God has made Mary a teacher of the Church till the end of time. established the relative position of man to woman. Many articles have been written about the social norms of gender, and the differences between men and women. I Cor. There are many women, sitting on church pews on Priscilla to round out the instruction of Apollos who, in turn, may well have been 2: 11-15. things today? 
The Role of Woman in the Home and the Church

Before we can understand the purpose, function in society, relationship to man, or destiny of woman, we must know her origin. Two accounts compete for attention: the evolutionary account, in which humanity developed over billions of years by chance and happenstance, and the truth as revealed by God in His Word. What follows rests on the second.

In Genesis 1 we read: "And God said, Let us make man in our image, after our likeness: and let him have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth... So God created man in his own image, in the image of God created he him; male and female created he them." Despite this equality, there is in Genesis 2 a more detailed account of the creation of the two human beings that reveals differences in their God-given functions and responsibilities. Eve was created after Adam, from Adam, as a "help meet for" him (Genesis 2:18), that is, a helper fit or suitable for man: auxiliary in design and function, yet fully sharing the image of God. Possessing an intelligence that was a reflection of his Creator, Adam had insight into the reality of God as witnessed in nature and in his own conscience. When the Lord God had inspected His creation, He found it all to be "very good"; the whole creation fitted together with no friction whatever between any of the parts.

Satan, who had himself rejected subordination to God by rebelling (Isaiah 14:12-15), approached not Adam but Eve. By suggestion, by insinuation, by falsehood he succeeded in causing Eve to doubt the goodness and good will of her Creator. The forbidden fruit became appealing to the eye and the taste buds; Eve was no longer content to stand at Adam's side as the help "meet" or fit for him, but acted as though she had been created the responsible head of the family, confident that through her decision their eyes would be "opened" and they would know good and evil and so be as gods. The thought gave birth to the deed, a decision affecting the welfare of the family and the whole human race. The immediate consequence was shame, fear, and the judgment of God, which came in the form of the direct opposite of what Eve had desired: instead of independence, dependence; instead of leadership, subordination; and in childbearing, sorrow. Adam, for his part, abdicated his God-given "headship." Woman's basic position of subordination to man remains; it stands, never to be revoked. Yet this judgment in no way jeopardizes woman's personal salvation.

For "in the Lord" there is complete and perfect equality. Men and women alike pray the Lord's Prayer and receive the Sacrament; they are "heirs together of the grace of life." There is no such thing as subordinate or lesser grace. The questions that remain concern order, not worth.

Order in the church. When the Lord Jesus chose the Twelve, He called twelve men. When the Holy Supper was instituted, Peter and John were instructed to make the preparations, and no women were even present at that first Supper. When the Levitic priesthood was instituted in Israel, men alone were entrusted with the duties of public worship; and when God raised up prophets, He communicated His Word through men whom He called immediately. In Corinth, a predominantly Greek congregation, members were seizing upon Christian liberty and forgetting basic truths and principles that they should have known. The first abuse was in connection with the celebrating of the Lord's Supper; the second concerned the speaking of women, whose behaviour had jeopardized the edifying of all and had caused disorder. In correcting these abuses St. Paul wrote: "But I would have you know, that the head of every man is Christ; and the head of the woman is the man; and the head of Christ is God" (1 Corinthians 11:3). He grounded this in the facts of creation: "For the man is not of the woman; but the woman is of the man"; yet he added, "neither is the man without the woman, neither the woman without the man, in the Lord" (11:11-12). Custom entered into his instruction: a woman was to cover her head when praying or prophesying, while a man was to pray uncovered; and notice that customs vary, for as a Jew Paul was himself accustomed to men covering their heads while praying, whereas a Greek man prayed bareheaded. In chapter fourteen Paul is consistent in forbidding women to rule in the public assembly of the congregation by preaching: "Let your women keep silence in the churches: for it is not permitted unto them to speak; but they are commanded to be under obedience" (14:34-35). The "speaking" that is prohibited is "prophesying," that is, preaching and teaching in the public assembly; a woman retains the duty, privilege, and responsibility of teaching according to divine order. To Timothy, his representative, Paul wrote: "Let the woman learn in silence with all subjection. But I suffer not a woman to teach, nor to usurp authority over the man, but to be in silence. For Adam was first formed, then Eve. And Adam was not deceived, but the woman being deceived was in the transgression. Notwithstanding she shall be saved in childbearing, if they continue in faith and charity and holiness with sobriety" (1 Timothy 2:11-15). These factors forever deny to woman the position of leadership in the church, but they in no way jeopardize woman's personal salvation.

Order in the home. Immediately preceding his exhortation to wives to submit themselves to their own husbands, Paul wrote, "Submitting yourselves one to another in the fear of God" (Ephesians 5:21). Then: "Wives, submit yourselves unto your own husbands, as unto the Lord. For the husband is the head of the wife, even as Christ is the head of the church... Therefore as the church is subject unto Christ, so let the wives be to their own husbands in every thing. Husbands, love your wives, even as Christ also loved the church, and gave himself for it... he that loveth his wife loveth himself" (Ephesians 5:22-29); and again, "Husbands, love your wives, and be not bitter against them." To be in submission means "to yield, resign or surrender to the power, will or authority of another"; such "submitting" and "obeying" is to be an exercise of self-discipline, part of the "more excellent way" of love that is to be the guiding principle in all interaction. Peter likewise exhorts: "Likewise, ye wives, be in subjection to your own husbands; that, if any obey not the word, they also may without the word be won by the conversation of the wives." The adornment he commends is "the hidden man of the heart, in that which is not corruptible," which is in the sight of God of great price; he assures Christian wives that such behaviour may well be instrumental in winning an unbelieving husband and that it is, in fact, the real beauty of a woman. "Even as Sara obeyed Abraham, calling him lord." And to husbands: "Likewise, ye husbands, dwell with them according to knowledge, giving honour unto the wife, as unto the weaker vessel, and as being heirs together of the grace of life; that your prayers be not hindered" (1 Peter 3:1-7). For men to exercise their rule in an arbitrary manner, without regard for the concerns of the women, is a misuse of the "headship" entrusted to them. Men rule well in their own households and in the church when they themselves "submit" to Christ, who rules through His Word; both men and women above all are to exercise themselves in self-submitting love. Fathers are admonished to bring up their children "in the nurture and admonition of the Lord."

The scope of women's service. Scripture records many women whom God used mightily. The Prophet Joel foretold: "And it shall come to pass afterward, that I will pour out my spirit upon all flesh; and your sons and your daughters shall prophesy" (Joel 2:28-29). Miriam, the sister of Moses and Aaron, was called "a prophetess" (Exodus 15:20); yet when she, seconded by Aaron, challenged the leadership of Moses in Israel, the punishment came immediately. It was during a period of national apostasy, when the men of Israel again and again failed to carry out their God-given responsibilities, that Deborah judged Israel, and the Lord delivered Sisera, captain of the enemy host, into the hands of a second woman, Jael (Judges 4): two women, heroes of faith in the midst of fainthearted men, while Israel enjoyed the spiritual and moral support of Deborah. Elisabeth was filled with the Spirit when Mary came to visit her (Luke 1:41-45), and Mary responded with the outburst of lofty praise that we know as "The Magnificat." In the New Testament the four daughters of Philip possessed the gift of prophecy, but they exercised it privately, not by participating in the public preaching and teaching. Phebe was a deaconess of the church at Cenchrea and appears to have been entrusted with the responsibility of carrying Paul's letter to Rome (Romans 16). Priscilla was the wife of a tentmaker named Aquila; when Paul left Corinth he took them along to Ephesus. There Apollos arrived from Alexandria, "an eloquent man, and mighty in the scriptures," but his knowledge was incomplete. Aquila and Priscilla spotted this lack immediately; they did not embarrass Apollos publicly, but took him to their home and expounded unto him the way of God more perfectly. Lydia, whom Paul met in Philippi, opened her heart and "constrained" Paul and his associates to lodge with her. What a courageous and spirited woman of the Lord! On the other hand, the church at Thyatira was troubled by a false prophetess (Revelation 2), and Nehemiah was opposed by the false prophetess Noadiah (Nehemiah 6:14). So it is that a woman prophesies most naturally and most effectively in the privacy of a home, in a small group, or in a one-to-one situation. Christian women of today certainly have no lack of opportunity for service: in the home, among friends, within the social circle, among fellow employees or business associates, and by the support they give to the preaching of the pure Word of God.

Customs and principles. The basic truth to keep ever in mind is simple: principles remain inflexible, while customs vary. Christian liberty demands that customs remain flexible; absolute truth that changes not demands that principles remain inflexible. So the wearing of hats by women in church has become optional, and in some congregations the women of the choir still wear a simple black beanie, although the significance of it has been forgotten.

Application to church life. What about the proper application of these injunctions to our way of doing things: may women vote in the congregational assembly, serve as delegates to conventions, sit on boards of the church body or on church councils in local congregations, and help decide the so-called business matters of the congregation, its budgetary and property matters, governing principles and policies? The issue has come up in The Lutheran Church–Missouri Synod, and other bodies, such as the United Presbyterian Church U.S.A. (Northern), have moved to ordain women. The modern church has become accustomed to settling matters of doctrine by majority vote at conventions instead of by the Word of God; but doctrine is not subject to vote: it is to be confessed by men, women, and children. "We ought to obey God rather than men" (Acts 5:29). When the men whom He has charged with the responsibility of ruling fail, as the high priests and priests of Israel, grown careless and indifferent, again and again failed, and as heads of households failed to defend, confess, and teach the Name of the Lord, then women, and even children, must challenge, protest, and disobey such rule, if need be correcting error in the confession of the church by withdrawing from the erring body and following a preacher who proclaims the whole of God's truth.

Gender roles in modern society. A gender role is a set of behavioural norms that accompany a given gendered status in a given social group or system; such norms are not universal, vary amongst different cultures, and are constantly changing. For hundreds of years, babies of both sexes wore white dresses until they were six years old. In the early 1900s a woman's role was to do all the house cleaning and to be the prime parent caring for the children; there were clean divisions between the husband's bread-winning role and the wife's housewife/mother role. Women lived in patriarchal societies in which their role and position were subservient, controlled by their fathers, brothers and husbands; they could not make important decisions or run their families as equals. During the Second World War women took men's jobs while the men were away at war. Women's suffrage was secured in the United States when the Nineteenth Amendment was ratified by the thirty-sixth state, Tennessee, on August 26, 1920. As in other Western countries, the role of women underwent many social and legal changes in the 1960s and 1970s, and women entered professions including doctor, lawyer and politician. Feminists argue that inequality persists in the home: Ann Oakley, who researched housewives in 1974, argues against the idea of an emerging "new man," holding that only a minority of men have shared conjugal roles; Madeline Leonard suggests that the gendered division of household labour, with women taking on more "feminine" tasks and men the "masculine" ones such as gardening and taking the rubbish out, suits men because they can exploit the labour power of women; Duncombe and Marsden argue that women take on a "triple shift" of paid work, household tasks, and emotional work with children and husband. One survey of children's pocket money found that for loading and unloading the dishwasher a girl could expect to receive £1.04 while a boy on average could expect £5. A 2013 study showed that only 1 in 8 people still believed that gender roles should be separated along traditional lines. Women remain the primary caretakers of children and elders in every country of the world, and the struggle for recognition of women's cultural roles and achievements, and for their social and political rights, continues.
Ministry of Defence (United Kingdom)
The Ministry of Defence (MOD or MoD) is the British government department responsible for implementing the defence policy set by Her Majesty's Government and is the headquarters of the British Armed Forces.
- Formed: 1 April 1964 (as modern department)
- Headquarters: Main Building, Whitehall, Westminster, London
- Employees: 57,140 civilian staff (May 2018)
- Annual budget: £52 billion, FY 2019–20 (approx. US$69.2 billion)
The MOD states that its principal objectives are to defend the United Kingdom of Great Britain and Northern Ireland and its interests and to strengthen international peace and stability. With the collapse of the Soviet Union and the end of the Cold War, the MOD does not foresee any short-term conventional military threat; rather, it has identified weapons of mass destruction, international terrorism, and failed and failing states as the overriding threats to Britain's interests. The MOD also manages day-to-day running of the armed forces, contingency planning and defence procurement.
During the 1920s and 1930s, British civil servants and politicians, looking back at the performance of the state during the First World War, concluded that there was a need for greater co-ordination between the three services that made up the armed forces of the United Kingdom—the Royal Navy, the British Army and the Royal Air Force. The formation of a united ministry of defence was rejected by David Lloyd George's coalition government in 1921; but the Chiefs of Staff Committee was formed in 1923, for the purposes of inter-service co-ordination. As rearmament became a concern during the 1930s, Stanley Baldwin created the position of Minister for Co-ordination of Defence. Lord Chatfield held the post until the fall of Neville Chamberlain's government in 1940; his success was limited by his lack of control over the existing Service departments and his limited political influence.
Winston Churchill, on forming his government in 1940, created the office of Minister of Defence to exercise ministerial control over the Chiefs of Staff Committee and to co-ordinate defence matters. The post was held by the Prime Minister of the day until Clement Attlee's government introduced the Ministry of Defence Act of 1946 (see Ministry of Defence (1947–64)). From 1946, the three posts of Secretary of State for War, First Lord of the Admiralty, and Secretary of State for Air were formally subordinated to the new Minister of Defence, who held a seat in the Cabinet. These three service ministers (Admiralty, War and Air) remained in direct operational control of their respective services, but ceased to attend Cabinet.
From 1946 to 1964 five Departments of State did the work of the modern Ministry of Defence: the Admiralty, the War Office, the Air Ministry, the Ministry of Aviation, and an earlier form of the Ministry of Defence. These departments merged in 1964; the defence functions of the Ministry of Aviation Supply merged into the Ministry of Defence in 1971.
Ministers

- The Rt Hon. Ben Wallace MP – Secretary of State: overall responsibility for the department and its strategic direction.
- The Rt Hon. Mark Lancaster TD MP – Minister of State for the Armed Forces: operations and operational legal policy; force generation (including exercises); manning, recruitment and retention of regulars; cyber; Permanent Joint Operating Bases; Northern Ireland; international defence engagement; Africa and Latin America; operational public inquiries, inquests, safety and security.
- The Rt Hon. The Baroness Goldie DL – Minister of State for Defence (unpaid): department spokesperson in the House of Lords; commemorations and ceremonies; Efficiency Programme; EU relations, including Brexit; lawfare; ceremonial duties, medallic recognition and protocol policy and casework; engagement with retired senior defence personnel and wider opinion formers; community engagement; arms control and proliferation, including export licensing; UK Hydrographic Office; Statutory Instrument Programme; Australia and the Far East; defence fire and rescue; London estate; Defence Medical Services; museums and heritage; ministerial correspondence and PQs.
- Anne-Marie Trevelyan MP – Minister for Defence Procurement: equipment plan delivery; the nuclear enterprise; Defence Equipment and Support reform; defence exports; innovation; science and technology (including Dstl); information and computer technology; the Gulf; the Single Source Regulations Office; Scotland and Wales.
- Johnny Mercer MP – Minister for Defence People and Veterans: civilian and service personnel policy; veterans policy, including resettlement, transition, charities and the Veterans Board; Armed Forces People Programme; mental health; DIO better defence estate; armed forces pay, pensions and compensation; Armed Forces Covenant; service justice; welfare and service families; youth and cadets; security and safety, including vetting (non-operations); inquiries and inquests (operations and non-operations); environment and sustainability; equality, diversity and inclusion. Works with the Cabinet Office.
Senior military officials
Chiefs of the Defence Staff
The Chief of the Defence Staff (CDS) is supported by the Vice-Chief of the Defence Staff (VCDS), who deputises for him and is responsible for the day-to-day running of the armed-services aspect of the MOD through the Central Staff, working closely alongside the Permanent Secretary. They are joined by the professional heads of the three British armed services (the Royal Navy, the British Army and the Royal Air Force) and by the Commander of Strategic Command (known as Joint Forces Command until December 2019). All hold four-star appointments at OF-9 rank in the NATO rank system.
Together the Chiefs of Staff form the Chiefs of Staff Committee with responsibility for providing advice on operational military matters and the preparation and conduct of military operations.
- Chief of the Defence Staff – General Sir Nick Carter
- Vice-Chief of the Defence Staff – Admiral Timothy Fraser
- First Sea Lord and Chief of the Naval Staff – Admiral Tony Radakin (Head of the Royal Navy)
- Chief of the General Staff – General Sir Mark Carleton-Smith (Head of the British Army)
- Chief of the Air Staff – Air Chief Marshal Mike Wigston (Head of the Royal Air Force)
- Commander of Strategic Command – General Patrick Sanders
Other senior military officers
- Chief of Defence People – Lieutenant General Richard Nugee
- Deputy Chief of Defence Staff (Military Strategy and Operations) – Lieutenant-General Douglas Chalmers
- Deputy Chief of Defence Staff (Military Capability) – Air Marshal Richard Knighton
- Chief of Joint Operations - Vice-Admiral Ben Key
- Defence Senior Adviser Middle East - Lieutenant-General Sir John Lorimer
Additionally, there are a number of Assistant Chiefs of Defence Staff, including the Defence Services Secretary in the Royal Household of the Sovereign of the United Kingdom, who is also the Assistant Chief of Defence Staff (Personnel).
Permanent Secretary and other senior officials

The Ministers and the Chiefs of the Defence Staff are supported by a number of civilian, scientific and professional military advisers. The Permanent Under-Secretary of State for Defence (generally known as the Permanent Secretary) is the senior civil servant at the MOD. Their role is to ensure that the MOD operates effectively as a government department, with responsibility for its strategy, performance, reform, organisation and finances. The post-holder works closely with the Chief of the Defence Staff in leading the organisation and supporting Ministers in the conduct of business across the department's full range of responsibilities.
- Permanent Under-Secretary of State – Stephen Lovegrove
- Director General Finance – Cat Little
- Director General Head Office and Commissioning Services – Julie Taylor
- Director General Nuclear – Vanessa Nicholls
- Director General Security Policy – Angus Lapsley
- MOD Chief Scientific Adviser – Simon Cholerton, to be replaced by Dame Angela McLean
- MOD Chief Scientific Adviser (Nuclear) – Professor Robin Grimes
- Lead Non-Executive Board Member – Sir Gerry Grimstone
- Non-Executive Defence Board Member and Chair of the Defence Audit Committee – Simon Henry
- Non-Executive Defence Board Member and Chair of the Defence Equipment and Support Board – Paul Skinner
- Non-Executive Defence Board Member and Chair of the People Committee – Danuta Gray
The Strategic Defence Review 1998 set out defence planning assumptions, including:
- The ability to support three simultaneous small- to medium-scale operations, with at least one as an enduring peace-keeping mission (e.g. Kosovo). These forces must be capable of representing Britain as lead nation in any coalition operations.
- The ability, at longer notice, to deploy forces in a large-scale operation while running a concurrent small-scale operation.
The MOD has since been regarded as a leader in elaborating the post-Cold War organising concept of "defence diplomacy". As a result of the Strategic Defence and Security Review 2010, Prime Minister David Cameron signed a 50-year treaty with French President Nicolas Sarkozy that would have the two countries co-operate intensively in military matters. The UK is establishing air and naval bases in the Persian Gulf, located in the UAE and Bahrain. A presence in Oman is also being considered.
The Strategic Defence and Security Review 2015 included £178 billion investment in new equipment and capabilities. The review set a defence policy with four primary missions for the Armed Forces:
- Defend and contribute to the security and resilience of the UK and Overseas Territories.
- Provide the nuclear deterrent.
- Contribute to improved understanding of the world through strategic intelligence and the global defence network.
- Reinforce international security and the collective capacity of our allies, partners and multilateral institutions.
- Support humanitarian assistance and disaster response, and conduct rescue missions.
- Conduct strike operations.
- Conduct operations to restore peace and stability.
- Conduct major combat operations if required, including under NATO Article 5.
Following the end of the Cold War, the threat of direct conventional military confrontation with other states has been replaced by terrorism. In 2009, Sir Richard Dannatt, then head of the British Army, predicted that British forces would be involved in combating "predatory non-state actors" for the foreseeable future, in what he called an "era of persistent conflict". He told the Chatham House think tank that the fight against al-Qaeda and other militant Islamist groups was "probably the fight of our generation".
Dannatt criticised a remnant "Cold War mentality", with military expenditure based on retaining a capability against a direct conventional strategic threat. He said that only 10% of the MOD's equipment programme budget between 2003 and 2018 was to be invested in the "land environment" – at a time when Britain was engaged in land-based wars in Afghanistan and Iraq.
The Defence Committee's Third Report, "Defence Equipment 2009", cites an article from the Financial Times website stating that the Chief of Defence Materiel, General Sir Kevin O'Donoghue, had instructed staff within Defence Equipment and Support (DE&S), through an internal memorandum, to re-prioritise the approvals process over the next three years to focus on supporting current operations; deterrence-related programmes; programmes that reflect contractual or international defence obligations; and programmes where production contracts were already signed.

The report also cites concerns over potential cuts in the defence science and technology research budget; the implications of inaccurate estimates of defence inflation within budgetary processes; underfunding in the equipment programme; and, more generally, the difficulty of striking the right balance between a short-term focus on current operations and the long-term consequences of failing to invest in future UK defence capabilities for future combatants and campaigns.

The then Secretary of State for Defence, Bob Ainsworth MP, reinforced this re-prioritisation towards current operations and did not rule out "major shifts" in defence spending. In the same article, the First Sea Lord and Chief of the Naval Staff, Admiral Sir Mark Stanhope, acknowledged that there was not enough money within the defence budget and that the service was preparing itself for tough decisions and potential cutbacks. According to figures published by the London Evening Standard, the defence budget for 2009 was "more than 10% overspent" (figures cannot be verified), which the paper said had led Gordon Brown to say that defence spending must be cut. The MOD has been investing in IT to cut costs and improve services for its personnel. As of 2017, there was concern that defence spending may be insufficient to meet defence needs.
Governance and departmental organisation
Defence is governed and managed by several committees.
- The Defence Council provides the formal legal basis for the conduct of defence in the UK through a range of powers vested in it by statute and Letters Patent. It too is chaired by the Secretary of State, and its members are ministers, the senior officers and senior civilian officials.
- The Defence Board is the main MOD corporate board. Chaired by the Secretary of State, it oversees the strategic direction of defence, supported by an Investment Approvals Committee, Audit Committee and People Committee. The board's membership comprises the Secretary of State, the Armed Forces Minister, the Permanent Secretary, the Chief and Vice-Chief of the Defence Staff, the Chief of Defence Materiel, the Director General Finance and three non-executive board members.
- Head Office and Corporate Services (HOCS), which is made up of the Head Office and a range of corporate support functions. It has two joint heads, the Chief of the Defence Staff and the Permanent Secretary, who are the combined TLB holders for this unit and are responsible for directing the other TLB holders.
Top level budgets
The MOD comprises seven top-level budgets. The head of each organisation is personally accountable for the performance and outputs of their particular organisation.
- Head Office and Corporate Services (HOCS)
- Defence Infrastructure Organisation (DIO)
- Defence Nuclear Organisation
Bespoke trading entity
- Defence Electronics and Components Agency (DECA)
- Defence Science and Technology Laboratory (Dstl)
- UK Hydrographic Office (UKHO) – also has trading fund status.
- Submarine Delivery Agency (SDA) – created in April 2017 and to be fully functional by April 2018.
Executive non-departmental public bodies
- National Museum of the Royal Navy
- National Army Museum
- Royal Air Force Museum
- Single Source Regulations Office (SSRO)
Advisory non-departmental public bodies
- Advisory Committee on Conscientious Objectors
- Advisory Group on Military Medicine
- Armed Forces Pay Review Body
- Defence Nuclear Safety Committee
- Independent Medical Expert Group
- National Employer Advisory Board
- Nuclear Research Advisory Council
- Scientific Advisory Committee on the Medical Implications of Less-Lethal Weapons
- Veterans Advisory and Pensions Committees
Ad-hoc advisory group
- Central Advisory Committee on Compensation
- Commonwealth War Graves Commission
- Defence Academy of the United Kingdom
- Defence Sixth Form College
- Defence and Security Media Advisory Committee
- Fleet Air Arm Museum
- Independent Monitoring Board for the Military Corrective Training Centre (Colchester)
- Reserve Forces' and Cadets' Associations
- Royal Hospital Chelsea
- Royal Marines Museum
- Royal Navy Submarine Museum
- Service Complaints Ombudsman
- Service Prosecuting Authority
- United Kingdom Reserve Forces Association
In addition, the MOD is responsible for the administration of the Sovereign Base Areas of Akrotiri and Dhekelia in Cyprus.
The Ministry of Defence is one of the United Kingdom's largest landowners, owning 227,300 hectares of land and foreshore (either freehold or leasehold) as of April 2014, valued at "about £20 billion". The MOD also has "rights of access" to a further 222,000 hectares. In total, this is about 1.8% of the UK land mass. The total annual cost of supporting the defence estate is "in excess of £3.3 billion".
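As a quick sanity check of the arithmetic above, the owned and access holdings can be combined and compared against the UK land area. The UK land-mass baseline used here (roughly 24.4 million hectares) is an assumption for illustration, not a figure from this article.

```python
# Sanity-check the "about 1.8% of the UK land mass" figure quoted above.
OWNED_HA = 227_300        # land and foreshore held freehold or leasehold (April 2014)
ACCESS_HA = 222_000       # additional "rights of access"
UK_LAND_HA = 24_400_000   # assumed approximate UK land area in hectares

share = (OWNED_HA + ACCESS_HA) / UK_LAND_HA
print(f"MOD estate as share of UK land mass: {share:.1%}")  # ≈ 1.8%
```

The result lands on the article's quoted 1.8%, which suggests the "in total" figure includes both owned land and rights of access.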
The defence estate is divided into training areas & ranges (84.0%), research & development (5.4%), airfields (3.4%), barracks & camps (2.5%), storage & supply depots (1.6%), and other (3.0%). It is largely managed by the Defence Infrastructure Organisation.
The headquarters of the MOD is in Whitehall and is known as MOD Main Building. The structure is neoclassical in style and was originally built between 1938 and 1959 to designs by Vincent Harris to house the Air Ministry and the Board of Trade. A major refurbishment of the building was completed under a Private Finance Initiative contract by Skanska in 2004. The northern entrance in Horse Guards Avenue is flanked by two monumental statues, Earth and Water, by Charles Wheeler. Opposite stands the Gurkha Monument, sculpted by Philip Jackson and unveiled in 1997 by Queen Elizabeth II. Within the building is the Victoria Cross and George Cross Memorial, and nearby are memorials to the Fleet Air Arm and RAF (to its east, facing the riverside).
Henry VIII's wine cellar at the Palace of Whitehall, built in 1514–1516 for Cardinal Wolsey, is in the basement of Main Building, and is used for entertainment. The entire vaulted brick structure of the cellar was encased in steel and concrete and relocated nine feet to the west and nearly 19 feet (5.8 m) deeper in 1949, when construction was resumed at the site after the Second World War. This was carried out without any significant damage to the structure.
Fraud conviction
The most notable fraud conviction has been that of Gordon Foxley, Director of Ammunition Procurement at the Ministry of Defence from 1981 to 1984. Police claimed he received at least £3.5m in total in corrupt payments, including substantial bribes from overseas arms contractors aiming to influence the allocation of contracts.
Germ and chemical warfare tests
A government report covered by The Guardian newspaper in 2002 indicated that between 1940 and 1979 the Ministry of Defence "turned large parts of the country into a giant laboratory to conduct a series of secret germ warfare tests on the public", and that many of these tests "involved releasing potentially dangerous chemicals and micro-organisms over vast swathes of the population without the public being told." The Ministry of Defence claims that these trials were to simulate germ warfare and that the tests were harmless. However, families in the areas of many of the tests have reported children with birth defects and physical and mental disabilities, and many are asking for a public inquiry. The report estimated that these tests affected millions of people, including one period between 1961 and 1968 when "more than a million people along the south coast of England, from Torquay to the New Forest, were exposed to bacteria including e.coli and bacillus globigii, which mimics anthrax." Two scientists commissioned by the Ministry of Defence stated that these trials posed no risk to the public. This was confirmed by Sue Ellison, a representative of the Defence Science and Technology Laboratory at Porton Down, who said that the results from these trials "will save lives, should the country or our forces face an attack by chemical and biological weapons." Asked whether such tests were still being carried out, she said: "It is not our policy to discuss ongoing research." It is unknown whether the harmlessness of the trials was known at the time they were conducted.
Chinook HC3 helicopters
"...the most incompetent procurement of all time...might as well have bought eight turkeys."
The MOD was criticised for spending £240m on eight Boeing Chinook HC3 helicopters which only began to enter service in 2010, many years after they were ordered in 1995 and delivered in 2001. A National Audit Office report revealed that the helicopters had been stored in air-conditioned hangars in Britain since their 2001 delivery, while troops in Afghanistan were forced to rely on helicopters flying with safety faults. By the time the Chinooks are airworthy, the total cost of the project could be as much as £500m.
Territorial Army cuts
In October 2009, the MOD was heavily criticised for withdrawing the £20m biannual non-operational training budget for the Territorial Army (TA), ending all non-operational training for six months until April 2010. The government eventually backed down and restored the funding. The TA provides a small percentage of the UK's operational troops; its members train on weekly evenings and monthly weekends, as well as on two-week exercises held generally annually, and occasionally twice a year for troops doing other courses. The cuts would have meant a significant loss of personnel and would have had adverse effects on recruitment.
In 2013 it was found that the Ministry of Defence had overspent its equipment budget by £6.5bn, with orders that could take up to 39 years to fulfil. The MOD has been criticised in the past for poor management and financial control, and for investing in projects that have taken as long as 10 to 15 years to deliver.
The word “car” means different things to different people. For most, it encompasses many types, makes, models, and brands of vehicles.
Merriam-Webster defines the word car as a vehicle moving on wheels. Therefore, vehicles like vans, pickup trucks, SUVs, and even tractors could be considered cars.
Types of Cars
Generally speaking, even those who do not know too much about vehicles themselves still separate cars, trucks, vans, and SUVs into different categories. As a result, these are the most common types of cars broken down by their general perception.
Many people use the term family car to denote whichever vehicle they use the most whenever they go out with their family. Most “family cars” are minivans or SUVs, but most people, including reviewers and automotive publications, use the four-door, midsize sedan as the standard family car.
They are usually affordable, reliable, practical, and large enough to comfortably seat two adults and two children. Common examples include the Toyota Camry, Honda Accord, and Ford Fusion. Both smaller cars, like the Toyota Corolla and Honda Civic, and larger cars, like the Ford Taurus and Toyota Avalon, are sometimes also considered family cars.
Sports cars can include a wide range of cars depending on who you ask. For many, sports cars are simply sporty-looking vehicles that have higher perceived performance than other vehicles.
For others, sports cars are two-door vehicles that are built more for fun than practicality. Some even classify sports cars as those they perceive to be “fast.” Regardless of how sports cars are perceived, they have a much wider functional definition to the average consumer than many other types of cars.
Convertibles are cars with a roof that is not fixed in place. Convertible SUVs have been made, but few people think of a convertible as anything other than a car. Most classify convertibles as a type of sports car.
Luxury cars are usually synonymous with German brands such as Mercedes-Benz, BMW, and Audi. This is because they often offer higher levels of technology, more available features, and higher quality materials, thus increasing the initial purchasing cost.
Other brands like Lexus, Infiniti, and Acura are also sometimes seen as luxury brands, but they are not perceived to be on the same level as the aforementioned German competitors.
Supercars are not always considered distinct from sports cars, but the term is sometimes used to describe a sports car that offers something beyond a commonly seen sports car. Ferrari and Lamborghini are the two makes the general public most commonly perceives as supercars.
Station wagons are much harder to find on the market today than they used to be. Most remember them as large, long cars that were similar to what minivans are today, but lower to the ground.
Typical classic station wagons used to have two or three rear-facing seats in what would be the cargo area. This usage of additional passenger space has long since disappeared due to safety regulations, and station wagons have mostly been replaced by minivans and SUVs.
Hatchbacks are commonly seen as shortened station wagons, as small cars with some added practicality, as sportier versions of a regular four-door car with a trunk, or as all three. Many SUVs are essentially taller hatchbacks with more overall practicality, which is why hatchback cars are not as common as they once were.
The performance car category is similar to how many classify sports cars, but it encompasses potentially more actual types of cars than any other general category. Most people associate performance with engine power, and as a result, classify performance cars as those with a substantial amount of speed or horsepower.
The perception of a classic car can vary between individuals depending on how old the individual is. The older the car, the more chance it has to be considered a classic by more people. Classic cars can include any type of car.
Types of Cars: Body Styles
Outlining stereotypes, generalizations, and perceptions about what types of cars exist can help people understand how many types of cars exist and how people see them.
Defining types of cars in a more technical way can help classify cars by narrowing down the broad ranges of perception that exist. The body style is the most general technical category in which a car can be defined, and it is a classification of the overall shape of a vehicle or how it looks.
There are between five and ten different body styles for cars depending on the source of information, five of which are foundational and can be found across nearly all sources of information.
Sedans are defined by two main things: a fixed roof and a three-box design. A three-box design separates the passenger compartment, engine compartment, and storage compartment, or trunk.
Most have four doors, though they can have two, which is less common on modern cars than it once was, and most have a clearly defined trunk.
Common examples of sedans include the Toyota Camry, Ford Fusion, Honda Accord, and Chevrolet Malibu.
Coupes are less well-defined than sedans but are often considered the sedan's counterpart. They have a fixed roof, a roofline that slopes toward the rear of the vehicle, and two doors.
A recent design trend has attempted to marry the sedan and coupe body styles by giving sedans an exaggerated sloping roofline, thus mimicking a coupe. Some manufacturers have even given their sedans the coupe name to emphasize this trend.
Coupes are generally designed to be sportier than sedans and many sedans are manufactured alongside coupe versions of themselves. Because of this, coupes that have four or five-passenger seating often have elongated doors to allow easier access to the rear seats.
Common examples of the coupe body style include the Honda Civic Coupe, Subaru BRZ, Chevrolet Camaro, and BMW 2 Series Coupe.
A hatchback is a car with a rear hatch door that opens upward from its hinge points on the roof. Hatchbacks have a fixed roof and can have two or four doors. They usually have a two-box design, which means that the engine compartment is separated from the passenger and cargo compartments which are shared.
Like coupes, hatchbacks often have sedan versions of themselves sold by the same manufacturer and are often designed to be sportier than comparable sedans.
Examples of hatchback cars include the Toyota Corolla Hatchback, Honda Fit, Volkswagen Golf, and Kia Soul.
Station wagons are similar to hatchbacks, but they are usually longer, with the canopy extending out over the rear wheels. They have fixed roofs, most often have four doors, and usually feature a two-box design.
Whereas hatchbacks are comparable to most average SUVs, station wagons are more comparable to minivans. The roof is extended, making the cargo area larger, and the rear hatch is usually much less angled than in smaller hatchbacks. This often means the rear hatch hinge points are behind the rear wheels.
Examples of station wagons include the Volkswagen Golf Alltrack, Audi A4 Allroad, and Volvo V90.
The convertible is the most universally recognized type of car, whether or not people realize it is classified as a body style. Convertibles can have a hardtop or soft-top retractable roof, which may or may not fold electronically.
Most convertibles have two doors with only a few having four. Some seat two passengers, but most offer the possibility to seat four. The ones that do have four seats often have extremely limited rear-seat room.
Convertibles used to be more prevalent than they are today, in part because they are often impractical.
Good examples of convertibles include the Mazda Miata, Ford Mustang, and Mercedes-Benz SL-Class.
As with these examples, people often associate sports cars with convertibles, but not all convertibles are sports cars. Many “regular” cars are offered with a convertible variant such as the BMW 4 Series, Mini Cooper, and Volkswagen Beetle.
Convertibles are the last of the most commonly recognized body styles. The following are additional styles sometimes classified separately, but they are much less common or overlap so much with other body styles that it is hard to justify a standalone classification.
The sports car body style does not exist so much by itself as within other, more prominent body styles. This is because a sports car is defined more by a combination of performance, design intention, drivetrain layout, and other internal functional elements than by a general visual style.
It is a hard body style to categorize: the Toyota 86, for instance, is both a classic example of the coupe body style and a sports car. As a result, the sports car more easily lives within several different body styles as a substyle.
Other good examples of sports cars include the Nissan 370Z, Fiat 124 Spider, and Audi TT RS.
Limousines are another very recognizable body style, but they are not usually seen as personal vehicles. They are usually elongated versions of luxury sedans, though limousine SUVs are becoming more common. Limousines can be classified by the partition separating the driver – usually a hired chauffeur – and the passengers in the rear seat area.
The ute body style is associated with only a handful of cars. Sometimes referred to as a coupe utility, it is essentially a car with a pickup truck bed and two doors.
The Subaru Baja was classified by some as a ute, as it was based on the Outback station wagon, but others considered it more akin to a small SUV than a car.
The Chevy El Camino is the most popular example of the style, but it is no longer manufactured. Holden, General Motors' recently discontinued Australian subsidiary, made a muscle-car-based ute as one of its flagship vehicles.
Except for the Holden Ute, most of the cars that carried this body style have been commercial failures, as they are much less practical than pickup trucks. A recent example is the Chevrolet SSR, which was meant to be something like a modern El Camino. It had a convertible top and a large, powerful engine, meaning it could potentially also be classified as a performance car.
Types of Cars: Subtypes
Body styles are a good way to identify different types of cars, but there are many more specific ways in which a car can be described. These subtypes give more information about what the car is, what it looks like, and even what it does.
With the rising popularity of SUVs in the past several years, the sedan market has taken a hit, yet they are usually more practical than other types and visually represent the most general idea of what a car is.
Sometimes referred to as “saloons,” sedans are still the most common type of car on the market, and there are several main categories into which they can fall.
The EPA defines a subcompact car as one that has a combined interior and cargo volume of between 85 and 99 cubic feet. These numbers may not be immediately obvious to those without a tape measure, but subcompact sedans are usually the smallest sedans available at any given dealership.
Examples include the Nissan Versa, Hyundai Accent, Kia Rio, and Toyota Yaris.
Compact sedans are a step up from the subcompact segment. By definition, they have a combined interior and cargo volume of between 100 and 109 cubic feet.
Some of the best-selling cars in history are part of the compact sedan segment, and it is the second best-selling sedan segment in the current market.
Examples include the Toyota Corolla, Honda Civic, Subaru Impreza, and Volkswagen Jetta.
Midsize sedans are the king of the sedan world. They include some of the most popular and best-selling vehicles of all time. Even though sales have declined, the midsize sedan segment still contains several of the top 25 best-selling vehicles to date in the United States.
To achieve midsize status, cars like the Toyota Camry, Honda Accord, Nissan Altima, and Chevrolet Malibu have to have a total of 110 to 119 cubic feet of combined cargo and interior space.
The full-size sedan is currently the least popular and least populated segment of the sedan subtypes. Interior and cargo space must exceed 120 cubic feet, and current examples include the Chrysler 300, Chevrolet Impala, Toyota Avalon, and Nissan.
To this point, size has been the differentiating factor between sedan subtypes, because the sedan is very clearly defined as a type of car.
Luxury or Executive sedans do not have a size requirement, specific guidelines, or a specific point in which they become more luxurious than utilitarian. For a sedan to obtain luxury status, it must generally achieve a few extra goals:
1. Comfort: A luxury sedan must be generally more comfortable in which to travel than other sedans. Comfort encompasses size, features, and technology; therefore, luxury sedans are usually full-size sedans with high-end technology and features.
2. Performance: Luxury sedans are not always meant to go fast, but because they are bigger and heavier as a result of the extra features and technology, they usually have a larger engine to help get them going.
3. Pedigree: It may not seem fair, but established luxury automakers are the only ones who can truly produce luxury vehicles. Because of public perception, automakers like Kia and Hyundai who produce luxury sedans – but who primarily and traditionally have produced affordable and popular entry-level vehicles – are often seen as having second-rate options to the likes of BMW, Mercedes-Benz, and Audi.
4. Price: Luxury sedans are usually more expensive than entry-level sedans because of the number of features and technology offered. Recently, luxury automakers have begun to produce more entry-level luxury sedans which are smaller, more affordable cars from traditional luxury brands. Even though these are smaller and less feature-rich than regular luxury offerings, they still usually carry a higher starting price than a comparable sedan from a more affordable automaker.
Examples of luxury or executive sedans include the BMW 7 Series, Mercedes-Benz S-Class, Volvo S90, and Genesis G90.
Because the coupe is more loosely-defined than a sedan, they are not defined as much for their size as the sedan segment is. This is not always the case as coupes can come in many sizes, but most coupes would fall under the subcompact or compact size scale that sedans follow.
Coupes used to be more prominent in the market than they are today. Popular sedans many years ago often had a coupe version of themselves available including the Toyota Camry, Honda Accord, and Nissan Altima.
Today, if coupe versions of sedans exist, they are usually based on the compact segment, a good example of this being the Honda Civic. Coupes that are not based on another model are often considered to be sports cars.
The technical definition of a coupe has changed several times throughout history. Currently, the definition only includes a fixed roof and a sloping roofline, though it used to include more stipulations. Even though the coupe body style contains a wider array of sub-designs and is more loosely-defined than sedans, there are still some categories in which they can fall.
The traditional coupe follows the closest original meaning of the coupe body style. It is a car with a fixed roof, two doors, a three-box design, and a sporty appearance.
The Subaru BRZ, Toyota Supra, and Dodge Challenger are good examples of the coupe style. All have two doors, a trunk, a limited rear seat area, and sporty characteristics.
Notchback is another term for a traditional coupe, though the term was much more historically common than it is now. It means that the rear window and the trunk form a prominent angle as opposed to forming an uninterrupted slope from top to bottom. In other words, the rear window and trunk area are distinguishable from one another looking at the car from the side.
A fastback is a coupe where the rear window and trunk form a relatively uninterrupted line from the roof to the rear bumper. The Nissan 370Z is a good example of the fastback style though the Ford Mustang is known for being marketed as a fastback in the past.
Fastback coupes can be difficult to define as they technically also fit within the parameters of the hatchback body style. Because of the historical marketing of the fastback car, specifically with the Ford Mustang and other similar vehicles like the Chevrolet Camaro and Pontiac Firebird, two-door cars like these more comfortably fit within the coupe body style than the hatchback body style.
Historically, coupes have almost always exclusively been two-door vehicles. With the latest design trends, the inclusion of four-door vehicles into the coupe body style has brought up a debate.
Starting with several traditional luxury manufacturers and trickling down to many others, numerous four-door sedans have been styled with a roofline more akin to a coupe than a traditional sedan. Marketing efforts have been made to create this new style of “sedan coupe.”
The best example of a true four-door coupe is the Mazda RX-8. It is no longer in production, but it was a notchback that had a three-box design and two rear half suicide doors to allow rear-seat passengers easier access. The car was odd, but besides the rear door situation, the RX-8 almost perfectly followed the technical definition of what a coupe should be.
Similar to coupes, hatchbacks are not defined as much by their size as they are by other factors, some of which being performance and hatch angle.
They have well-defined parameters governing their body style, so there is not as much crossover between styles nor as many types of hatchbacks as there are with coupes. Hatchbacks are not as popular as sedans even though they offer more practicality because of their increased cargo capacity.
Microcars, sometimes referred to as city cars, are not always hatchbacks. The majority are hatchbacks because they are so small that the upright rear of the vehicle provides as much extra space as possible. These types of hatchbacks are far less common in the United States than they are in other countries where the roads are better-suited for smaller vehicles.
They are essentially miniature versions of the smallest hatchbacks available in the United States market. Though they are not popular, the best examples of microcars are the Smart ForTwo and Fiat 500.
Hatchbacks are usually considered to be sportier than sedans, but hot hatchbacks seek to combine the practicality of a hatchback with the performance of a sports car.
Hot hatches used to be much more popular during the 1980s and 1990s, but they have since waned in popularity. Very few hot hatchbacks are standalone models.
The Honda Civic Type R, Hyundai Veloster N, and the Volkswagen Golf GTI – the car that originally kicked off the “hot hatch” movement – are excellent examples of the style.
Both the terms liftback and sportback are unofficial types of hatchbacks that relate to the rake of the rear window. Both are also used mainly for marketing terminology and neither carry a hard definition. The rear hatch and window in a liftback are very upright, resulting in a hatchback with a boxy shape. Examples of liftback hatchbacks include the Kia Soul and Honda Fit.
Sportback is a more recent term, but it refers to a much more angled rear window and hatch to give the vehicle a streamlined and elongated look. It is also used frequently as a term for a four-door coupe.
Many of the largest hatchbacks are sometimes referred to as sportbacks including the Tesla Model S, Audit A7, and Kia Stinger. Some even consider smaller vehicles like the Toyota Prius and Hyundai Veloster to be sportbacks.
Stations wagons can also be called estate cars in foreign markets. The difference between hatchbacks and station wagons can be subtle, but the one main factors are the number of pillars and windows a wagon has as opposed to a hatchback.
Hatchbacks usually contain an A, B, and C pillar which allows for a front passenger and a rear passenger window. Station Wagons usually have an A, B, C, and D pillar which allows for a front passenger window, a rear passenger window, and a cargo area window.
Some hatchbacks do have four pillars and a small cargo window like a station wagon would, but the cargo area windows in station wagons usually extend much farther toward the end of the car and allow wagons to have a more upright stance.
Unlike hatchbacks and coupes, station wagons are a fairly one dimensional kind of a car. There is a type of station wagon that is unconventional called the shooting brake.
It is up for debate whether or not the shooting brake is actually a type of hatchback, a station wagon, or if it is a separate body style, and it comes down to where you look for information.
The shooting brake is basically a station wagon without the rear doors – a wagonette. It still has an elongated rear window that covers the cargo area, so it is about the size of a regular hatchback. Even though the shooting brake has only three pillars like a hatchback, the lengthy rear window places it under the station wagon subtype.
These cars are very uncommon, but one of the most notable examples of this style was the BMW Z3 M Coupe – ironically named. The car was nicknamed the clown shoe because of its appearance. A more modern example of the style is the Ferrari FF.
Convertibles are a straightforward type of car – they do not have a fixed roof. However, there are more subtypes of the convertible than many realize.
Convertibles can have three main types of retractable or removable roofs.
Soft tops are roofs usually made of cloth or leather in older vehicles. They may be electronically or manually operated.
A hardtop is a removable or retractable folding roof made of metal or other hard material. Hardtops are almost always electronically operated unless the top is removable. They offer better protection from the heat, the cold, and accidents.
A “T-top” consists of two removable panels above the driver and passenger seat with a bar connecting the middle of the windshield and a roll bar or body panel behind the front seats. A convertible’s roofing does not define any type of convertible sub-style but any type of convertible can have several roofing options.
Cabriolet is the French word for a convertible, and it is often used interchangeably with convertible to mean a car with a foldable roof.
Cabriolets are like the sedans of the convertible world. Few convertibles are very practical because of the space used for the retractable top, but cabriolets often have a small back seat. Rather than sheer performance, the cabriolet focuses more on the open-air aspect of driving.
The Mercedes-Benz C Class, the Buick Cascada, and the Volkswagen Beetle convertible are all examples.
Roadsters or Spiders are two-seat convertibles. They are like a coupe would be to a sedan in that coupes can be derived from sedans but are usually designed to offer more sportiness.
Roadsters and spiders are the same way when compared to the average cabriolet. These are usually the convertibles that cross the line between a sports car and a convertible, and they often emphasize the performance of a vehicle more than the typical convertible.
A Targa top, sometimes referred to as a semi-convertible, contains a removable roof panel that sits between the top of the windshield and a roll bar that spans from either side of the car behind the front seats. A rear window encloses the area behind the roll bar.
Targa tops function similarly to sunroofs in normal cars if the sunroof did not have sides.
A classic example of a Targa top is the Porsche 911 with a more recent example being the C5 and C6 Chevrolet Corvette.
The Barchetta is a little-known type of convertible that is uncommon. It is a convertible that has no roof at all – it does not come with a top. Early examples included small race cars from Ferrari and Abarth that barely had windshields.
More modern examples include the Lamborghini Murcielago Barchetta and Ferrari 550 Barchetta Pininfarina.
Even though sports cars can include cars within several different body styles, they are a universally recognized type of car, even though each person may define “sports car” a little differently than the other. Because it is so broad of a category, there are some generally recognized types of sports cars that can be sorted to make sports cars a little easier to understand.
In their broadest sense, a sports car can come from any body style or cross the lines between several body styles. When defining a sports car more carefully, the majority come from the coupe, convertible, or hatchback body styles.
True Sports Car
True sports cars are the odd ones that do not fit anywhere else, but because of the number of cars available both today and in the past, and because so many styles have evolved over the years, they are the sports cars that have remained closest to the traditional definition of a sports car.
Sports cars have traditionally been simple, small, nimble cars that have a front-engine layout and rear-wheel-drive. So many people think of speed and power when they think of sports cars, but true sports cars perform because they are set up to handle better than other cars rather than outpace them.
The Mazda Miata is the perfect example of a modern-day true sports car as it was built to be a Japanese version of the traditional British sports cars of the 60s, 70s, and 80s.
Grand Touring Car
Grand touring cars are built with both performance and comfort in mind. They are often rear-wheel-drive and have large, powerful engines, but they also have many luxury features and provide occupants with a comfortable ride.
Grand tourers can be thought of as sports cars with enough power to show off, burn rubber, and head to the track but that is more comfortable cruising down the twisting country roads on a weekend drive.
Examples include the Bentley Continental GT, Aston Martin Vanquish, Maserati GranTurismo, and Lexus LC500.
Most people either love muscle cars or hate them. They are traditionally known as hugely powerful cars that are loud, annoying, and ill-handling. This used to be the case with traditional muscle cars of the 60s and 70s, but times have changed.
Muscle cars have become slightly more refined and can handle better than they used to, but they still offer huge amounts of performance, usually at a fairly reasonable price.
Most muscle cars come from American manufacturers like the Dodge Challenger, Chevrolet Camaro, and Ford Mustang, but some European cars – several of which are made by Mercedes-Benz AMG – have obtained muscle car status due to their enormous engines, horsepower figures, and torque output.
Supercars, sometimes called exotic cars, do not have an exact definition. The most consistent factor in their broad definition is that they are high-performance cars. Usually, this just means that they are sports cars with more of just about everything in terms of performance numbers and outcomes.
They are usually rarer, more expensive, and more luxurious than normal sports cars, and may come in a variety of different drivetrain layouts. Because they are set up to perform at the highest level, their engines are mid-mounted for handling balance, but this is not always the case.
Examples include the Audi R8 V12, McLaren MP4 12C, Mercedes AMG GT-R, and Lamborghini Huracan.
Hypercar is a relatively recent term used to describe supercars that are exceeding the limits that supercars have traditionally set. The evolution of technology, materials, and aerodynamics has allowed supercars to push the limits of what was once unattainable.
Even though they are not a separate type of car, the hypercar has evolved so much that it may be soon considered an official type of sports car rather than falling within the supercar spectrum. They are faster, handle better, more expensive, and more specialized than any other car on earth.
Examples include the Bugatti Chiron, Koenigsegg One:1, Lamborghini Centenario, and McLaren Senna.
There are so many different types of cars that it can be dizzying trying to sift through them all. Different makes, models, body styles, and substyles can be too much information for many people, so it is easier to break it down into these narrower categories.
We have broken down all this information in many different ways, from the general perception of different car types to their more specifically-defined body styles and sub-styles. There are still potentially so many more sub-variants of the most prominent types of cars, but this guide is a great, easy-to-understand starting place for anybody who wants to learn more about the various types of cars. | <urn:uuid:ccd48fa7-f10d-41c4-9e0c-038ceed3671a> | CC-MAIN-2021-21 | https://www.thevehiclelab.com/types-of-cars/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00575.warc.gz | en | 0.958363 | 6,128 | 2.78125 | 3 |
OpenGL provides a powerful but small set of drawing operations, and all
higher-level drawing must be done in terms of these. To help simplify some
of your programming tasks, the OpenGL Utility Library (GLU) includes several
routines that encapsulate OpenGL commands. Many of these routines are described
in earlier chapters as their topics arise; these routines are briefly listed
here for completeness. GLU routines that aren't discussed earlier are described
in more depth here. Nevertheless, you might want to consult the OpenGL
Reference Manual for more detailed descriptions of all these routines.
This appendix, "The OpenGL Utility Library," groups the GLU routines functionally as follows:

"Manipulating Images for Use in Texturing"

"Transforming Coordinates"

"Polygon Tessellation"

"Rendering Spheres, Cylinders, and Disks"

"NURBS Curves and Surfaces"

"Describing Errors"
Manipulating Images for Use in Texturing
As you set up texture mapping in your application, you'll probably want
to take advantage of mipmapping, which requires a series of reduced images
(or texture maps). To support mipmapping, the GLU includes a general routine
that scales images (gluScaleImage()) and routines that generate
a complete set of mipmaps given an original image in one or two dimensions
(gluBuild1DMipmaps() and gluBuild2DMipmaps()). These routines
are all discussed in some detail in Chapter 9, so here only their prototypes are listed.

GLint gluScaleImage(GLenum format, GLint widthin, GLint heightin, GLenum typein, const void *datain, GLint widthout, GLint heightout, GLenum typeout, void *dataout);

GLint gluBuild1DMipmaps(GLenum target, GLint components, GLint width, GLenum format, GLenum type, void *data);

GLint gluBuild2DMipmaps(GLenum target, GLint components, GLint width, GLint height, GLenum format, GLenum type, void *data);
Transforming Coordinates

The GLU includes routines that create matrices for standard perspective
and orthographic viewing (gluPerspective() and gluOrtho2D()).
In addition, a viewing routine allows you to place your eye at any point
in space and look at any other point (gluLookAt()). These routines
are discussed in Chapter 3 . In addition, the GLU includes a routine to
help you create a picking matrix (gluPickMatrix()); this routine
is discussed in Chapter 12 . For your convenience, the prototypes for these
four routines are listed here.
In addition, GLU provides two routines that convert between object coordinates
and screen coordinates, gluProject() and gluUnProject().
GLint gluProject(GLdouble objx, GLdouble objy, GLdouble objz, const GLdouble modelMatrix[16], const GLdouble projMatrix[16], const GLint viewport[4], GLdouble *winx, GLdouble *winy, GLdouble *winz);
void gluPerspective(GLdouble fovy, GLdouble aspect,
GLdouble zNear, GLdouble zFar);
void gluOrtho2D(GLdouble left, GLdouble right,
GLdouble bottom, GLdouble top);
void gluLookAt(GLdouble eyex, GLdouble eyey, GLdouble
eyez, GLdouble centerx, GLdouble centery, GLdouble
centerz, GLdouble upx, GLdouble upy, GLdouble upz);
void gluPickMatrix(GLdouble x, GLdouble y, GLdouble width, GLdouble height, GLint viewport[4]);
Transforms the specified object coordinates objx, objy,
and objz into window coordinates using modelMatrix, projMatrix,
and viewport. The result is stored in winx, winy,
and winz. A return value of GL_TRUE indicates success, and GL_FALSE indicates failure.
GLint gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz, const GLdouble modelMatrix[16], const GLdouble projMatrix[16], const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Transforms the specified window coordinates winx, winy,
and winz into object coordinates using modelMatrix, projMatrix,
and viewport. The result is stored in objx, objy,
and objz. A return value of GL_TRUE indicates success, and GL_FALSE indicates failure.
Polygon Tessellation

As discussed in "Describing Points, Lines, and Polygons," OpenGL can directly
display only simple convex polygons. A polygon is simple if the edges intersect
only at vertices, there are no duplicate vertices, and exactly two edges
meet at any vertex. If your application requires the display of simple
nonconvex polygons or of simple polygons containing holes, those polygons
must first be subdivided into convex polygons before they can be displayed.
Such subdivision is called tessellation. GLU provides a collection of routines
that perform tessellation. Note that the GLU tessellation routines can't
handle nonsimple polygons; there's no standard OpenGL method to handle them.
Since tessellation is often required and can be rather tricky, this
section describes the GLU tessellation routines in detail. These routines
take as input arbitrary simple polygons that might include holes, and they
return some combination of triangles, triangle meshes, and triangle fans.
You can insist on only triangles if you don't want to have to deal with
meshes or fans. If you care about performance, however, you should probably
take advantage of any available mesh or fan information.
The Callback Mechanism
To tessellate a polygon using the GLU, first you need to create a tessellation
object, and then provide a series of callback routines to be called at
appropriate times during the tessellation. After you specify the callbacks,
you describe the polygon and any holes using GLU routines, which are similar
to the OpenGL polygon routines. When the polygon description is complete,
the tessellation facility invokes your callback routines as necessary.
The callback routines typically save the data for the triangles, triangle
meshes, and triangle fans in user-defined data structures, or in OpenGL
display lists (see Chapter 4 ). To render the polygons, other code traverses
the data structures or calls the display lists. Although the callback routines
could call OpenGL commands to display them directly, this is usually not
done, as tessellation can be computationally expensive. It's a good idea
to save the data if there is any chance that you want to display it again.
The GLU tessellation routines are guaranteed never to return any new vertices,
so interpolation of vertices, texture coordinates, or colors is never required.
The Tessellation Object
As a complex polygon is being described and tessellated, it has associated
data, such as the vertices, edges, and callback functions. All this data
is tied to a single tessellation object. To do tessellation, your program
first has to create a tessellation object using the routine gluNewTess().

GLUtriangulatorObj* gluNewTess(void);

Creates a new tessellation object and returns a pointer to it. A null pointer is returned if the creation fails.
If you no longer need a tessellation object, you can delete it and free all associated memory with gluDeleteTess().

void gluDeleteTess(GLUtriangulatorObj *tessobj);

Deletes the specified tessellation object, tessobj, and frees all associated memory.
A single tessellation object can be reused for all your tessellations.
This object is required only because library routines might need to do
their own tessellations, and they should be able to do so without interfering
with any tessellation that your program is doing. It might also be useful
to have multiple tessellation objects if you want to use different sets
of callbacks for different tessellations. A typical program, however, allocates
a single tessellation object and uses it for all its tessellations. There's
no real need to free it because it uses a small amount of memory. On the
other hand, if you're writing a library routine that uses the GLU tessellation,
you'll want to be careful to free any tessellation objects you create.
Specifying Callbacks

You can specify up to five callback functions for a tessellation. Any functions
that are omitted are simply not called during the tessellation, and any
information they might have returned to your program is lost. All are specified
by the single routine gluTessCallback().

void gluTessCallback(GLUtriangulatorObj *tessobj, GLenum type, void (*fn)());

Associates the callback function fn with the tessellation object tessobj. The type of the callback is determined by the parameter type, which can be GLU_BEGIN, GLU_EDGE_FLAG, GLU_VERTEX, GLU_END, or GLU_ERROR. The five possible callback functions have the following prototypes:

void begin(GLenum type);
void edgeFlag(GLboolean flag);
void vertex(void *data);
void end(void);
void error(GLenum errno);

To change a callback routine, simply call gluTessCallback() with the new routine. To eliminate a callback routine without replacing it with a new one, pass gluTessCallback() a null pointer for the appropriate routine.
As tessellation proceeds, these routines are called in a manner similar
to the way you would use the OpenGL commands glBegin(), glEdgeFlag*(),
glVertex*(), and glEnd(). (See "Marking Polygon Boundary
Edges" in Chapter 2 for more information about glEdgeFlag*().) The
error callback is invoked during the tessellation only if something goes wrong.
The GLU_BEGIN callback is invoked with one of three possible parameters:
GL_TRIANGLE_FAN, GL_TRIANGLE_STRIP, or GL_TRIANGLES. After this routine
is called, and before the callback associated with GLU_END is called, some
combination of the GLU_EDGE_FLAG and GLU_VERTEX callbacks is invoked. The
associated vertices and edge flags are interpreted exactly as they are
in OpenGL between glBegin(GL_TRIANGLE_FAN), glBegin(GL_TRIANGLE_STRIP),
or glBegin(GL_TRIANGLES) and the matching glEnd(). Since
edge flags make no sense in a triangle fan or triangle strip, if there
is a callback associated with GLU_EDGE_FLAG, the GLU_BEGIN callback is
called only with GL_TRIANGLES. The GLU_EDGE_FLAG callback works exactly
analogously to the OpenGL glEdgeFlag*() call.
The error callback is passed a GLU error number. A character string
describing the error can be obtained using the routine gluErrorString().
See "Describing Errors" for more information about this routine.
Describing the Polygon to Be Tessellated
The polygon to be tessellated, possibly containing holes, is specified
using the following four routines: gluBeginPolygon(), gluTessVertex(),
gluNextContour(), and gluEndPolygon(). For polygons without
holes, the specification is exactly as in OpenGL: start with gluBeginPolygon(),
call gluTessVertex() for each vertex in the boundary, and end the
polygon with a call to gluEndPolygon(). If a polygon consists of
multiple contours, including holes and holes within holes, the contours
are specified one after the other, each preceded by gluNextContour().
When gluEndPolygon() is called, it signals the end of the final
contour and starts the tessellation. You can omit the call to gluNextContour()
before the first contour. The detailed descriptions of these functions
follow.

void gluBeginPolygon(GLUtriangulatorObj *tessobj);
Begins the specification of a polygon to be tessellated and associates
a tessellation object, tessobj, with it. The callback functions
to be used are those that were bound to the tessellation object using the routine gluTessCallback().

void gluTessVertex(GLUtriangulatorObj *tessobj, GLdouble v[3], void *data);
Specifies a vertex in the polygon to be tessellated. Call this routine
for each vertex in the polygon to be tessellated. tessobj is the
tessellation object to use, v contains the three-dimensional vertex
coordinates, and data is an arbitrary pointer that's sent to the
callback associated with GLU_VERTEX. Typically, it contains vertex data,
texture coordinates, color information, or whatever else the application
may find useful.
void gluNextContour(GLUtriangulatorObj *tessobj, GLenum type);
Marks the beginning of the next contour when multiple contours make
up the boundary of the polygon to be tessellated. type can be GLU_EXTERIOR,
GLU_INTERIOR, GLU_CCW, GLU_CW, or GLU_UNKNOWN. These serve only as hints
to the tessellation. If you get them right, the tessellation might go faster.
If you get them wrong, they're ignored, and the tessellation still works. For a polygon with holes, one contour is the exterior contour and the others are interior. gluNextContour() can be called immediately after gluBeginPolygon(),
but if it isn't, the first contour is assumed to be of type GLU_EXTERIOR.
GLU_CW and GLU_CCW indicate clockwise- and counterclockwise- oriented polygons.
Choosing which are clockwise and which are counterclockwise is arbitrary
in three dimensions, but in any plane, there are two different orientations,
and the GLU_CW and GLU_CCW types should be used consistently. Use GLU_UNKNOWN
if you don't have a clue.
void gluEndPolygon(GLUtriangulatorObj *tessobj);
Indicates the end of the polygon specification and that the tessellation
can begin using the tessellation object tessobj.
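Putting the four polygon-description routines together, a sketch of tessellating a square with a triangular hole follows. It assumes a current OpenGL context; the vertex data is made up for illustration, and registering glBegin(), glVertex3dv(), and glEnd() directly as callbacks is done only for brevity (as noted above, a real application would more likely record the output in a data structure or display list):

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Sketch: tessellate a square with a triangular hole and render the
 * resulting triangles immediately through the callbacks. */
void tessellateExample(void)
{
    /* Each vertex doubles as the data pointer handed to the
     * GLU_VERTEX callback, so glVertex3dv() receives the coordinates. */
    static GLdouble outer[4][3] = { {0,0,0}, {4,0,0}, {4,4,0}, {0,4,0} };
    static GLdouble hole[3][3]  = { {1,1,0}, {3,1,0}, {2,3,0} };
    int i;

    GLUtriangulatorObj *tobj = gluNewTess();
    if (tobj == NULL) return;

    gluTessCallback(tobj, GLU_BEGIN,  (void (*)()) glBegin);
    gluTessCallback(tobj, GLU_VERTEX, (void (*)()) glVertex3dv);
    gluTessCallback(tobj, GLU_END,    (void (*)()) glEnd);

    gluBeginPolygon(tobj);          /* first contour defaults to GLU_EXTERIOR */
    for (i = 0; i < 4; i++)
        gluTessVertex(tobj, outer[i], outer[i]);
    gluNextContour(tobj, GLU_INTERIOR);
    for (i = 0; i < 3; i++)
        gluTessVertex(tobj, hole[i], hole[i]);
    gluEndPolygon(tobj);            /* tessellation happens here */

    gluDeleteTess(tobj);
}
```

Because the first contour is assumed to be GLU_EXTERIOR, the call to gluNextContour() is needed only to introduce the hole.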
Rendering Spheres, Cylinders, and Disks
The GLU includes a set of routines to draw various simple surfaces (spheres,
cylinders, disks, and parts of disks) in a variety of styles and orientations.
These routines are described in detail in the OpenGL Reference Manual;
their use is discussed briefly in the following paragraphs, and their prototypes
are also listed.
To create a quadric object, use gluNewQuadric(). (To destroy
this object when you're finished with it, use gluDeleteQuadric().)
Then specify the desired rendering style, as follows, with the appropriate
routine (unless you're satisfied with the default values):

Whether surface normals should be generated, and if so, whether there should be one normal per vertex or one normal per face: gluQuadricNormals()

Whether texture coordinates should be generated: gluQuadricTexture()

Which side of the quadric should be considered the outside and which the inside: gluQuadricOrientation()

Whether the quadric should be drawn as a set of polygons, lines, or points: gluQuadricDrawStyle()

After you've specified the rendering style, simply invoke the rendering routine for the desired type of quadric object: gluSphere(), gluCylinder(), gluDisk(), or gluPartialDisk(). If an error occurs during rendering, the error-handling routine you've specified with gluQuadricCallback() is invoked.
It's better to use the *Radius, height, and similar arguments
to scale the quadrics rather than the glScale*() command, so that
unit-length normals that are generated don't have to be renormalized. Set
the loops and stacks arguments to values other than 1 to
force lighting calculations at a finer granularity, especially if the material
specularity is high.
The prototypes are listed in three categories.

Manage quadric objects:

GLUquadricObj* gluNewQuadric (void);
void gluDeleteQuadric (GLUquadricObj *state);
void gluQuadricCallback (GLUquadricObj *qobj, GLenum which, void (*fn)());

Control the rendering:

void gluQuadricNormals (GLUquadricObj *quadObject, GLenum normals);
void gluQuadricTexture (GLUquadricObj *quadObject, GLboolean textureCoords);
void gluQuadricOrientation (GLUquadricObj *quadObject, GLenum orientation);
void gluQuadricDrawStyle (GLUquadricObj *quadObject, GLenum drawStyle);

Specify a quadric primitive:

void gluCylinder (GLUquadricObj *qobj, GLdouble baseRadius, GLdouble topRadius, GLdouble height, GLint slices, GLint stacks);
void gluDisk (GLUquadricObj *qobj, GLdouble innerRadius, GLdouble outerRadius, GLint slices, GLint loops);
void gluPartialDisk (GLUquadricObj *qobj, GLdouble innerRadius, GLdouble outerRadius, GLint slices, GLint loops, GLdouble startAngle, GLdouble sweepAngle);
void gluSphere (GLUquadricObj *qobj, GLdouble radius, GLint slices, GLint stacks);
NURBS Curves and Surfaces
NURBS routines provide general and powerful descriptions of curves and
surfaces in two and three dimensions. They're used to represent geometry
in many computer-aided mechanical design systems. The GLU NURBS routines
can render such curves and surfaces in a variety of styles, and they can
automatically handle adaptive subdivision that tessellates the domain into
smaller triangles in regions of high curvature and near silhouette edges.
All the GLU NURBS routines are described in Chapter 9; their prototypes
are listed here.
Manage a NURBS object:

GLUnurbsObj* gluNewNurbsRenderer (void);
void gluDeleteNurbsRenderer (GLUnurbsObj *nobj);
void gluNurbsCallback (GLUnurbsObj *nobj, GLenum which,
void (*fn)());

Create a NURBS curve:

void gluBeginCurve (GLUnurbsObj *nobj);
void gluEndCurve (GLUnurbsObj *nobj);
void gluNurbsCurve (GLUnurbsObj *nobj, GLint nknots,
GLfloat *knot, GLint stride, GLfloat *ctlarray, GLint
order, GLenum type);

Create a NURBS surface:

void gluBeginSurface (GLUnurbsObj *nobj);
void gluEndSurface (GLUnurbsObj *nobj);
void gluNurbsSurface (GLUnurbsObj *nobj, GLint uknot_count,
GLfloat *uknot, GLint vknot_count, GLfloat *vknot,
GLint u_stride, GLint v_stride, GLfloat *ctlarray,
GLint uorder, GLint vorder, GLenum type);

Define a trimming region:

void gluBeginTrim (GLUnurbsObj *nobj);
void gluEndTrim (GLUnurbsObj *nobj);
void gluPwlCurve (GLUnurbsObj *nobj, GLint count,
GLfloat *array, GLint stride, GLenum type);

Control NURBS rendering:

void gluLoadSamplingMatrices (GLUnurbsObj *nobj, const GLfloat
modelMatrix[16], const GLfloat projMatrix[16], const GLint
viewport[4]);
void gluNurbsProperty (GLUnurbsObj *nobj, GLenum property,
GLfloat value);
void gluGetNurbsProperty (GLUnurbsObj *nobj, GLenum property,
GLfloat *value);
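As a hedged sketch of how these prototypes fit together (the knot vector, control points, and sampling tolerance below are illustrative values, and a current OpenGL context is assumed):

```c
/* Sketch: rendering a cubic NURBS curve with the GLU NURBS renderer.
 * Assumes a current OpenGL context; all data values are illustrative. */
#include <GL/glu.h>

void drawNurbsCurve(void)
{
    /* A cubic curve (order 4) with 4 control points needs 4 + 4 = 8 knots. */
    static GLfloat knots[8] =
        {0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f};
    static GLfloat ctlpoints[4][3] = {
        {-4.0f, -4.0f, 0.0f}, {-2.0f, 4.0f, 0.0f},
        { 2.0f, -4.0f, 0.0f}, { 4.0f, 4.0f, 0.0f}
    };

    GLUnurbsObj *nobj = gluNewNurbsRenderer();
    gluNurbsProperty(nobj, GLU_SAMPLING_TOLERANCE, 25.0f);

    gluBeginCurve(nobj);
    /* stride 3: control points are packed (x, y, z) triples */
    gluNurbsCurve(nobj, 8, knots, 3, &ctlpoints[0][0], 4, GL_MAP1_VERTEX_3);
    gluEndCurve(nobj);

    gluDeleteNurbsRenderer(nobj);
}
```

The renderer tessellates the curve itself; GLU_SAMPLING_TOLERANCE trades smoothness against the number of line segments generated.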
Describing an Error

The GLU provides a routine for obtaining a descriptive string for an error
code. For information about OpenGL's error handling facility, see "Error
Handling."

const GLubyte* gluErrorString (GLenum errorCode);

Returns a pointer to a descriptive string that corresponds to the OpenGL,
GLU, or GLX error number passed in errorCode. The defined error
codes are described in the OpenGL Reference Manual along with the
command or routine that can generate them.
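A small sketch of typical use (the helper name checkGLError() is an assumption, not a GLU routine). It drains the error queue, since OpenGL records one error at a time until glGetError() returns GL_NO_ERROR:

```c
/* Sketch: reporting queued OpenGL errors with gluErrorString(). */
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>

void checkGLError(const char *where)
{
    GLenum err;
    /* glGetError() returns and clears one queued error per call. */
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error at %s: %s\n", where, gluErrorString(err));
}
```

Sprinkling such a call after suspect rendering code is a common debugging pattern.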
Contribution: P N Mohanty, Minakshi Sharma
Management of crying among newborns and young infants
Cry at birth an announcement of arrival in this world
Crying is an important means of communication for newborn babies. It is the key sign of life at the time of birth. It gives a lot of happiness and comfort to the mother after the pain of childbirth; the birth attendants in the labour room feel relieved, and the near and dear ones waiting outside feel very pleased. Crying at birth helps expand the lungs and helps clear any secretions aspirated during delivery. In many ways it contributes to the intact survival of the child.
Crying after birth
Young infants (under three months of age) usually cry a lot, often between one and three hours a day. Some babies cry a great deal; others cry less. Crying is a means of expressing and communicating needs and feelings: hunger, discomfort, pain, insecurity, the need for sleep, the desire for attention, and so on. However, if an infant cries too often or for too long, it can become a serious concern for the parents. First-time parents can become very stressed and feel helpless if they are not able to calm their crying child. So the question is whether the baby's crying should be responded to without delay, or whether the baby should be left to cry it out. Letting the baby 'cry it out' is an idea that has been debated since around the 1880s, when the field of medicine was in an uproar about germs and the transmission of infection and the notion arose that babies should rarely be touched. In the 20th century, the behaviourist John Watson applied the mechanistic paradigm of behaviourism to child rearing, warning about the dangers of too much maternal love: too much kindness to a baby would result in a whiney, dependent, failed human being. The behaviourists saw the baby as an intruder into the life of the parents or caregivers, one who should be disciplined by various means so that the adults could live their normal life without too much worry and discomfort. It is now believed that there is no such thing as too much love; love can never be excessive.
Two methods are practiced to let the baby cry it out. In the 'extinction' method, parents or caregivers are advised not to attend to the baby's crying at all, but to leave the baby to cry until s/he has absolutely no energy left to continue. The second is known as 'graduated extinction', or controlled crying, in which parents alternate between attending and not attending during crying spells, at increasingly longer intervals. It is considered better than the first method, but Tracy Cassels, a clinical and developmental psychologist, is of the opinion that although it sounds better, in reality it works on the same principles, and can be more frustrating for the infant or child. Parents who check on their baby at intervals of 5 or 10 minutes do not really respond to the baby's crying; they tantalize the baby with a transitory hope that the cry is being attended to. This arousal of expectations that are repeatedly abandoned leaves the baby with feelings of discomfort, disappointment, loneliness, fear, and sadness, which can be very painful and traumatizing. On the other hand, caregivers who habitually respond to the needs of their baby before the baby gets distressed, attending to and preventing crying, are more likely to have children who are independent than the opposite (Stein & Newcomb, 1994). Soothing care is best right from the beginning; once patterns of distress get established, it is very difficult to change them. That is why responsiveness is the key in parenting. Responding to children in distress is paramount in building a sense of security and, later, independence. It is important to be aware that it is not possible for any caregiver to give too much love to their child: there is no limit to the giving of love, and children will never be spoilt by it.
"Remember, all babies will eventually sleep through in their own time, so with love and guidance from caregivers, sleep needn't be traumatic for mother or baby," says the Australian Association of Infant Mental Health. It further asks, "But before you consider cry it out, ask yourself this: in a society already brimming with anxiety, depression, low self-esteem, co-dependency, narcissism and other mental disorders, do you think it's acceptable to continue this widespread experiment on infant mental health? When we don't respond to our babies, what sort of future are we paving the way for?" According to Dr Margot Sunderland, a well-known psychotherapist, "the infant brain is so vulnerable to stress; after birth, it's not yet finished! In the first year of life, brain cells (neurons) are still moving to where they need to be, a process known as migration. Migration is hugely influenced by uncomforted stress." In the first year of life there are adverse, stress-related changes to the gene expression of key emotion neurotransmission systems (chemical and hormonal) that are responsible for emotional well-being and for the individual's ability to be calm and handle stress well in later life. In addition, the stress caused to the infant brain by prolonged, uncomforted, distressed crying is so toxic that it can result in elevated blood pressure, elevated cerebral pressure, erratic fluctuations of heart rate, breathing and temperature, suppressed immune and digestive systems, suppressed growth hormone, apnoeas, and extreme pressure on the heart, resulting in tachycardia. Any unattended, unresponded-to and uncomforted infant mammal will eventually stop crying, through a process called 'protest-despair-detachment.' Arguing along similar lines, Dr Howard Chilton, a consultant neonatologist, says that "cry it out makes absolutely no biological sense."
Like other primates, humans are a 'continuous contact' species; indeed, we are born the most immature of all placental mammals. The important point is that in the early months our fetus-like babies have to embark upon a massive amount of brain development. They have to lay down life-long brain connections and embed fundamental beliefs about how safe and secure their world is, how reliable their parents are, and how valued and loved they are. This is a vital time during which they are learning from their parents (their mother in particular) new things about the world around them and how to deal with stress. So it makes no sense, at the darkest, scariest time of the day, to abandon them to a regime of nocturnal neglect! Cry it out also contradicts the very basic parental instincts of nurturing and caring for those we love the most in our lives. It truly makes no sense. Professor Helen Ball, an anthropologist, opines that "from an evolutionary anthropological viewpoint, human infant crying is an identical behaviour to the separation distress call displayed by infants among other primate species. Crying is the infant's only means of attracting their mother's (carer's) attention once separated, in order to ensure their own survival." Responding to their infant's cry is an instinctive behaviour of human mothers, and resisting the urge to approach a crying infant is emotionally and physiologically stressful for them. Leaving an infant to cry is therefore evolutionarily inappropriate and biologically detrimental to both mother and baby. Dr Frans Plooij is of the opinion that cry it out and other sleep-training methods can have an adverse impact not only on a baby's brain development but on the whole breastfeeding relationship too. This is because breast-milk production works on a supply-equals-need basis: if a baby's needs are ignored by being left to cry it out, then the mother's milk supply can suffer.
According to Penelope Leach, a baby left to cry for long enough will eventually stop, not because he has learnt to go to sleep happily alone, but because he is exhausted and has despaired of getting any help. Crying hard is stressful, and continued acute stress sets up a hormonal chain reaction that ultimately stimulates the adrenal glands into releasing the "stress hormone" cortisol. Long-continued or oft-repeated crying can produce so much cortisol that it can damage a baby's brain. Professor James McKenna labels the practice of 'crying it out' "entirely a western, cultural construction, and nothing less than a form of abuse." Leaving the baby to cry unattended amounts to neglect that, if repeated and prolonged, can damage the rapidly developing brain at this age.
We should understand that mother and child are a mutually responsive dyad: a symbiotic unit whose members make each other healthier and happier through mutual responsiveness. The same principle applies to other caregivers too.
Reasons for crying
All babies cry, some more than others. It’s not crying that is bad for babies but crying that gets no response. Therefore, the mother or the care giver should attend to the crying child, try to understand the reasons for crying and respond accordingly. Prominent reasons why an infant cries are as follows:
- Hunger: – Most probably, hunger is the first thing a mother suspects when her baby cries. A sensitive mother recognizes the early signs of hunger and responds to them in a timely manner. Cues of hunger include rooting, fussing, smacking of the lips, putting hands to the mouth, etc. The mother should recognize these signs and feed the baby before s/he cries.
- A dirty diaper: – When the diaper is dirty and wet, the baby feels discomfort and starts crying.
- Needs sleep: – Many babies when sleepy can be cranky. Instead of nodding off, they may fuss and cry especially if they are exhausted.
- Wants to be held: – Babies need a lot of cuddling. They like most of all to see their mother's face, hear her voice and listen to her heartbeat, which they were used to hearing in utero, and they can even detect their mother's unique smell. Crying can be a way of expressing the desire to be held close.
- Tummy troubles: – If the baby often fusses and cries after feeding, s/he may be feeling pain or discomfort in the tummy. Tummy troubles are very often associated with gas or colic, which can lead to lots of crying. In fact, the rather mysterious condition called colic is defined as inconsolable crying for more than three hours a day, more than three days a week, for more than three weeks. A good burp after each feeding may be all the baby needs. Babies swallow air when they breastfeed or suck from a bottle, and if the air isn't released it may cause discomfort. Because babies spend so much time lying down, the air in the gastrointestinal tract doesn't get expelled easily, which can cause a lot of discomfort and crying. Crying itself can also lead to more swallowing of air, which can worsen the situation.
- Feels too hot or too cold: – When babies feel too cold (while being undressed or lightly clothed, during diaper changes, or when their bottom is cleaned with cold water or a cold wipe) or too hot (overclothed and overcovered), they may protest by crying. Babies are more likely to complain about being too cold than too warm, and are less likely to cry vigorously under these circumstances.
- Wants more/less stimulation: – A "demanding" baby may be outgoing and eager to see the world, and often the only way to stop the crying and fussing is to stay active. Babies learn from the stimulation of the world around them (the lights, the sounds, being passed from one pair of hands to another), but sometimes they have a hard time processing it all. Crying can be a baby's way of saying, "I've had enough" or "I am no longer interested."
- Not well: – If the baby's basic needs have been met and possible measures to comfort him/her have been taken and the baby is still crying, s/he might have a medical problem. The mother or caregiver may measure the temperature to rule out fever or hypothermia and look for other danger signs. The cry of a sick baby tends to be distinctly different from one caused by hunger or frustration. If you find your baby's crying "just doesn't sound right," trust your instincts and contact a health care provider immediately. Do not delay seeking care.
What to do if the baby is still crying?
Babies have their own reasons to cry. Even sensitive and experienced parents cannot always say why their baby is crying, and the baby cannot express verbally what is wrong. Moreover, no two babies are alike; a formula that works well with one may not work with another. However, there are certain tried and trusted methods that can be applied. Paediatrician Harvey Karp advises parents to use the 5 Ss when their baby is crying for no apparent reason. These 5 Ss recreate a womb-like environment and activate the baby's calming mechanism. They are as follows:-
- Swaddling: – Newborns like to feel warm and secure, as they were in utero. Recreate that feeling by swaddling the baby in a soft blanket, clothing the baby adequately, and holding the baby at your shoulder. However, swaddling should be neither tight nor prolonged.
- Side or stomach position: – Hold your baby so he is lying on his side or stomach; this helps release air/gas. But always put the baby on his/her back to sleep. A baby should never be left on its stomach when sleeping, as this can choke the baby.
- Shushing: – Many babies are calmed by a steady flow of "white noise" that drowns out other sounds, much like the constant whoosh of bodily sounds they heard in the womb. Run the vacuum cleaner, hair dryer, fan or clothes dryer, etc. It also helps to hold the baby at your shoulder, where the baby can hear your heart sounds, and simply produce a continuous humming sound.
- Swinging: – Hold your baby in your arms and gently rock. Getting the baby in motion, by sitting in a rocking chair or by putting the baby in a pram or carrier and gently moving it, also helps.
- Sucking: – Give something to your baby to suck. Sucking can steady a baby’s heart rate, relax his stomach, and calm flailing limbs. The finger or the object that the baby sucks on should be clean to prevent an infection.
In addition, other methods that can be tried out are as follows:-
- Music and Rhythm: – Play soft music or sing a lullaby or a song slowly. You can experiment by playing different songs and find out for yourself what works and what does not. Soft and sweet music would soothe and relax the little one. Avoid putting the TV on loud.
- Fresh air: – Open the door of your house and step out with your baby. Look at your surroundings, look at the sky and talk to your baby slowly. Sometimes this helps to stop the baby crying.
- Warm water: – Just like fresh air, warm water can soothe and relax the baby and stop the crying. Hold him/her in your arms under a slowly running shower, and make sure that your shower is slip-proof.
- Massage: – Babies love to be touched, so gentle stroking or a gentle massage is a good idea. Slowly and gently massage or stroke the baby from head to toe. This will comfort your baby.
Physical, mental or emotional challenges at birth, or soon after, are often traumatic to an infant and can cause the baby's nervous system to get "stuck." A nervous system that is stuck will probably have difficulty with regulation, which means the baby will have a hard time settling down. Special or traumatic circumstances that might cause problems include premature birth, difficult or traumatic birth, medical problems or disability, and adoption or separation from the primary caregiver. Physical illness or depression in the caregiver can also be quite stressful for the baby. Remember, if you have tried various methods to calm and soothe the baby to no avail, don't delay in consulting a health care provider; your baby might need medical care. A sensitive mother can differentiate a cry of hunger from a cry due to illness, and caregivers (mothers) become sensitive through ongoing interaction with their babies.
How to cope with a crying baby who is non-responsive?
When the baby is crying nonstop and not responding to whatever you do, the situation can be very painful and stressful for the mother. She gets exhausted, frustrated and sometimes angry, and may cause harm to herself or the baby. A mother should recognise her own limits and develop a strategy for taking care of herself. Unless the mother is stable, calm, relaxed and focused, she cannot work out what is going on with her baby or do something positive to calm the baby, so she should take a break and seek the support of others. Extra support is required when the parents or caretakers, particularly the mother, are depressed, suffering from a major illness or chronic health problem, exhausted from sleeplessness, or feeling neglected, unsupported and isolated. Postpartum depression affects 10-20% of mothers during the first few months after childbirth, and maternal depression adversely affects some aspects of infant development and behaviour, particularly difficulty soothing, irritability and crying behaviour. Arguably, excessive infant crying is a signal waiting for a response; consequently, it may be a useful target for interventions in mothers with depression that improve outcomes for both infants and their mothers. Some strategies you can consider are as follows:-
- Put the baby in a safe place and let him cry for a while
- Take the help of a trusted person to care for the baby at this time. The baby should never be left unattended.
- Seek the advice of a person in whom you have faith.
- Listen to music or view a TV show that you like.
- Have a cup of tea or coffee if you feel so.
- Think that crying is not going to harm your baby
- Think that time is on your side. This phase will pass. Crying follows a developmental pattern known as the 'crying curve': it increases at 2-3 weeks of age, peaks between 6 and 8 weeks, then slows down, generally hitting its lowest around 4 months of age.
- Don't worry about perfection. Parenting is not all about perfection. Experts estimate that meeting your infant's needs at least one-third of the time is enough to support healthy bonding and secure attachment.
Never shout at your baby or get upset; it helps neither you nor your baby. Never shake the baby to vent your frustration and anger. Shaking will neither soothe your baby nor stop the crying; rather, it can cause long-lasting psychological and emotional problems, leaving the baby feeling insecure, helpless and fearful. According to the American Academy of Pediatrics, 'Shaken Baby Syndrome' occurs when a baby is shaken, which typically happens when parents or caregivers become frustrated and angry and are unable to cope with a crying, non-responsive baby. Adverse outcomes that may result from shaking a baby include brain damage, mental retardation, blindness, seizures and even death.
- Cry it out: six educated professionals who advise against it. www.bellybelly.com.au/baby-sleep/cry-it-out/ (accessed October 2016). (Dr Margot Sunderland, Prof James McKenna, Dr Howard Chilton, Prof Helen Ball, Tracy Cassels, Dr Frans Plooij.)
- Stifter CA. "Life" after unexplained crying: child and parent outcomes. In: Barr RG, St James-Roberts I, Keefe MR, eds. New evidence on unexplained early infant crying: its origins, nature and management. Skillman, NJ: Johnson & Johnson Pediatric Institute; 2001:273-288.
- Stifter CA, Spinrad TL. The effect of excessive crying on the development of emotion regulation. Infancy 2002;3(2):133-152.
- Stifter CA. Crying behavior and its impact on psychosocial child development. child-encyclopedia.com/crying–behaviour/…/crying–behav, April 2005.
- Barr RG. What is all that crying about? Bulletin of the Centre of Excellence for Early Childhood Development, Vol 6 No 2, 2007.
- Ferris M, McCarroll E. Crying babies: answering the call of infant cries. Texas Child Care, 14-31, 2010.
- Chuong-Kim M. Cry it out: the potential dangers of leaving your baby to cry. 2005. http://drbenkim.com/articles-attachment-parenting.html.
- Soltis J. The signal function of early infant crying. Behavioral and Brain Sciences 2004;27(4):443-490. Spock,
- McKenna JJ. Sleeping with your baby: a parent's guide to close sleeping. May 2007.
- Zeskind PS. Impact of the cry of the infant at risk on psychosocial development. August 2007. Reviewed in Encyclopedia on Early Childhood Development.
- Barr RG. Crying behavior and its importance for psychosocial development in children. April 2006. In Encyclopedia on Early Childhood Development.
- Sunderland M. What every parent needs to know: the incredible effects of nurture and play on your child's development. 2007.
- What is considered normal crying for a baby. Encyclopedia on Early Childhood Development, updated August 2007.
Texas cichlids are native to lakes and rivers in south Texas and northern Mexico, making them the most northern naturally occurring species of cichlid in the world; the natural range of Herichthys cyanoguttatus lies in North America and the north-east of Mexico. The fish has an omnivorous diet of vegetable matter and detritus, and it also feeds on plants, insects, smaller fish and fish eggs. In Lake Guerrero, which is recognized for its excellent largemouth bass fishing, the Texas cichlid is considered by locals to be the best eating fish in the lake. With such a varied diet, which can shift depending on what fish are around it, the cichlid can disrupt the food web.

Do cichlids have teeth? Yes, all cichlid species do. Carnivorous cichlids have teeth that have evolved to tear apart meat, and experiments have shown that if the diet is changed from hard to soft, the form of the pharyngeal teeth also changes. In some fish the teeth sit deep in the throat, which can make it difficult for them to eat whole meals; the Oscar, for instance, is a suction feeder.

Cichlids can be territorial and a bit aggressive, which is one reason why not all cichlids make good community tank fish. It is a good idea to get a less aggressive cichlid, especially one that doesn't have overly sharp teeth, as there is always a chance that you may get bitten. One aquarist reports: "I've had many Texas cichlids, Oscars, Jack Dempseys, and convicts that have been hounded literally to death by Africans less than half their size." At all stages, the female violently chases intruders more often and faster than the male parent. Only a small percentage of Red Texas actually fade, and there have been reports of some fish living longer with proper care.

Cichlids are the most species-rich non-Ostariophysan family in fresh waters worldwide. They are most diverse in Africa and South America; it is estimated that Africa alone hosts at least 1,600 species.
When eating soft food, these fish develop numerous sharp pharyngeal teeth, but when feeding on a durophagous (hard-food) diet they develop fewer, stouter crushing teeth. Yes, cichlids do of course have teeth; some you won't see because they are in the throat, as in the Oscar. Some types of cichlid have smaller rows of flatter teeth, designed to scrape algae off rocks and grind up plant matter, and these varieties generally won't bite very hard; others have teeth still very effective at gripping and lacerating small fish, a characteristic they share with the damselfishes.

The fish known as the vampire gets its name from its giant front teeth; most of these fish, unfortunately, will not live very long in captivity. African cichlids should normally be kept only with other African cichlids of similar temperament, with a few notable exceptions that include some species of hardy catfish. In Florida, the success of the Texas cichlid has been limited to artificial canals.

One breeder notes: "I don't have the original pair of parents, but I do have an offspring; a female." The prices of red Texas cichlids differ greatly and can be anything from $10 to over $1,000. Beyond the red form, you will find cichlids in blue, red, yellow, green, or purple. This contributes to their aggression and typical dominance in their aquariums.

The Red Devil Cichlid, known scientifically as Amphilophus labiatus, is a beloved fish with a charismatic personality; some of the more colorful individuals will also have a black-tipped tail and black-tipped fins. Texas cichlids are diggers: they will destroy plants, possibly including plastic ones, and have been known to attack and damage aquarium equipment. One keeper reports: "My one cichlid used to be blue and black, but over the last two months or so he's become more and more pale in color."
Cichlid Tails - Page 1 CICHLID TAILS The Official Newsletter of the Texas Cichlid Association Volume 26, Issue 5 September/October 2009 Inside This Issue: 1 Ralph’s Rumors 2 Editor’s Notes 3 FOTAS LVIII 3 Falling Fish Shatters Windshield 3 Evolutionary Biology: Cichlids, Gene Networks, and Teeth They usually only eat live fish and will rarely accept frozen foods. Maximum Size: texas cichlid grow to a typical length of around 12 inches. Then, introduce plants and rocks sparingly. Thai green Texas cichlid video by Your FishKeeper Friend.His channel is a great source for hybrid cichlid info. With that said, Red Devil Cichlids are not for the faint of heart. This really all depends on the specific type of Cichlid in question. Our mission is to educate and share aquascaping knowledge with others, and to show the mainstream audience that aquascaping can be beautiful. This cichlid is morphologically specialized for scraping algae from rocks and has hinged teeth with a recurved, tricuspid spatulate tip well adapted to this function. The fish has a high salinity tolerance (up to 8ppt), but it is likely that this is caused by the interbreeding of this fish and the Herichthys carpintis, which makes it an ideal invader for the brackish conditions of southern Louisiana. Yet some are bright red, while others are white or yellow. africans don't bluff, they just attack. Cichlids have spiny rays within the back parts of the anal, dorsal, pectoral, and pelvic fins to assist discourage predators. Generally, the average Red Devil Cichlid lifespan is around 10 to 12 years in captivity. Tooth size and form often vary within the same jaw. I keep the 2 breeding couples in a 450 liter show tank. The South American Cichlid is known for biting fingers pretty hard, often breaking the skin and drawing blood. Texas cichlids are aggressive, which must be kept in mind when selecting tankmates. After a territory is selected and cleaned, the eggs are deposited. 
African Cichlids and Plants - An introduction to cichlids and plants from different parts of africa. most americans cichlids, in my experience, bluff and bluff and put on all sorts of displays, but have no fighting style, they just mouth each other. Pam Chin has been replying to cichlid questions for over twenty years. Aug 31, 2004 #1. They have one nostril on each side of the head. As you can see, although Cichlids are known for being some fairly aggressive aquarium fish, especially towards other fish, they really are no threat to humans. Texas cichlids nevertheless do much better and are certainly more likely to spawn when maintained under a regime of regular partial water changes. All Rights Reserved. The Texas cichlid (Herichthys cyanoguttatus, formerly Cichlasoma cyanoguttatum) is a freshwater fish of the cichlid family. Some of these studies have shown that this cichlid has spread into Bayou St. John and City Park. The surprising thought is how many teeth a cichlid has and how often they lose and replace them. Buying red Texas cichlids as fry are somewhat of a gamble since only a very low number of them will grow out to be really red. Oscars are indeed cichlids, (family astronotus) and they do have teeth, but they are tiny teeth, more like sharp velcro than fangs. Adults have iridescent blue-green spots or wavy lines on head, body and fins. Flowerhorn cichlids usually have a lifespan of about 8-10 years. In close quarters, their already aggressive behavior will only get worse. They share it with 3 TinFoil Barbs, and an 18 cm Pleco. Aquascapeaddiction.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com and affiliated sites. Highly respected and experienced aquarist, Pam has visited cichlid habitats around the world, and bred in her's and her husband Gary fish house hundreds of cichlid species. 
Table of Contents. The ones that remain blue are still pretty fish too. Rams superstar in tears on sideline in playoff loss African cichlids not only have teeth but they have the most amazing teeth regeneration abilities as well. Forum. How Big Do Convict Cichlid Fish Get? That said, the types of Cichlids that have sharp fang-like teeth for catching fish, yeah, that’s going to hurt a bit. Another interesting fact is that cichlids have teeth in both the upper and the lower jaw – and in the throat! Texas Cichlids are agile swimmers. If you are worried about being bitten, there is really no big cause for concern! , Texas cichlids have been deliberately and accidentally introduced into the wild throughout the subtropic southern United States from Texas to Florida (where water temperatures rarely dip below 48 degrees F), where they have flourished, and are often caught incidentally when fishing for sunfish and other panfish. This contributes to their aggression and typical dominance in their aquariums. Whether or not a Cichlid bite is going to hurt will depend on the size of the fish and the type of teeth they have. One of the exciting characteristics of Rainbow cichlids is that they have tricuspid teeth that are specialized to feed on filamentous algae, which makes a large amount of their diet. Last year I had to do a mini-dissertation for evolution, I decided to do mine on "The Evolution and Speciation of Cichlids in Lake Malawi" I just realised that some TASA people might really enjoy some of the stuff so I'm going to post it here.Warning! 
Sexing Texas cichlids: pretty much 99% of the time, females will have a black spot in the dorsal fin. Also look at body structure: males have a taller, chunkier body with a somewhat pronounced forehead (not really a nuchal hump, just a place where the profile slopes sharply down and creates a sharper forehead look between the eyes and mouth), and they are generally larger and develop elongated anal and dorsal fins, while females have a more slender, elongated body.

The Texas cichlid, also known as the Rio Grande cichlid, is native to the lower Rio Grande in Texas near Brownsville and to northeastern Mexico, and it has been established in the waters of the greater New Orleans metropolitan area for at least 20 years, where its aggression can inhibit the growth and reproduction of native species. It was largely unaffected by abiotic events like Hurricanes Katrina and Rita because of its high tolerance for salt water; when City Park flooded during Katrina, the flooding may actually have helped the cichlid spread into other canals and lagoons, including Marconi Lagoon.

The red Texas cichlid is not a true species but an intergeneric hybrid of Herichthys and Amphilophus parents, which accounts for the large price span: a red Texas can cost anywhere from $10 to $1,000 or more, and quality food helps make the color pop. The red devil cichlid, known scientifically as Amphilophus labiatus, is a beloved ornamental fish with a charismatic personality; red devils are not for the faint of heart, as they have been known to bite fingers pretty hard, often breaking the skin, yet they also develop bonds with their keepers, show off for aquarists, and even beg for food like a dog.

Texas cichlids reach a typical length of around 12 inches, so they need a suitably large aquarium; robust tankmates such as oscars, silver dollars, and tinfoil barbs are often suggested, and a substrate mix of sand and fine gravel suits them. Like other cichlids, they have a well-developed pharyngeal set of teeth in the throat in addition to the teeth in the jaws; the size, shape, and distribution of teeth vary according to diet, with some species carrying chisel-like teeth for scraping algae and others carrying teeth made to sink into and hold onto prey. When spawning, the female waves her tail frantically as she deposits eggs so tiny you may need a magnifying glass to see them; she fans the eggs, and she chases intruders more often and faster than the male parent does, while the pair slowly moves around the territory with their young.
The Hebrew Bible, roughly corresponding to what Christians call the “Old Testament,” comprises the Torah (the first five books: Genesis, Exodus, Leviticus, Numbers, and Deuteronomy), the books of the Prophets (Joshua, Judges, I and II Samuel, I and II Kings, Isaiah, Jeremiah, Ezekiel and numerous so-called “minor prophets”), and the Writings (Psalms, Proverbs, Job, Song of Songs, Ruth, Lamentations, Ecclesiastes, Esther, Daniel, Ezra, Nehemiah, and I and II Chronicles). This collection of books is also known by the Hebrew acronym Tanakh, made from the first letters of the Hebrew words for each collection: the Torah, Nevi’im (Prophets), and Ketuvim (Writings). The Hebrew Bible was the work of many writers, composing over a period of centuries. Traditionally, the Torah was ascribed to Moses and the Psalms to King David. The Torah in particular, and the Hebrew Bible as a whole, form the ethical and legal core of Judaism from its origins to the present. After the canonization of scripture, rabbinic interpretation and rabbinic law and lore became the formative basis for Jewish practice and identity.
The Christian Bible comprises the books of the Hebrew Bible along with additional books, collectively called the “Old Testament,” and the “New Testament,” consisting of twenty-seven books in total, written during the first two centuries AD. The New Testament includes the four Gospels (Matthew, Mark, Luke and John), accounts of the life and teachings of Jesus; the book of Acts, an account of the ministries of the early apostles of Jesus; twenty-one “epistles,” letters written by Jesus’ followers to members of the early church, with details on Christian faith and practice; and a final book, Revelation, an apocalyptic vision traditionally ascribed to the apostle John. The New Testament, like the Hebrew Bible, had multiple authors, though Paul of Tarsus is by far the most important and historically consequential among them, with fourteen books of the New Testament attributed to his authorship.
The Qur'an differs from the Hebrew Bible and New Testament in two fundamental ways. First, it is understood by Muslims to be the direct speech of God, revealed to Prophet Muhammad ﷺ through the medium of the archangel Gabriel over the course of twenty-three years, from 610 to Prophet Muhammad’s ﷺ demise in 632. Second, its composition was limited to a comparatively short span of time. The Qur'an often reads from the perspective of God, addressing its readers and listeners with “We” or commanding Prophet Muhammad ﷺ to proclaim specific things with the imperative, “Say” (both can be seen in examples below). In Arabic, the word Qur'an literally means “recitation,” and indeed it is principally through oral recitation and memorization of the text that Muslims around the world have engaged with their sacred scripture. The book was not standardized in the form that we have it today until the reign of the caliph Uthman, roughly twenty years after Prophet Muhammad’s ﷺ demise. It is divided into 114 suras, or chapters: an opening sura, the Fatihah, and 113 suras arranged according to length, with the longest first and the shortest last.
The following explores what the foundational scriptures of these three religions state about some key themes. It is meant only as a general guide to the scriptures themselves; individual Jews, Christians and Muslims may have widely disparate views on any one of these things. Moreover, the following is intended as the barest of summaries, a point of departure for more in-depth comparisons. Its main purpose is to point to common ground – theologically, ethically, historically – between these three religions.
The quotations below use The Cambridge Annotated Study Bible for biblical passages; the Qur'anic passages follow the English translations of Yusuf Ali and Marmaduke Pickthall.
Creation

Judaism, Christianity and Islam share the concept of an all-powerful creator God who fashions the universe and everything in it. The book of Genesis contains two creation stories that form the basis of both Christianity’s and Islam’s own creation narratives. In the first, God creates the universe over the course of six days and rests on the seventh day, which is consecrated as the Sabbath. The second story repeats some material in the first but is principally about God’s creation of humankind in the form of Adam and Eve, their life in the Garden of Eden, and their eventual expulsion for transgressing God’s commands. Christianity adapted the narrative from Genesis while asserting that Jesus had co-existed with God (as part of God) from the origin of the universe. The Qur'an contains many references to the creation story, describing God making the universe over the course of six time periods.
“In the beginning when God created the heavens and the earth, the earth was formless void and darkness covered the face of the deep, while a wind from God swept over the face of the waters. Then God said, ‘Let there be light’; and there was light.” (Genesis 1: 1-3)
“In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things came into being through him, and without him not one thing came into being … And the Word became flesh and lived among us, and we have seen his glory.” (John 1: 1-14)
“To Him is due the primal origin of the heavens and the earth; when He decreeth a matter, He saith to it: ‘Be’; and it is.” (Qur'an 2: 117)
God

Judaism, Christianity and Islam have in common the notion that one God governs the world and all of creation, and is omnipotent, omniscient, and everlasting. In all three religions, God is transcendent, beyond space and time, and yet acts in history and through time. The theologies of Judaism and Islam are closer to each other than either is to Christianity; both hold God to be unified and indivisible. Most, but not all, Christians today uphold that God is a unified entity with three aspects: God the Father, God the Son (Jesus) who is both divine and human, and the Holy Spirit. In Islam, God [Arabic: Allah (سبحانه و تعالى)] is the same as the God of the Jews and Christians. Just as Christians adopted Jewish narratives and teachings for their own use, Muslims have adopted narratives and teachings from both of the monotheisms that preceded Islam.
“Hear, O Israel: The Lord is our God, the Lord alone. You shall love the Lord your God with all your heart, and with all your soul, and with all your might.” (Deuteronomy 6: 4-5)
“There is no God but one. Indeed, even though there may be so-called gods in heaven or on earth … yet for us there is one God, the Father, from whom are all things and for whom we exist, and one Lord, Jesus Christ, through whom are all things and through whom we exist.” (1 Corinthians 8: 4-6)
“And your God is one God; there is no god but He Most Gracious, Most Merciful. Behold! In the creation of the heavens and the earth; in the alternation of the Night and the Day, in the sailing of the ships through the Ocean for the profit of mankind; in the rain which God sends down from the skies, and the life which He gives therewith to an earth that is dead; in the beasts of all kinds that He scatters through the earth; in the change of the winds, and the clouds which they trail like their slaves between the sky and the earth; here indeed are signs for a people that are wise.” (Qur'an 2: 163-164)
Abraham

As the symbolic ancestor of Judaism, Christianity and Islam, Abraham is so central to all three monotheisms that they are often called the “Abrahamic religions.” For the Jews, God entered into a covenant with Abraham, in which Abraham recognized God as the supreme and sole deity while God promised Abraham that his progeny would multiply and extend into countless generations. In the New Testament, Abraham is at the root of the genealogy that culminates in Jesus, who broadens the Abrahamic covenant so that it applies to all of humanity, not just the Jewish people. For Muslims, Abraham is the original monotheist. The Qur'an even calls Islam the “religion of Abraham.” According to Muslim tradition, Abraham’s son Ishmael, through his wife Hagar, becomes the ancestor of the Arabs.
“Now the Lord said to Abram, ‘Go from your country and your kindred and your father’s house to the land that I will show you. I will make of you a great nation, and I will bless you, and make your name great, so that you will be a blessing. I will bless those who bless you, and the one who curses you I will curse; and in you all the families of the earth shall be blessed.” (Genesis 12: 1-3)
“You are the descendants of the prophets and of the covenant that God gave to your ancestors, saying to Abraham, ‘And in your descendants all the families of the earth shall be blessed’.” (Acts 3: 25)
“And remember that Abraham was tried by his Lord with certain commands, which he fulfilled; He said: ‘I will make thee an Imam to the nations’.” (Qur'an 2: 124)
Moses

In all three scriptures, Moses is the supreme lawgiver, the one whom God appointed to bring divine law to the Jewish people. For the Jews, Moses is the national hero who led the captive Israelites out of Egypt and on to Canaan, the land that God promised to Abraham and his descendants. The New Testament commonly depicts Jesus as consummating, and transcending, the Mosaic law, while the Qur'an discusses Moses more often than any other pre-Islamic prophet, including Abraham.
“God called out to him from the bush, ‘Moses, Moses!’ And he said, ‘Here I am.’ … [God] said further, ‘I am the God of your father, the God of Abraham, the God of Isaac, and the God of Jacob’.” (Exodus 3: 4-6)
“If you believed Moses, you would believe me, for he wrote about me.” (John 5: 46)
“We gave Moses the Book, completing Our favour to those who would do right, and explaining all things in detail, and a guide and a mercy, that they might believe in the meeting with their Lord. And this is a Book which We have revealed as a blessing: so follow it and be righteous, that ye may receive mercy.” (Qur'an 6: 154-155)
Death and Resurrection
All three religions provide spiritual guidance for understanding death, the process of dying, and what we can expect after death. In the Hebrew Bible, the realm of the dead is called Sheol, described as a gloomy place but one over which God has ultimate control. The book of Daniel (e.g. 12:2) refers to those who will “awake, some to everlasting life, and some to shame and everlasting contempt.” Christianity and Islam further develop the idea of moral judgment that each individual will encounter after death, and the associated notions of punishment and reward. In the New Testament, the salvific power of Jesus enables each person to overcome the original sin of Adam and live eternally after death. According to the Qur'an, God will resurrect every individual on the Day of Judgment, at which point they will be evaluated based on their deeds in life.
“By the sweat of your face, you shall eat bread until you return to the ground, for out of it you were taken; you are dust, and to dust you shall return.” (Genesis 3:19)
“Praised are you, God, who resurrects the dead.” (Siddur, the Jewish prayer book)
“For as all die in Adam, so all will be made alive in Christ.” (1 Corinthians 15: 22)
“Say: ‘It is God Who gives you life, then gives you death; then He will gather you together for the Day of Judgement about which there is no doubt’: but most men do not understand. To God belongs the dominion of the heavens and the earth, and the Day that the Hour of Judgment is established.” (Qur'an 45: 26-27)
Marriage

Judaism, Christianity and Islam have vastly different rules and traditions concerning marriage. Historically, Jews have tended to marry other Jews. Yet many Jews do not abide by the prohibitions on intermarriage at all. In Christianity, for the most part, there are fewer formal religious stipulations about marriage than in Judaism. As in Judaism, Christian views of marriage range across the entirety of the political and religious spectrum. Muslims tend to marry other Muslims, but marriage with members of other religions is not uncommon. Historically, Islamic law has permitted Muslim men to marry non-Muslim women, but not Muslim women to marry non-Muslim men. The Qur'an allows for polygamy, but this has always been a rarity, since the Qur'an also stipulates that a man must provide financial support for each wife and treat them all with equal respect; in practice, polygamy has largely been confined to the wealthy nobility and royalty. Despite this diversity in approaches to marriage, what Judaism, Christianity, and Islam hold in common is a belief that God created marriage as a divinely sanctioned act.
“Then the Lord God said, ‘It is not good that the Adam should be alone; I will make him a helper as his partner’.” (Genesis 2: 18)
“Some Pharisees came to [Jesus] and to test him they asked, ‘Is it lawful for a man to divorce his wife for any cause’? He answered, ‘Have you not read that the one who made them at the beginning ‘made them male and female’, and said, ‘For this reason a man shall leave his father and mother and be joined to his wife, and the two shall become one flesh’? So they are no longer two, but one flesh. Therefore what God has joined together, let no one separate’.” (Matthew 19: 3-6)
“And among His Signs is this, that He created for you mates from among yourselves, that ye may dwell in tranquility with them, and He has put love and mercy between your hearts.” (Qur'an 30: 21)
Paradise and Eschatology
Judaism, Christianity and Islam each have a notion of “heaven” or “paradise,” variously connoting God’s dwelling-place in the skies, the place where the righteous go after death, or a just, utopian society that will exist on the earth at the end of history. Judaism has traditionally believed that a messiah in the line of King David will arrive to usher in an era of peace on the earth and vanquish all evil. Christians hold an immense diversity of views on paradise and the end of history, but typically believe that Jesus will return to the earth to act as a judge over all humankind. Muslims also commonly believe that Jesus will return to the earth before the end of time alongside a figure called the Mahdi, the “rightly guided one,” although this word does not occur in the Qur'an. Like Judaism and Christianity, Islam envisions paradise as a perfect place, without strife or suffering, for those who act righteously. The Qur'anic verse excerpted here describes paradise as a place of boundless food, the purest water and the cessation of want, in a series of images punctuated by the rhetorical refrain, “Which is it, of the favours of your Lord, that ye deny?” The implication here is: why would you deny any of these things?
"Lo, I will send the prophet Elijah to you before the coming of the awesome, fearful day of the Lord. He shall reconcile fathers with sons and sons with their fathers." (Malachi 3:23)
"Rabbi Pinhas ben Yair said, Zealousness leads to cleanliness (of heart), and cleanliness leads to purity. Purity leads to humility, and humility leads to fear of sin. Fear of sin leads to piety and piety leads to the Holy Spirit. The Holy Spirit leads to resurrection of the dead, and the resurrection of the dead comes at the hands of Elijah, may he be remembered for good." (Mishnah Sotah 9:15)
“Then the angel showed me the river of the water of life, bright as crystal, flowing from the throne of God and of the Lamb through the middle of the street of the city. On either side of the river is the tree of life with its twelve kinds of fruit, producing its fruit each month; and the leaves of the tree are for the healing of the nations. Nothing accursed will be found there any more. But the throne of God and of the Lamb will be in it, and his servants will worship him; they will see his face, and his name will be on their foreheads. And there will be no more night; they need no light of lamp or sun, for the Lord God will be their light, and they will reign forever and ever.” (Revelation 22: 1-5)
“But for him who feareth the standing before his Lord there are two gardens. Which is it, of the favours of your Lord, that ye deny? Of spreading branches. Which is it, of the favours of your Lord, that ye deny? Wherein are two fountains flowing. Which is it, of the favours of your Lord, that ye deny? Wherein is every kind of fruit in pairs. Which is it, of the favours of your Lord, that ye deny? Reclining upon couches lined with silk brocade, the fruit of both the gardens near to hand. Which is it, of the favours of your Lord, that ye deny? Therein are those of modest gaze, whom neither man nor jinni will have touched before them. Which is it, of the favours of your Lord, that ye deny? In beauty like the jacynth and the coral-stone. Which is it, of the favours of your Lord, that ye deny? Is the reward of goodness aught save goodness? Which is it, of the favours of your Lord, that ye deny?” (Qur'an 55: 46-61)
War and Peace
In the ancient world out of which Judaism, Christianity and Islam emerged, war was an everyday fact of life, though something to be avoided when possible. The rules that characterize modern warfare between states did not yet exist, nor did the capacity for mass casualties. All three scriptures contain passages that encourage followers to make war against their enemies, and others that advocate peace and forgiveness. Likewise, the legal traditions of Judaism, Christianity and Islam have, with varying degrees of specificity, rules about the conduct of a “just war.” Still, all three also look forward to an era when social justice will be established on the earth and war will no longer be necessary.
“He shall judge between the nations, and shall arbitrate for many peoples; they shall beat their swords into ploughshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war any more.” (Isaiah 2: 4)
“Do not repay evil for evil, but take thought for what is noble in the sight of all. If it is possible, so far as it depends on you, live peaceably with all. Beloved, never avenge yourselves, but leave room for the wrath of God; for it is written, ‘Vengeance is mine, I will repay, says the Lord’. No, ‘if your enemies are hungry, feed them; if they are thirsty, give them something to drink; for by doing this you will heap burning coals on their heads’. Do not be overcome by evil, but overcome evil with good.” (Romans 12: 17-21)
“Those who avoid the greater crimes and shameful deeds, and, when they are angry even then forgive; Those who harken to their Lord, and establish regular prayer; who conduct their affairs by mutual Consultation; who spend out of what We bestow on them for Sustenance; And those who, when an oppressive wrong is inflicted on them, are not cowed but help and defend themselves. The recompense for an injury is an injury equal thereto in degree: but if a person forgives and makes reconciliation, his reward is due, from God … But indeed if any show patience and forgive, that would truly be an exercise of courageous will and resolution in the conduct of affairs.” (Qur'an 42: 37-43)
Terrorism

Terrorism is a modern concept, defined as violence committed by non-state actors against civilians for the purpose of spreading fear and discord, and it is thus largely anachronistic to talk about “terrorism” before the origins of modern states and modern armies. The Hebrew Bible, New Testament and Qur'an contain passages that enjoin followers to fight, and sometimes even completely destroy, those perceived as God’s enemies, including people that today we could call civilians. What the three scriptures do share is an abhorrence of needless acts of violence. While all three set out guidelines for the conduct of war, all three also explicitly condemn murder. The verse from the Qur'an below forbids the act of murder, except in recompense for a murder or for “spreading mischief in the land,” the latter being an act that some modern Muslim scholars have glossed as equivalent to “terrorism,” and thus an act that the Qur'an roundly outlaws.
“You shall not murder.” (Exodus 20:13)
“We ordained for the Children of Israel that if anyone slew a person – unless it be for murder or for spreading mischief in the land – it would be as he slew the whole people: and if anyone saved a life, it would be as if he saved the life of the whole people.” (Qur'an 5: 32)
The Status of Women and Women’s Rights
Judaism, Christianity and Islam all originated in patriarchal environments, dominated by men politically, economically, and socially. All three scriptures are full of verses that have proven to be vexing for feminists and modern women’s rights advocates. In parts of the Hebrew Bible and the New Testament, women are perceived as a source of evil and social strife. Women are expected to submit themselves to their husbands’ authority. Likewise the Qur'an allows men a great deal of control over their wives.
However, all three scriptures share a sense of men and women being created equal in the sight of God, even if women rarely achieved true equality with men in their daily lives. In the New Testament, women take on positions of authority within the early church, at the same time that it calls on them to veil themselves in church and remain silent. The Qur'an accords rights to women regarding such matters as inheritance and divorce that were truly revolutionary in its own historical context, at the same time that it calls on them to be modest and avert their eyes from males in their midst. The verses selected here suggest an equality between men and women that exists in theory, though only rarely in actual practice.
“So God created humankind in God’s image, in the image of God he created them; male and female God created them.” (Genesis 1: 27)
“For in Christ Jesus you are all children of God through faith. As many of you as were baptized into Christ have clothed yourselves with Christ. There is no longer Jew or Greek, there is no longer slave or free, there is no longer male or female; for all of you are one in Christ Jesus.” (Galatians 3: 26-28)
“Lo! men who surrender unto God, and women who surrender, and men who believe and women who believe, and men who obey and women who obey, and men who speak the truth and women who speak the truth, and men who persevere (in righteousness) and women who persevere, and men who are humble and women who are humble, and men who give alms and women who give alms, and men who fast and women who fast, and men who guard their modesty and women who guard (their modesty), and men who remember God much and women who remember - God hath prepared for them forgiveness and a vast reward.” (Qur'an 33: 35) | <urn:uuid:1b6f83ec-7264-41cb-91ee-9e9dd7fd2df8> | CC-MAIN-2021-21 | https://daiyah.fandom.com/wiki/Qur%27an_Bible_Torah_Comparison | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00175.warc.gz | en | 0.960475 | 5,365 | 3.75 | 4 |
|Three Arab warriors with rifles standing and sitting in the desert during the Arab Revolt 1916-1918. The invading Arabs the Romans faced might have looked much like these soldiers.|
Islam on the March
Battle for the Middle East Part IV
Here we are in Part IV of the titanic Battle for the Middle East.
In 629 AD the Roman Empire was enjoying a much-deserved period of peace after a brutal 26-year war of all wars with the Persian Empire. No one in Constantinople had any idea that a fresh invasion from the southern deserts would come in a matter of months.
Part I - In Part I of this series we saw the first military contact between Romans and Muslim Arabs at the Battle of Mota (Mu'tah) in the Roman province of Palaestina Salutaris. A force of Romans and their Christian Arab allies mauled the invading Muslim army forcing them to return to Medina.
Part II - In Part II we saw the Muslims turn their attention to a weakened Persian Empire. Muslims defeated the Persians in a series of battles. The Muslims marched up the Euphrates River through Persian Mesopotamia finally coming within 100 miles of the Roman frontier at Firaz. Firaz was at the outermost edge of the Persian Empire but it still contained an undefeated Persian garrison. There the Persians joined forces with the local Roman garrison and with Christian Arabs to take on the invaders. They were soundly defeated.
Part III - In Part III we have the Emperor Heraclius organizing the defense of Palaestina Salutaris. A Muslim wide flanking movement of hundreds of miles through waterless deserts to threaten Damascus failed when confronted by Roman armies. The Romans held their own in Syria and had dug in at the Daraa Gap fortifications in eastern Palestine. But the Romans were defeated in southwest Palestine, allowing Muslim forces to fan out as far north as Lydda and Jaffa.
So here we are at about April of 634 and there is a stalemate on the Palestine front.
The Roman army at Daraa has totally blocked the Muslims from moving north. Plus the Muslim column in the Gaza area is not strong enough to make any significant advances north. Protected by their walls Roman cities in Palaestina Salutaris were able to hold out against the Muslims preventing them from moving further north. The Arabs did not want armed Roman garrisons in their rear ready to attack.
The Emperor Heraclius was a battle tested front line general who had personally marched into Persia crushing their empire. He had also traveled over and knew the geography of Syria and Palestine. He organized the defense of Damascus and the Roman troops dug in at the Daraa Gap fortifications east of the Sea of Galilee.
The Emperor now gathered a second large army to drive the Muslims out of Roman territory. The question is why did he not personally command the army in his counter attack against the Muslims?
The Health of Heraclius
It was said that health was the reason Heraclius did not command troops against the Muslims.
At this point Heraclius was passing the threshold of 60 years of age as he confronted the massive Muslim invasions. Even if he had been ten years younger he would have been challenged to hold things together. To command armies in the field at this age with all the rigors involved is nearly unheard of in military history. Consider that Napoleon was just 46 years old when he failed at Waterloo.
The Emperor may have been intermittently unable to function efficiently, while at other moments he could handle decision-making very capably. He appears to have suffered from "dropsy" and mental problems. Less clearly, he may have had Post-Traumatic Stress Disorder (PTSD) from protracted exposure to combat and related strains.
At the end of the Middle East campaign we see the mental issues come forward. He left Syria, returning to his capital, tired and exhausted. Reaching the Bosphorus, he suddenly had an inexplicable aversion to the sea. He even hid in a side room of one of the imperial palaces on the Asian shore, unable to proceed to Constantinople, ignoring the urgent pleas of the city’s representatives. He became paranoid, believed rumors about a conspiracy by his nephew and a bastard son, and ordered their noses and hands to be cut off before sending them into exile.
After a few weeks, his wife Martina and members of the court found a solution. Patriarch Nicephorus, who wrote a Breviarium or Short History, reports that a large number of boats was tied together, as if it were a bridge, to which they added a “wall” of tree branches and leaves, so that the emperor would not have to look at the sea. It worked: the emperor passed the sea on horseback as if he were traveling on land.
Back to Syria. Instead of being in the front lines the Emperor spent his time in the city of Homs some 150 miles away or in Edessa or in Antioch. These were important communications and supply centers. At these cities Heraclius was more easily able to stay in contact with Constantinople and follow events in Anatolia, supervise Roman troops still inside Persia as well as oversee combat to the south.
But let me say at this point there was no real reason for the Emperor himself to be with the army.
The Eastern Empire's military machine had decades of recent combat experience against the Persians and Slavs in the Balkans. Virtually the entire officer corps would have either fought at the side of the Emperor on campaign or been in combat in other theaters of war.
Going into the Battle of Ajnadayn there was no reason to think that well-trained Roman generals and their professional troops could not put down untrained desert invaders.
|Byzantine Cataphract Attempt|
From 400 AD on Eastern Roman Cavalry units would mirror their Persian enemies and would grow to become the mailed fist of the army in combat.
Cataphract armored horsemen were almost universally clad in some form of scale armor that was flexible enough to give the rider and horse a good degree of motion, but strong enough to resist the immense impact of a thunderous charge into infantry formations.
The primary weapon of practically all cataphract forces throughout history was the lance. Lances were roughly four meters in length, with a capped point made of iron, bronze, or even animal bone, and were usually wielded with both hands. Cataphracts would often be equipped with an additional side-arm such as a sword or mace, for use in the melee that often followed a charge.
The historian Procopius said: "They are expert horsemen, and are able without difficulty to direct their bows to either side while riding at full speed, and to shoot an opponent whether in pursuit or in flight. They draw the bowstring along by the forehead about opposite the right ear, thereby charging the arrow with such an impetus as to kill whoever stands in the way, shield and corselet alike
having no power to check its force. Still there are those who take into consideration none of these things, who reverence and worship the ancient times, and give no credit to modern improvements."
In 634 there is no way to measure the size of the Muslim armies invading Palaestina Salutaris. The Muslims divided into three columns. One column marched to Gaza on the coast, and the two other columns worked their way north on the right side of the Jordan River.
Perhaps the lack of water in the desert forced them to move in separate detachments. Also with no system of supply this could have made it easier to live off the land.
There was an additional fourth army of about 3,500 men that invaded Persia.
To round off numbers, the three columns in Palestine might have initially had 10,000 to 15,000 men. When the force that had invaded Persia under Khalid ibn al-Walid failed in its wide flanking attack against Damascus, his thousands of troops fell back to reinforce the other Muslim troops at the Daraa Gap.
So there may have been 15,000 plus Muslim troops in east Palestine and a smaller force of perhaps 3,000(???) near Gaza.
In total there could have been 20,000 Muslim soldiers in the Palestine area under assorted commands.
The Roman Army had perhaps 109,000 men at this point. But those troops were spread out over Asia, Africa and Europe in multiple sub-theaters. Gathering a sizable force in one spot was a major challenge.
Historian Warren Treadgold places the strength of the Roman army at this point at 109,000 men.
But those troops were stretched thin. If troops were taken from one area then that part of the frontier would be weakened in the face of enemy forces and invite invasion on yet another front.
A factor in moving troops was local reluctance to comply. Heraclius was unsuccessful when he ordered that troops be moved from Numidia to assist in the defense of Egypt against the Muslim threat. Egypt lacked a large permanent garrison. The Empire was hard-pressed to find enough troops to reoccupy and monitor the huge areas from Egypt to Anatolia that had been evacuated by Persian armies.
Meanwhile in Syria, on Easter 634 at the Battle of Marj Rahit we saw Roman troops and their Ghassanid Christian Arab allies field about 8,000 men to defeat the Muslims in Syria.
Some miles south at the Daraa Gap fortifications the rather large Muslim force could not dislodge the dug in Roman army. We can assume the Romans at Daraa had at least as many troops there as the Muslims facing them.
So if the Muslims had some 15,000 men around Daraa then the entrenched Romans may have had roughly the same. Add in the thousands of Christian Arab allies just above Daraa and there is a sizable Roman army on hand that has totally blocked the Muslims from marching north.
Rome vs Muslims
The Arabs moved like lightning through the deserts. The rapid movements of the Muslims are easily compared to the Blitzkrieg warfare created by Heinz Guderian in World War II. The desert Arabs had no training and fought wildly, but they also had no big baggage train or camp followers to slow them down as Western armies were slowed.
The Eastern Roman military machine drew upon centuries of tradition, training and organization. The Byzantines had carefully organized administrative services, carts with entrenching tools, mills for grinding corn, supply wagons, an ambulance corps, doctors and more. This caused the army to move more slowly than its desert-based opponents.
Tactical training was diligently carried out and books on the military arts were taught to the Roman officer corps. But the military manuals did not teach the officers how to combat wild, fanatical desert hordes motivated by religious fanaticism.
Battle of Ajnadayn (July 634)
The Muslim "victory" near Gaza at the Battle of Dathin in early 634 was a minor one against a slapped-together force of local Roman garrison troops. The Muslim army was so weak it could not exploit its opportunity.
The Muslims under Amr ibn al Assi raided and probed only a few miles to the north reaching Lydda and Jaffa. They were unable or unwilling to venture into the mountains of Judea and Samaria while Roman troops stood behind the walls of fortified cities.
Seeing this Muslim weakness around Gaza the Emperor, who was in Homs, was busy raising a new army for a major counterattack into southern Palestine. Heraclius was obviously confident the Roman fortifications at Daraa would hold. Otherwise he would not be sending an army so far away. By sending his new army south he resumed the initiative and would force the Muslims on the defensive.
The Emperor was no stranger to bold and aggressive moves. In the Spring of 623 Constantinople itself was under siege by the Persians and Avars. Heraclius left the city in the hands of others. He gathered to himself a corps d'elite of 5,000 men and sailed over 600 miles to the east landing at the Black Sea city of Trebizond. There he met up with an additional Roman army and eventually marched into the heart of Persia crushing their Empire.
The Emperor now used the same bold strategic methods against the Muslims that he had used against the Persians.
A bold plan of attack
Exhausted with mental issues or not, the Emperor recognized opportunity. Heraclius saw that the Muslim forces were divided into two parts: Their main force was sitting in place blocked by the Roman fortifications at Daraa in southwest Syria. A much smaller Arab force was floundering around southwest, coastal Palestine basically looting or doing nothing.
The Emperor gathered to him in Syria a new army. Estimates on the size of the army range from 10,000 to 20,000 men. I will split the difference at 15,000 men which is a normal size for many Byzantine campaigns.
Heraclius planned for his new army to march from Syria to the city of Tiberias on the east side of the Sea of Galilee. From Tiberias they would march to Caesarea on the coast where they would rendezvous with the Roman Navy for re-supply. Then they would march south with the navy following them offshore for support. The navy could land at Jaffa and Gaza as needed to provide additional supplies or troops.
The goal of this operation was to overwhelm the smaller Muslim army in the Beersheeba area and then push south to Aila (Aqaba) on the coast.
From that strong point the Roman army in the south would threaten the lines of communication between Mecca and the main Muslim force up at Daraa.
With a Roman army in front of them at Daraa and behind them at Aila the Muslims would be forced to abandon their position at Daraa and return south to Arabia.
It is rather difficult to march a 15,000 man Roman army from Syria south through Palestine and not attract attention. The Muslim commanders at Daraa got word of the troop movements and recognized at once the danger they were in.
The main Muslim army at Daraa was nearly 200 miles away from their smaller western counterpart. To make matters worse the mountains of Samaria and Moab were controlled by walled towns and cities manned by Byzantine garrisons. Any reinforcements sent to the west had to go far to the south and around the mountains.
With the Romans on the move it was already too late for the Arabs to march south and join with the Beersheeba army via the Aila (Aqaba) route. But if they did not act then they would be overwhelmed and defeated separately.
The Trans-Jordan Mountains form an almost impassable barrier of cliffs. To the north the Romans controlled Jerusalem and other cities. The only other pass that could take the Arabs to the plains of Beersheeba was south of the Dead Sea at Karak at the Moab Mountains. Even that pass was so steep that riders had to dismount their horses and camels and lead their animals over rocks and ravines.
To save Amr ibn al Assi, the Muslims largely disappeared from Daraa and marched day and night to the pass at Karak. Suddenly confronted by a torrent of wild camel-riders, the people of Moab were happy to make peace with the Muslims and let them pass through. The local tribes were doubtless Monophysite Christians with little love for the Greek Orthodox ruling class.
|Like Rommel's Afrika Korps, the Muslim cavalry moved like lightning through the deserts.|
The nimble Bedouins had won the race to the battlefield. Mounted on camels, able to travel day and night with only a crust of bread to eat, they had out-marched the more ponderous Roman army weighed down with all of its civilized paraphernalia. The comparison to Erwin Rommel, the Desert Fox, moving like lightning through World War II North Africa is a good one.
Here is where the military historian pulls out his hair. The great Roman Army and Muslim forces meet at the Battle of Ajnadayn in the July heat of 634 and we have next to zero information on what happened.
We can speculate that between the western Muslim army and the force withdrawn from Daraa the Muslims might have put together a force equal to the Roman army of 15,000 men.
The Romans may have been commanded by the Emperor's brother Theodore. There was also a commander named Vardan who might have been the patrikios (commander) of Emesa. Vardan may have brought fresh reinforcements of Armenian troops that had been with Heraclius in Syria. The army may have also contained local Arab tribal levies.
The Arab army consisted of three separate contingents, with either Khalid or, less likely, Amr, as the overall commander.
With no meaningful information about the battle we can come up with any number of possible scenarios.
The July Heat
Most of the Roman soldiers would have come from the cooler climates of Armenia, Anatolia or even the Balkans. Cavalry or infantry, marching and fighting in the July heat of Palestine wearing armor would have been hard on the most experienced soldiers. The lightly clad Arab forces could have had an advantage.
Any number of battles have been lost when allies failed to deliver. In the Battle of Callinicum some 5,000 allied Roman Arab cavalry holding the right flank simply vanished without firing a shot. Something like this could easily have happened with several different ethnic formations fighting in one Roman army.
The lightning-fast movements of the Muslim cavalry were like nothing the Romans had ever encountered before. Imperial forces were trained to fight traditional, slower-moving enemies like the Persians. Thoughts go back to the German invasion of France in 1940. The Germans were not better soldiers; they were just organized differently and moved at a faster pace. That could have happened here, with fast-moving Muslim cavalry getting behind Roman forces and causing a panic.
The result is what matters and the Romans were completely defeated.
What we do know is that this was not an easy victory for the Muslims. The Arabs suffered heavy casualties, including many deaths among the Companions of Muhammad, several of them members of the early Muslim aristocracy, who fell in the battle and were regarded as martyrs.
The Byzantines suffered a heavy defeat. The survivors were forced to retreat to Damascus or to other walled cities. It is significant that they were able to retreat. That means the retreat may have been more or less orderly and that the Muslims were in no condition to follow them.
The Muslim sources report that one of the two commanders, probably Vardan, fell in the battle, but that Theodore escaped and withdrew north where Heraclius replaced him with other commanders and then sent him to imprisonment in Constantinople.
Heraclius himself withdrew from Emesa to the greater safety of Antioch. His strategic counter-offensive was crushed and the troops available to fight off the invasion were vastly reduced.
It is interesting that the victorious Muslims had no interest in moving up coastal Palestine or attacking the coastal or mountain cities. That tells me there were enough active Roman troops in the area, or behind walls, to worry the Muslim commanders. It may also suggest that the victory cost the Muslims far more troops than Arab historians tell us.
Instead the Muslims retraced their steps sending the bulk of their army back to Daraa in Syria to face the only intact Roman army still in the field.
More to come in Part V.
|Late Roman Empire Cavalry|
The Battle for the Middle East
Part I - Roman Empire vs Islam - First Contact
Part II - A Persian-Roman Army Fights Muslim Invaders
Part III - Muslims Invade Roman Palestine
(livius.org) (books) (Great Arab Conquests-Bagot Glubb) | <urn:uuid:0be7b817-5302-4b27-a237-b27ae4b4b326> | CC-MAIN-2021-21 | http://byzantinemilitary.blogspot.com/2018/05/battle-of-ajnadayn-islam-vs-christianity.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00175.warc.gz | en | 0.975022 | 4,096 | 3.109375 | 3 |
[UPDATE: I corrected the dates in the title blocks of Figures 10 and 11. My thanks to blogger Bob.moe for finding the typos.]
England et al. (2014), "Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus", continues to receive attention, and with it come basic questions for many people about trade winds. NBC News has an article by John Roach with the title "Global Warming Pause? The Answer Is Blowing Into the Wind". And the team from RealClimate have agreed and disagreed with England et al. (2014) in their post "Going with the wind". We've already discussed England et al. (2014) in the post here, and we'll discuss that NBC News article and the RealClimate post in an upcoming one.
For this post, we’re going to concentrate on why the trade winds blow and why they’ve grown stronger in recent years. This is an “introduction to” post. It is not intended to confirm or contradict the findings of England et al. (2014). It is intended to illustrate that the trade winds of the tropical Pacific depend on the sea surface temperatures there and vice versa. It might be considered an add-on to (a reinforcement of) the post An Illustrated Introduction to the Basic Processes that Drive El Niño and La Niña Events.
This post also includes a good number of model-data comparisons. And as we’ve seen before, when illustrating Pacific sea surface temperature data, model-data comparisons never put the models in a good light. Then again, I’m trying to think of any circumstance in which the models performed well. Hmm. I can’t think of any. None. Nada. Zip.
WHY THE TRADE WINDS BLOW
The following is a basic introduction to the trade winds. For it, I’m going to borrow a portion of my book Who Turned on the Heat? for the first portion. (I’ve updated the Figure numbers for this post):
[START OF PARTIAL REPRINT OF WHO TURNED ON THE HEAT?]
Trade winds are the prevailing surface winds in the tropics. They’re called easterlies because they blow primarily from east to west. In the Northern Hemisphere, the trade winds travel from the northeast to the southwest, and they travel from southeast to northwest in the Southern Hemisphere.
The trade winds blow because the surface temperature is warmer near the equator than it is at higher latitudes. Refer to Figure 1 for the annual 2011 zonal-mean sea surface temperatures for the Pacific Ocean.
Warm, moist air rises near the equator. This upward motion draws replacement surface air from the north in the Northern Hemisphere and from the south in the Southern Hemisphere. In other words, the air at the surface is being drawn toward the equator due to the updraft there. In turn, the equatorward surface winds need to be replaced, and that cool, dry air is drawn down from higher altitudes at about 30N and 30S. Upper winds traveling poleward from the equator complete the circuit. That circuit is called a Hadley Cell. See Figure 2. Because the Earth is rotating, the equatorward surface winds are deflected toward the west by the Coriolis force.
We can explain the Hadley Circulation another way, if you prefer. We'll start again near the equator where warm, moist air rises. It travels poleward at an altitude of 10 to 15 kilometers (roughly 32,800 to 49,200 feet) losing heat and moisture along the way. The cooler, drier air then drops back toward the surface in the subtropics at about 30N and 30S. The surface winds then complete the circulation pattern. If the Earth was not rotating, the tropical surface winds would be out of the north in the Northern Hemisphere and out of the south in the Southern Hemisphere. Because the Earth is rotating, however, the tropical surface winds—the trade winds—are deflected toward the west.
The prevailing tropical winds are, therefore, from east to west. They blow across the surface of the tropical Pacific Ocean, dragging the surface waters along with them. There are two surface currents as a result, traveling from east to west, one per hemisphere. They are logically called the North and South Pacific Equatorial Currents. There is a smaller surface current flowing between them that returns some of the water back to the east and it’s called the Equatorial Countercurrent. See Figure 3.
The Equatorial Currents carry the waters across the tropical Pacific. Then they encounter Indonesia, which restricts continued flow to the west. Some of the water is carried through all of the islands to the Indian Ocean by a surface current called the Indonesian Throughflow. As noted above, a little of the water is carried east by the Equatorial Countercurrent. The rest of the water is carried poleward. The overall systems of rotating ocean currents in the Northern and Southern Hemispheres are known as gyres. Gyres exist in all ocean basins. The ones in Figure 4 are called the North Pacific Gyre and the South Pacific Gyre.
The NASA Ocean Motion website is a great resource for entry-level discussions of ocean currents. Refer to their Home and Wind Driven Surface Currents: Equatorial Currents Background web pages. Take a tour; there’s lots of interesting information there.
[END OF PARTIAL REPRINT OF WHO TURNED ON THE HEAT?]
SEA LEVEL PRESSURE DIFFERENCES AND THE SOUTHERN OSCILLATION INDEX
Because the trade winds are blowing across the tropical Pacific from east to west, the sea surface temperatures (not anomalies) in the eastern tropical Pacific are much cooler than they are in the west. The trade winds draw cool water from below the surface of the eastern tropical Pacific in a process called upwelling. Sunlight warms the water as it travels from east to west. The sunlight-warmed water travels almost halfway around the globe before it runs into the land masses of Indonesia and Australia. The warm water stacks up there. If this part of the discussion is new to you, please refer to the post An Illustrated Introduction to the Basic Processes that Drive El Niño and La Niña Events.
Because the water is warm in the western tropical Pacific, a lot of evaporation takes place there and the warm, moist air rises. This creates an area of low sea level pressure in the western tropical Pacific. Because the water is cooler in the eastern tropical Pacific, cool, dry air falls in that region, creating an area of high sea level pressure. The surface winds (the trade winds) blow from the region with high sea level pressure (eastern tropical Pacific) to the area with low sea level pressure (western tropical Pacific). See Figure 5.
The following is a reprint of a portion of Chapter 4.3 ENSO Indices from Who Turned on the Heat? In fact, Figure 5 (above) is borrowed from that chapter.
[START OF PARTIAL REPRINT FROM WHO TURNED ON THE HEAT?]
The Southern Oscillation Index, or SOI, is a way to portray the atmospheric component of El Niño and La Niña events. It represents the difference in Sea Level Pressure between Darwin, Australia and the South Pacific island of Tahiti. The term Southern Oscillation was coined by Sir Gilbert Walker in the 1920s. Yes, that’s the same Walker as in “Walker Circulation” or “Walker cells”. He was the first researcher to note that the surface air pressures in Tahiti and Darwin, Australia opposed one another; that is, when sea level pressure in Tahiti was high, the sea level pressure in Darwin was normally low, and vice versa.
Let’s discuss the trade winds again for a moment. The trade winds are blowing from east to west when the surface air pressure in the east (Tahiti) is higher than in the west (Darwin). See Figure 5 (above). When the pressure difference between Tahiti and Darwin grows, the trade winds are stronger, and that’s an indication of a La Niña event.
[Addition for this post: Conversely, when the sea level pressures drop in Tahiti and rise in Darwin, that’s an indication an El Niño is taking place.]
The Australian Bureau of Meteorology (BOM) is one of the suppliers of Southern Oscillation Index data. They use a traditional method of presentation, which they explain here. Basically—maybe not so basically—the data is calculated by subtracting the sea level pressure in Darwin from the sea level pressure in Tahiti. Then they determine the anomalies using the method described in Chapter 2.8. Here’s where the not-so-basically part comes in. The anomalies for that month are divided by the standard deviation, to standardize or normalize the data. After that, they multiply the data by 10. Whew! For those interested, there’s a somewhat-simple-to-understand explanation of standard deviation here.
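As a rough illustration, the BOM-style calculation described above can be sketched in a few lines of Python. This is a minimal sketch, not the BOM's actual code: the function name, the array layout, and the use of a per-calendar-month climatology with a sample standard deviation are my assumptions.

```python
import numpy as np

def troup_soi(tahiti, darwin, base):
    """Troup-style SOI: standardized (Tahiti - Darwin) MSLP difference, times 10.

    tahiti, darwin: monthly mean sea level pressures (hPa), shape (years, 12).
    base: slice selecting the climatology years (rows).
    """
    diff = tahiti - darwin                        # monthly pressure difference
    clim_mean = diff[base].mean(axis=0)           # per-calendar-month mean
    clim_std = diff[base].std(axis=0, ddof=1)     # per-calendar-month std dev
    return 10.0 * (diff - clim_mean) / clim_std
```

A large positive value (Tahiti pressure well above its norm relative to Darwin) goes with strengthened trade winds and La Niña conditions; a large negative value goes with El Niño.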
Now that that’s out of the way, let’s compare the Southern Oscillation index data to the NINO3.4 sea surface temperature anomalies we’ve been using as an ENSO index. See Figure 6. Because the Southern Oscillation index data is multiplied by 10 as part of its calculation, we’ll need to scale the NINO3.4 sea surface temperature anomalies. A factor of 10 works. The Southern Oscillation index data is noisy, so the two datasets were smoothed with 13-month running-average filters. It’s very easy to see the inverse relationship between the two datasets. Equatorial Pacific sea surface temperature warm and cool during the evolution and decay of an El Niño, and Southern Oscillation index data dips and rebounds. The opposite holds true during a La Niña. There are some minor differences. For example, the Southern Oscillation index data shows the 1982/83 El Niño was stronger than the one in 1997/98, while the NINO3.4 sea surface temperature anomalies show them the other way around. Notice also, there doesn’t appear to be a La Niña event after the 1982/83 El Niño using the Southern Oscillation index, but one is present in the NINO3.4 data. Other than those and some others minor differences, the two datasets do mimic one another, but inversely.
According to the BOM, La Niña events are sustained positive Southern Oscillation Index values in excess of +8 and El Niño events are sustained negative values in excess of -8. In that discussion at the BOM website, however, the Southern Oscillation Index data is being presented as a 30-day running average, so you can't apply those values to Figure 6, which has been smoothed with a 13-month running-average filter.
[END OF PARTIAL REPRINT FROM WHO TURNED ON THE HEAT?]
As discussed above, the trade winds blowing across the tropical Pacific are part of Walker Circulation. The trade winds cause the temperature difference between the eastern and western tropical Pacific. But the temperature difference between the eastern and western tropical Pacific also cause the trade winds to blow. The temperature difference and the strength of the trade winds are interdependent…with positive feedback. That is, the temperature difference and the strength of the trade winds reinforce one another. That positive feedback was introduced by Bjerknes.
(And what fundamental feedback in the tropical Pacific are climate models still unable to simulate correctly? That's right: Bjerknes feedback. See Bellenger et al. (2013).)
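To see how a mutual reinforcement like Bjerknes feedback behaves, here is a toy iteration. The coefficients are purely illustrative choices of my own, not a real ENSO model: a trade-wind anomaly strengthens the east-west temperature gradient, and a larger gradient drives stronger winds, faster than damping can erase either one.

```python
# Toy positive-feedback loop (illustrative numbers only, not a real ENSO model).
damp = 0.8           # fraction of each anomaly surviving a time step
wind_to_grad = 0.5   # how strongly winds sharpen the SST gradient
grad_to_wind = 0.5   # how strongly the gradient drives the winds
u, g = 1.0, 0.0      # start with a small wind anomaly and a flat gradient
for _ in range(20):
    g = damp * g + wind_to_grad * u   # winds pile warm water west, sharpening g
    u = damp * u + grad_to_wind * g   # a sharper gradient drives stronger winds
# Both anomalies grow every step: the feedback outruns the damping.
```

With weaker coupling (say 0.1 instead of 0.5) the same loop decays back to zero, which is why getting the strength of this feedback right matters so much for the models.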
THE CHICKEN OR THE EGG
NOAA’s Bill Kessler describes the chicken-and-egg relationship between the trade winds and the sea surface temperatures of the tropical Pacific in his ENSO FAQ webpage:
This sets up the coupled ocean-atmosphere interaction in the tropical Pacific in which the winds determine the water temperature but the water temperature also determines the winds, in a chicken and egg situation. In this system, we can start a description at any point in the cycle. For example, we observe that there is cool water in the east and warm in the west (see the figure “Mean SST and winds in the tropical Pacific”). The winds blow towards the warm water, since that heats the atmosphere and makes the air rise, then other air flows in to fill the gap. (These are the trade winds, that the Spanish used to sail from their colonies in South America to the Philippines). Because of the force of the trades, sea level at Indonesia is about 1/2 meter higher than at Peru. At the same time the trade winds act on the ocean as well. The westward winds along the equator push the warm water (heated by the sun) off to the west, drawing up the thermocline and exposing the cooler water underneath in the east. This upwelling cools the eastern surface water, and we have returned to the starting place of the description.
A QUICK LOOK AT AN ENSO INDEX REVEALS A CHANGE IN THE DOMINANCE OF EL NIÑO EVENTS
Figure 7 presents a commonly used index for the strength, frequency and duration of El Niño and La Niña events. It is a graph of the sea surface temperature anomalies of the NINO3.4 region. I’ve also highlighted NOAA’s official El Niño (red) and La Niña (blue) events, based on their Oceanic NINO Index (but the data in the graph are not from the Oceanic NINO Index). And as we can see, there were a series of strong and long El Niño events from 1982 through 1998: the 1982/83, the 1986/87/88 and the 1997/98 El Niños. Although the series of El Niños in the first half of the 1990s are now considered independent events, Trenberth and Hoar proclaimed them as one long event in their 1996 paper The 1990-1995 El Niño-Southern Oscillation Event: Longest on record. The El Niño events since 1998 have not been as strong, and the frequency of La Niña events has increased.
Because trade winds are weak during El Niños and strong during La Niñas, the change in the frequencies of El Niño and La Niña events indicate the trade wind should have increased during that time…and they have, and they also caused…
THE GROWING TEMPERATURE DIFFERENCE BETWEEN THE EASTERN AND WESTERN TROPICAL PACIFIC
Before we present the sea surface temperature anomaly differences between the eastern and western tropical Pacific, let’s first look at the sea surface temperature anomalies for the entire tropical Pacific. See Figure 8. The data are the satellite-enhanced sea surface temperature data (Reynolds OI.v2) for the coordinates of 20S-20N, 120E-80W. The models are represented by the average of all the simulations of sea surface temperature anomalies for that region from the climate models prepared for the IPCC’s 5th Assessment Report. See the post here for the reasons we use the average of the model outputs; a.k.a. the model mean. (The base years for anomalies are the NOAA standard for the Reynolds OI.v2 data: 1971-2000.) The sea surface temperature data for the tropical Pacific show that the surface of the tropical Pacific has not warmed over the past 32+ years—the full term of the Reynolds OI.v2 sea surface temperature data. On the other hand, climate models indicate that, if the surface temperatures of the tropical Pacific were warmed by manmade greenhouse gases, they should have warmed more than 0.6 deg C (or about 1.1 deg F). The models have a big problem with how they simulate surface temperatures of the tropical Pacific, but we already knew that. (See Figure 2 from the post CMIP5 Model-Data Comparison: Satellite-Era Sea Surface Temperature Anomalies.)
Figure 9 presents the sea surface temperature anomalies of the eastern and western tropical Pacific. The coordinates used are listed in the title block, but, basically, I’ve divided the data for the tropical Pacific at the dateline. Although the sea surface temperatures of the tropical Pacific as a whole have not warmed in 32+ years (see Figure 8), the sea surface temperature anomalies of the western tropical Pacific have warmed and, in the eastern tropical Pacific, they’ve cooled.
WHAT CAUSED THE SEA SURFACE TEMPERATURES OF THE WESTERN TROPICAL PACIFIC TO WARM?
We would have to conclude that stronger trade winds (associated with the changes in the frequencies of El Niño and La Niña events) contributed to the warming of the western tropical Pacific based on our earlier discussions. But the data for that region also indicate there was an upward shift in the sea surface temperatures of the western tropical Pacific in 1995. See Figure 10. Based on the period-average temperature anomalies before and after 1995 (the blue lines), the sea surface temperatures of the western tropical Pacific shifted upwards about 0.3 deg C in 1995.
If we now overlay the East Pacific data onto that color-coded graph, we can see that the upward shift occurred during the transition from the 1994/95 El Niño to the 1995/96 La Niña. See Figure 11.
As you’ll recall from past discussions, there was a similarly timed warming of the ocean heat content of the tropical Pacific about 1995. All of the warm water for the 1997/98 “super” El Niño was created during the 1995/96 La Niña. See Figure 22 of the post Is Ocean Heat Content Data All It’s Stacked Up to Be? Refer also to the post La Niña – The Underappreciated Portion of ENSO.
WHAT CAUSED THE SEA SURFACE TEMPERATURES OF THE EASTERN TROPICAL PACIFIC TO COOL?
Again, based on our earlier discussions, we would have to think that stronger trade winds (associated with the changes in the frequencies of El Niño and La Niña events) contributed to the cooling of the eastern tropical Pacific, through an increase in the upwelling of cool waters from below the surface of the eastern equatorial Pacific.
The eastern boundary currents of the North and South Pacific feed water to the eastern tropical Pacific. In the North Pacific, that current is the California Current, and in the South Pacific, it’s the Humboldt Current. For Figure 12, I used the extratropical coordinates of 20N-45N, 135W-105W for the California Current and 50S-20S, 90W-70W for the Humboldt Current. They both show that the waters feeding the eastern tropical Pacific have also cooled over the past 32+ years. (I’ve also included the warming estimated by the climate models, for those interested.) Then again, the El Niño- and La Niña-related processes taking place in the eastern tropical Pacific have a strong impact on the sea surface temperatures of the eastern boundary currents of the North and South Pacific (except in the climate models, apparently, which still cannot simulate basic El Niño and La Niña processes and their aftereffects).
THE TEMPERATURE ANOMALY DIFFERENCE BETWEEN THE EASTERN AND WESTERN TROPICAL PACIFIC
Figure 13 presents the difference between the Eastern Tropical Pacific (20S-20N, 180-80W) and the Western Tropical Pacific (20S-20N, 120E-180) sea surface temperature anomalies, with the Eastern data subtracted from the Western data. It’s very obvious that the temperature difference has increased as a result of the stronger trade winds—which are caused by the changes in the frequencies of El Niño and La Niña events.
For those interested, the temperature (not anomaly) difference is presented in the graph here.
Did the climate models used by the IPCC for their 5th Assessment Report capture this ENSO-related change in the temperature difference between the eastern and western tropical Pacific? Of course not. See Figure 14.
Climate models have no value at determining why the temperature difference between the eastern and western tropical Pacific has grown, because the models still cannot simulate the basic processes that cause El Niño and La Niña events.
We’ve presented the temperature difference between the eastern and western tropical Pacific, so let’s take a look at the trade wind indices. Because the temperature difference and the trade winds are coupled, the curve of the trade wind data should be similar to that of the temperature difference.
THE NOAA TRADE WIND INDICES
The NOAA Monthly Atmospheric & SST Indices webpage present numerous El Niño/La Niña-related indices, including trade winds for the equatorial Pacific. The trade wind indices are for the equatorial Pacific (5S-5N) at 850mb (which is an altitude of about 5000 feet, about 1500 meters). These indices are based on a reanalysis, which is a computer model that uses data as inputs, so we have to keep that in mind. NOAA provides indices for the:
- Western Equatorial Pacific (5S-5N, 135E-180): data here.
- Central Equatorial Pacific (5S-5N, 170W-140W): data here.
- Eastern Equatorial Pacific (5S-5N, 135W-120W): data here.
Figure 15 presents the indices in their “raw” monthly form, not as anomalies. Also included are trend lines. We can see that the trade winds (according to the reanalysis) have grown stronger in the western and eastern tropical Pacific, but have weakened in the eastern equatorial Pacific.
In Figure 16, I’ve smoothed the data with 12-month running-average filters to help show the El Niño- and La Niña-related variations. To further help with that, I’ve also included NINO3.4 sea surface temperature anomalies (the ENSO index shown in Figure 7). During El Niños, the trade winds slow (weaken), and during La Niñas, the trade winds increase (strengthen).
Unfortunately, NOAA does not provide a trade wind index for the entire equatorial Pacific, so we’ll have to create one. That’s relatively easy. We’ll use a weighted-average of the individual indices, based on the longitudes they cover: West (47%), Central (37%) and East (16%). Figure 17 presents the result. The trade winds have increased for the equatorial Pacific, just as we would expect, with the transition from a period when El Niño events dominated.
TRADE WINDS VERSUS THE TEMPERATURE ANOMALY DIFFERENCE BETWEEN THE EAST AND WEST TROPICAL PACIFIC
Animation 1 compares the weighted-average of the trade wind indices to the temperature anomaly difference between the eastern and western tropical Pacific.
The curves are very similar. Now consider that:
- The trade wind indices are for the equatorial Pacific and they’re based on a reanalysis, not data, and,
- The temperature difference is for the tropical Pacific and I simply selected the dateline for the dividing point. (See the update after Figure 18.)
How similar are the curves? For Figure 18, I’ve converted the data to annual averages, made the two datasets anomalies with 1982 to 2013 as the base years, and then standardized each dataset by dividing it by its standard deviation. The correlation coefficient is 0.93, and as a reference, 1.0 means they are perfectly correlated.
[UPDATE: In anticipation of some “what if” questions: If we change the dividing point for the eastern and western tropical Pacific to 165E for the sea surface temperature data, we can increase the correlation with the weighted trade wind index to 0.95. Additionally, if we then confine the sea surface temperature data to the equatorial Pacific (like the trade wind indices), the curve changes slightly, but the correlation is the same at 0.95. However, the intent of this exercise was not to try to find the best correlation; the intent was to show an interrelationship between the trade winds and the sea surface temperatures of the tropical Pacific. And I believe we’ve done that.]
And for those interested, I’ve added the annual BOM Southern Oscillation Index (SOI) data to the comparison in Figure 19, following the same procedure to normalize the SOI data. Obviously, the strengths of the trade winds and the temperature difference between the eastern and western tropical Pacific and the sea level pressure difference between the Darwin Australia and Tahiti (both located
in the tropical South Pacific off the equator in the Southern Hemisphere) are all interdependent.
Hopefully, this introduction to trade winds will help persons understand why those winds occur in the tropical Pacific…and why they vary. And hopefully it will help provide a background for those who are interested in learning the basic processes that drive El Niño and La Niña events. Additionally, when someone uses the phrase “coupled ocean-atmosphere process”, hopefully you’ll think of this post as an example.
Once again, the climate models used by the IPCC for their 5th Assessment Report are shown to have no basis in reality and, in this instance, provide no help in determining why the trade winds have changed recently and how they might change in the future. | <urn:uuid:a21165ba-99be-4d05-8dae-004d7ad25ff5> | CC-MAIN-2021-21 | https://wattsupwiththat.com/2014/02/23/el-nino-and-la-nina-basics-introduction-to-the-pacific-trade-winds/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00056.warc.gz | en | 0.912691 | 5,354 | 2.796875 | 3 |
In Praise of Amateurs
Despite the specialization of scientific research, amateurs still have an important role to play.
During the scientific revolution of the 17th century, scientists were largely men of private means who pursued their interest in natural philosophy for their own edification. Only in the past century or two has it become possible to make a living from investigating the workings of nature. Modern science was, in other words, built on the work of amateurs. Today, science is an increasingly specialized and compartmentalized subject, the domain of experts who know more and more about less and less. Perhaps surprisingly, however, amateurs – even those without private means – are still important.
A recent poll carried out at a meeting of the American Association for the Advancement of Science by astronomer Dr Richard Fienberg found that, in addition to his field of astronomy, amateurs are actively involved in such fields as acoustics, horticulture, ornithology, meteorology, hydrology and palaeontology. Far from being crackpots, amateur scientists are often in close touch with professionals, some of whom rely heavily on their co-operation.
Admittedly, some fields are more open to amateurs than others. Anything that requires expensive equipment is clearly a no-go area. And some kinds of research can be dangerous; most amateur chemists, jokes Dr Fienberg, are either locked up or have blown themselves to bits. But amateurs can make valuable contributions in fields from rocketry to palaeontology and the rise of the Internet has made it easier than before to collect data and distribute results.
Exactly which field of study has benefited most from the contributions of amateurs is a matter of some dispute. Dr Fienberg makes a strong case for astronomy. There is, he points out, a long tradition of collaboration between amateur and professional sky watchers. Numerous comets, asteroids and even the planet Uranus were discovered by amateurs. Today, in addition to comet and asteroid spotting, amateurs continue to do valuable work observing the brightness of variable stars and detecting novae ('new' stars) in the Milky Way and supernovae in other galaxies. Amateur observers are helpful, says Dr Fienberg, because there are so many of them (they far outnumber professionals) and because they are distributed all over the world. This makes special kinds of observations possible: if several observers around the world accurately record the time when a star is eclipsed by an asteroid, for example, it is possible to derive useful information about the asteroid's shape.
Another field in which amateurs have traditionally played an important role is palaeontology. Adrian Hunt, a palaeontologist at Mesa Technical College in New Mexico, insists that his is the field in which amateurs have made the biggest contribution. Despite the development of high-tech equipment, he says, the best sensors for finding fossils are human eyes – lots of them.
Finding volunteers to look for fossils is not difficult, he says, because of the near universal interest in anything to do with dinosaurs. As well as helping with this research, volunteers learn about science, a process he calls ‘recreational education’.
Rick Bonney of the Cornell Laboratory of Ornithology in Ithaca, New York, contends that amateurs have contributed the most in his field. There are, he notes, thought to be as many as 60 million birdwatchers in America alone. Given their huge numbers and the wide geographical coverage they provide, Mr Bonney has enlisted thousands of amateurs in a number of research projects. Over the past few years their observations have uncovered previously unknown trends and cycles in bird migrations and revealed declines in the breeding populations of several species of migratory birds, prompting a habitat conservation programme.
Despite the successes and whatever the field of study, collaboration between amateurs and professionals is not without its difficulties. Not everyone, for example, is happy with the term 'amateur'. Mr Bonney has coined the term 'citizen scientist' because he felt that other words, such as 'volunteer', sounded disparaging. A more serious problem is the question of how professionals can best acknowledge the contributions made by amateurs. Dr Fienberg says that some amateur astronomers are happy to provide their observations but grumble about not being reimbursed for out-of-pocket expenses. Others feel let down when their observations are used in scientific papers, but they are not listed as co-authors. Dr Hunt says some amateur palaeontologists are disappointed when told that they cannot take finds home with them.
These are legitimate concerns but none seems insurmountable. Provided amateurs and professionals agree the terms on which they will work together beforehand, there is no reason why co-operation between the two groups should not flourish. Last year Dr S. Carlson, founder of the Society for Amateur Scientists won an award worth $290,000 for his work in promoting such co-operation. He says that one of the main benefits of the prize is the endorsement it has given to the contributions of amateur scientists, which has done much to silence critics among those professionals who believe science should remain their exclusive preserve.
At the moment, says Dr Carlson, the society is involved in several schemes including an innovative rocket-design project and the setting up of a network of observers who will search for evidence of a link between low-frequency radiation and earthquakes. The amateurs, he says, provide enthusiasm and talent, while the professionals provide guidance 'so that anything they do discover will be taken seriously'. Having laid the foundations of science, amateurs will have much to contribute to its ever-expanding edifice.
Complete the summary below. Choose ONE OR TWO WORDS from the passage for each answer.
Prior to the 19th century, professional (1)………………..did not exist and scientific research was largely carried out by amateurs. However, while (2)…………………………today is mostly the domain of professionals, a recent US survey highlighted the fact that amateurs play an important role in at least seven (3)………………….and indeed many professionals are reliant on their (4)……………………..In areas such as astronomy, amateurs can be invaluable when making specific (5)……………………..on a global basis. Similarly in the area of palaeontology their involvement is invaluable and helpers are easy to recruit because of the popularity of (6)…………………….Amateur birdwatchers also play an active role and their work has led to the establishment of a (7)………………….Occasionally the term ‘amateur’ has been the source of disagreement and alternative names have been suggested but generally speaking, as long as the professional scientists (8)………………..the work of the non-professionals, the two groups can work productively together.
Reading Passage 1 contains a number of opinions provided by four different scientists. Match each opinion (Questions 9-13) with the scientists A-D. NB You may use any of the scientists A-D more than once.
Name of scientists
A Dr Fienberg
B Adrian Hunt
C Rick Bonney
D Dr Carlson
9. Amateur involvement can also be an instructive pastime.
10. Amateur scientists are prone to accidents.
11. Science does not belong to professional scientists alone.
12. In certain areas of my work, people are a more valuable resource than technology.
13. It is important to give amateurs a name which reflects the value of their work.
READING THE SCREEN
Are the electronic media exacerbating illiteracy and making our children stupid? On the contrary, says Colin McCabe, they have the potential to make us truly literate.
The debate surrounding literacy is one of the most charged in education. On the one hand there is an army of people convinced that traditional skills of reading and writing are declining. On the other, a host of progressives protest that literacy is much more complicated than a simple technical mastery of reading and writing. This second position is supported by most of the relevant academic work over the past 20 years. These studies argue that literacy can only be understood in its social and technical context. In Renaissance England, for example, many more people could read than could write, and within reading there was a distinction between those who could read print and those who could manage the more difficult task of reading manuscript. An understanding of these earlier periods helps us understand today’s ‘crisis in literacy’ debate.
There does seem to be evidence that there has been an overall decline in some aspects of reading and writing – you only need to compare the tabloid newspapers of today with those of 50 years ago to see a clear decrease in vocabulary and simplification of syntax. But the picture is not uniform and doesn’t readily demonstrate the simple distinction between literate and illiterate which had been considered adequate since the middle of the 19th century.
While the ability to read and write a certain amount is as crucial as it has ever been in industrial societies, it is doubtful whether a fully extended grasp of either skill is as necessary as it was 30 or 40 years ago. While print retains much of its authority as a source of topical information, television has increasingly usurped this role. The ability to write fluent letters has been undermined by the telephone, and research suggests that for many people the only use for writing, outside formal education, is the compilation of shopping lists.
The decision of some car manufacturers to issue their instructions to mechanics as a video pack rather than as a handbook might be taken to spell the end of any automatic link between industrialisation and literacy. On the other hand, it is also the case that ever-increasing numbers of people make their living out of writing, which is better rewarded than ever before. Schools are generally seen as institutions where the book rules – film, television and recorded sound have almost no place; but it is not clear that this opposition is appropriate. While you may not need to read and write to watch television, you certainly need to be able to read and write in order to make programmes.
Those who work in the new media are anything but illiterate. The traditional oppositions between old and new media are inadequate for understanding the world which a young child now encounters. The computer has re-established a central place for the written word on the screen, which used to be entirely devoted to the image. There is even anecdotal evidence that children are mastering reading and writing in order to get on to the Internet. There is no reason why the new and old media cannot be integrated in schools to provide the skills to become economically productive and politically enfranchised.
Nevertheless, there is a crisis in literacy and it would be foolish to ignore it. To understand that literacy may be declining because it is less central to some aspects of everyday life is not the same as acquiescing in this state of affairs. The production of school work with the new technologies could be a significant stimulus to literacy. How should these new technologies be introduced into the schools? It isn't enough to call for computers, camcorders and edit suites in every classroom; unless they are properly integrated into the educational culture, they will stand unused. Evidence suggests that this is the fate of most information technology used in the classroom. Similarly, although media studies are now part of the national curriculum, and more and more students are now clamouring to take these courses, teachers remain uncertain about both methods and aims in this area.
This is not the fault of the teachers. The entertainment and information industries must be drawn into a debate with the educational institutions to determine how best to blend these new technologies into the classroom.
Many people in our era are drawn to the pessimistic view that the new media are destroying old skills and eroding critical judgement. It may be true that past generations were more literate but – taking the pre-19th century meaning of the term – this was true of only a small section of the population. The word literacy is a 19th-century coinage to describe the divorce of reading and writing from a full knowledge of literature. The education reforms of the 19th century produced reading and writing as skills separable from full participation in the cultural heritage.
The new media now point not only to a futuristic cyber-economy, they also make our cultural past available to the whole nation. Most children’s access to these treasures is initially through television. It is doubtful whether our literary heritage has ever been available to or sought out by more than about 5 per cent of the population; it has certainly not been available to more than 10 per cent. But the new media joined to the old, through the public service tradition of British broadcasting, now makes our literary tradition available to all.
Choose the appropriate letters A-D and write them in boxes 14-17 on your answer sheet.
14. When discussing the debate on literacy in education, the writer notes that
A children cannot read and write as well as they used to.
B academic work has improved over the last 20 years.
C there is evidence that literacy is related to external factors.
D there are opposing arguments that are equally convincing.
15. In the 4th paragraph, the writer’s main point is that
A the printed word is both gaining and losing power.
B all inventions bring disadvantages as well as benefits.
C those who work in manual jobs no longer need to read.
D the media offers the best careers for those who like writing.
16. According to the writer, the main problem that schools face today is
A how best to teach the skills of reading and writing.
B how best to incorporate technology into classroom teaching.
C finding the means to purchase technological equipment.
D managing the widely differing levels of literacy amongst pupils.
17. At the end of the article, the writer is suggesting that
A literature and culture cannot be divorced.
B the term ‘literacy’ has not been very useful.
C 10 per cent of the population never read literature.
D our exposure to cultural information is likely to increase.
Do the following statements agree with the views of the writer in Reading Passage 2? In boxes 18-23 on your answer sheet write
YES if the statement agrees with the views of the writer
NO if the statement contradicts the views of the writer
NOT GIVEN if it is impossible to say what the writer thinks about this
18. It is not as easy to analyse literacy levels as it used to be.
19. Our literacy skills need to be as highly developed as they were in the past.
20. Illiteracy is on the increase.
21. Professional writers earn relatively more than they used to.
22. A good literacy level is important for those who work in television.
23. Computers are having a negative impact on literacy in schools.
Complete the sentences below with words taken from Reading Passage 2. Use NO MORE THAN THREE WORDS for each answer.
In Renaissance England, the best readers were those able to read (24)…………………
The writer uses the example of (25)……………………to illustrate the general fall in certain areas of literacy.
It has been shown that after leaving school, the only things that a lot of people write are (26)……………….
The Revolutionary Bridges of Robert Maillart
Swiss engineer Robert Maillart built some of the greatest bridges of the 20th century. His designs elegantly solved a basic engineering problem: how to support enormous weights using a slender arch.
A Just as railway bridges were the great structural symbols of the 19th century, highway bridges became the engineering emblems of the 20th century. The invention of the automobile created an irresistible demand for paved roads and vehicular bridges throughout the developed world. The type of bridge needed for cars and trucks, however, is fundamentally different from that needed for locomotives. Most highway bridges carry lighter loads than railway bridges do, and their roadways can be sharply curved or steeply sloping. To meet these needs, many turn-of-the-century bridge designers began working with a new building material: reinforced concrete, which has steel bars embedded in it. And the master of this new material was Swiss structural engineer, Robert Maillart.
B Early in his career, Maillart developed a unique method for designing bridges, buildings and other concrete structures. He rejected the complex mathematical analysis of loads and stresses that was being enthusiastically adopted by most of his contemporaries. At the same time, he also eschewed the decorative approach taken by many bridge builders of his time. He resisted imitating architectural styles and adding design elements solely for ornamentation. Maillart's method was a form of creative intuition. He had a knack for conceiving new shapes to solve classic engineering problems. And because he worked in a highly competitive field, one of his goals was economy – he won design and construction contracts because his structures were reasonably priced, often less costly than all his rivals' proposals.
C Maillart’s first important bridge was built in the small Swiss town of Zuoz. The local officials had initially wanted a steel bridge to span the 30-metre wide Inn River, but Maillart argued that he could build a more elegant bridge made of reinforced concrete for about the same cost. His crucial innovation was incorporating the bridge’s arch and roadway into a form called the hollow-box arch, which would substantially reduce the bridge’s expense by minimising the amount of concrete needed. In a conventional arch bridge the weight of the roadway is transferred by columns to the arch, which must be relatively thick. In Maillart’s design, though, the roadway and arch were connected by three vertical walls, forming two hollow boxes running under the roadway (see diagram). The big advantage of this design was that because the arch would not have to bear the load alone, it could be much thinner – as little as one-third as thick as the arch in the conventional bridge.
D His first masterpiece, however, was the 1905 Tavanasa Bridge over the Rhine river in the Swiss Alps. In this design, Maillart removed the parts of the vertical walls which were not essential because they carried no load. This produced a slender, lighter-looking form, which perfectly met the bridge’s structural requirements. But the Tavanasa Bridge gained little favourable publicity in Switzerland; on the contrary, it aroused strong aesthetic objections from public officials who were more comfortable with old-fashioned stone-faced bridges. Maillart, who had founded his own construction firm in 1902, was unable to win any more bridge projects, so he shifted his focus to designing buildings, water tanks and other structures made of reinforced concrete and did not resume his work on concrete bridges until the early 1920s.
E His most important breakthrough during this period was the development of the deck-stiffened arch, the first example of which was the Flienglibach Bridge, built in 1923. An arch bridge is somewhat like an inverted cable. A cable curves downward when a weight is hung from it; an arch bridge curves upward to support the roadway, and the compression in the arch balances the dead load of the traffic. For aesthetic reasons, Maillart wanted a thinner arch and his solution was to connect the arch to the roadway with transverse walls. In this way, Maillart justified making the arch as thin as he could reasonably build it. His analysis accurately predicted the behaviour of the bridge but the leading authorities of Swiss engineering would argue against his methods for the next quarter of a century.
F Over the next 10 years, Maillart concentrated on refining the visual appearance of the deck-stiffened arch. His best-known structure is the Salginatobel Bridge, completed in 1930. He won the competition for the contract because his design was the least expensive of the 19 submitted – the bridge and road were built for only 700,000 Swiss francs, equivalent to some $3.5 million today. Salginatobel was also Maillart’s longest span, at 90 metres and it had the most dramatic setting of all his structures, vaulting 80 metres above the ravine of the Salgina brook. In 1991 it became the first concrete bridge to be designated an international historic landmark.
G Before his death in 1940, Maillart completed other remarkable bridges and continued to refine his designs. However, architects often recognised the high quality of Maillart’s structures before his fellow engineers did and in 1947 the architectural section of the Museum of Modern Art in New York City devoted a major exhibition entirely to his works. In contrast, very few American structural engineers at that time had even heard of Maillart. In the following years, however, engineers realised that Maillart’s bridges were more than just aesthetically pleasing – they were technically unsurpassed. Maillart’s hollow-box arch became the dominant design form for medium- and long-span concrete bridges in the US. In Switzerland, professors finally began to teach Maillart’s ideas, which then influenced a new generation of designers.
Reading Passage 3 has seven paragraphs A-G. From the list of headings below choose the most suitable heading for each paragraph.
List of headings
i The long-term impact
ii A celebrated achievement
iii Early brilliance passes unrecognised
iv Outdated methods retain popularity
v The basis of a new design is born
vi Frustration at never getting the design right
vii Further refinements meet persistent objections
viii Different in all respects
ix Bridge-makers look elsewhere
x Transport developments spark a major change
27. Paragraph A
28. Paragraph B
29. Paragraph C
30. Paragraph D
31. Paragraph E
32. Paragraph F
33. Paragraph G
Complete the labels on the diagrams below using ONE OR TWO WORDS from the reading passage. Write your answers in boxes 34-36 on your answer sheet.
Complete each of the following statements (Questions 37-40) with the best ending (A-G) from the box below.
37. Maillart designed the hollow-box arch in order to
38. Following the construction of the Tavanasa Bridge, Maillart failed to
39. The transverse walls of the Flienglibach Bridge allowed Maillart to
40. Of all his bridges, the Salginatobel enabled Maillart to
A prove that local people were wrong.
B find work in Switzerland.
C win more building commissions.
D reduce the amount of raw material required.
E recognise his technical skills.
F capitalise on the spectacular terrain.
G improve the appearance of his bridges.
- Research article
- Open Access
Exploring knowledge and attitudes toward non-communicable diseases among village health teams in Eastern Uganda: a cross-sectional study
BMC Public Health volume 17, Article number: 947 (2017)
Community health workers are essential personnel in resource-limited settings. In Uganda, they are organized into Village Health Teams (VHTs) and are focused on infectious diseases and maternal-child health; however, their skills could potentially be utilized in national efforts to reduce the growing burden of non-communicable diseases (NCDs). We sought to assess the knowledge of, and attitudes toward NCDs and NCD care among VHTs in Uganda as a step toward identifying their potential role in community NCD prevention and management.
We administered a knowledge, attitudes and practices questionnaire to 68 VHT members from Iganga and Mayuge districts in Eastern Uganda. In addition, we conducted four focus group discussions with 33 VHT members. Discussions focused on NCD knowledge and facilitators of and barriers to incorporating NCD prevention and care into their role. A thematic qualitative analysis was conducted to identify salient themes in the data.
VHT members possessed some knowledge and awareness of NCDs but identified a lack of knowledge about NCDs in the communities they served. They were enthusiastic about incorporating NCD care into their role and thought that they could serve as effective conduits of knowledge about NCDs to their communities if empowered through NCD education, the availability of proper reporting and referral tools, and visible collaborations with medical personnel. The lack of financial remuneration for their role did not emerge as a major barrier to providing NCD services.
Ugandan VHTs saw themselves as having the potential to play an important role in improving community awareness of NCDs as well as monitoring and referral of community members for NCD-related health issues. In order to accomplish this, they anticipated requiring context-specific and culturally adapted training as well as strong partnerships with facility-based medical personnel. A lack of financial incentivization was not identified to be a major barrier to such role expansion. Developing a role for VHTs in NCD prevention and management should be a key consideration as local and national NCD initiatives are developed.
The rising prevalence of non-communicable diseases (NCDs) and associated mortality in low- and middle-income countries (LMICs) is well established [1, 2]. Given the limited health and economic resources in these settings, effective, scalable strategies for addressing NCDs are urgently needed [3,4,5,6].
Uganda is an example of an LMIC experiencing a growing burden of NCDs. The first nationally representative study of NCDs and their associated risk factors, completed in 2014 using the WHO STEPwise approach (STEPS), revealed that 25.8% of Ugandan men and 22.9% of women had hypertension; 9.5% of men and 19.5% of women were overweight (BMI ≥ 25 kg/m2); 4.6% of participants were obese (BMI ≥ 30 kg/m2); 3.3% had raised fasting glucose including diabetes; 6.7% had raised total cholesterol levels; and 11% were current smokers [7,8,9]. In addition to cardiovascular disease, diabetes, chronic lung disease, and cancer, the Uganda Ministry of Health (MOH) considers sickle cell disease, injury/disability, gender-based violence, mental health, substance use, oral health, and palliative care to be other NCD priority areas. However, a nationwide needs assessment of health facilities’ readiness to deliver NCD care revealed large gaps in human resource readiness to treat NCDs.
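The BMI cutoffs quoted from the STEPS survey can be expressed as a small helper. This is an illustrative sketch using the WHO thresholds named above, not code from the survey; the function names are ours, and the bands here are mutually exclusive, whereas the survey’s overweight figure (BMI ≥ 25 kg/m2) includes obese participants.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def classify_bmi(value: float) -> str:
    # WHO cutoffs cited above; note these bands are non-overlapping,
    # unlike the survey's "overweight = BMI >= 25" figure.
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    if value >= 18.5:
        return "normal"
    return "underweight"
```

For example, `classify_bmi(bmi(85, 1.65))` returns `"obese"`, since 85 kg at 1.65 m gives a BMI just over 31.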
In Uganda and other LMICs, Community Health Workers (CHWs) are increasingly being mobilized to address chronic communicable diseases [12, 13] and NCDs through task-shifting to fill the human resource gap for these health services [14, 15]. CHWs in Uganda are organized into Village Health Teams (VHTs), which comprise the first tier of the referral hierarchy in the public health sector. VHTs are volunteers recommended by their communities; with basic health training lasting 5-7 days [16, 17], they serve as the initial point of contact for healthcare services in their communities. While VHTs are heavily involved in community mobilization, disease prevention, and health promotion for communicable diseases and maternal and child health, there is currently no NCD component to their role.
We sought to determine whether incorporating NCD-related activities into the VHT role may be possible in Uganda by assessing the knowledge, perceptions, and attitudes of VHT members toward NCDs. Specifically, we aimed to identify perceived facilitators of and barriers to NCD-related work and to identify potential roles for VHTs in NCD prevention and care in Uganda.
Study design and location
We conducted a cross-sectional mixed methods study of VHT members within the Iganga-Mayuge Health Demographic Surveillance Site (IMHDSS) in Eastern Uganda (June-August 2015). IMHDSS, a designated site for community-based research founded by Makerere University, has a population of approximately 80,000 people across 65 villages within Iganga and Mayuge districts. The IMHDSS is largely rural, but includes peri-urban areas around Iganga and Mayuge town centers.
We conducted a questionnaire and focus group discussions (FGDs) with VHT members within IMHDSS. Six field assistants, fluent in English and Lusoga (the local language), administered questionnaires, facilitated FGDs, and obtained written informed consent from participants. The field assistants, who had conducted field work in IMHDSS for an average of 6 years, received 2 weeks of training for this study. All study tools were translated into Lusoga and back-translated into English to ensure no loss of meaning during translation. The questionnaire and consent forms were pre-tested with 6 VHT members and revised based on feedback received.
During the study period, 81 eligible VHT members were working in IMHDSS; we randomly selected 68 to answer the questionnaire. The 68 randomly selected members were representative of the larger group in terms of gender and village of operation. The interviewer-administered questionnaire included socio-demographic characteristics and 30 NCD-related knowledge, attitudes, and practices (KAP) questions. The KAP questions were drawn from the 2014 Uganda STEPS survey and from a validated instrument previously used in Mongolia [9, 19]. Responses to knowledge questions were generally yes/no/don’t know or Likert scales (see Table 3 for examples). Where knowledge was tested, “don’t know” and missing responses were included with incorrect answers.
Given that no published literature reported the proportion of VHTs expected to have existing knowledge of NCDs and their risk factors, the sample size was calculated assuming that 50% of VHTs (with ± 0.05 error) would have existing knowledge of NCDs, at a significance level of 0.05.
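The stated assumptions (p = 0.50, ± 0.05 error, α = 0.05) can be turned into a concrete calculation. The sketch below applies Cochran’s formula and then a finite population correction for the 81 eligible VHT members; the correction step is our assumption, since the paper does not state how the final figure of 68 was reached.

```python
import math

def cochran_n(p: float = 0.5, e: float = 0.05, z: float = 1.96) -> float:
    """Cochran's sample size for estimating a proportion (infinite population)."""
    return (z ** 2) * p * (1 - p) / e ** 2

def fpc(n0: float, population: int) -> float:
    """Finite population correction for a small sampling frame."""
    return n0 * population / (n0 + population - 1)

n0 = cochran_n()                       # ~384.16 under the stated assumptions
n = math.ceil(fpc(n0, population=81))  # rounds up to 68 for the 81 eligible VHTs
```

Under these assumptions the corrected sample size matches the 68 VHT members actually surveyed.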
Focus group discussions
We invited 36 VHTs to participate in FGDs; 33 (92%) agreed to participate and were divided among four FGDs. Of the 33 FGD participants, 26 (79%) also completed the KAP questionnaire. FGD participants were purposively selected based on their sex and the district in which they operated. According to a 2015 national VHT assessment in Uganda, communities respond differently to male and female VHT members; thus, we conducted same-sex FGDs. Each of the groups included six to eleven participants. One FGD with each sex was held in each district. Field assistants utilized a nine-question interview guide to lead FGDs (see example questions in Table 4). Information collected through the FGDs sought to complement the questionnaire data by eliciting VHT members’ perceptions of NCDs as an important health issue and their potential roles in tackling the problem of NCDs in their communities. Each FGD lasted 45-60 minutes and was audio-recorded.
The study protocol, data collection tools, and consent forms were reviewed and approved by the Yale University Human Subjects Committee, the Higher Degrees Research and Ethics Committee at Makerere University School of Public Health, and the Uganda National Council of Science and Technology.
Data management and analysis
Questionnaire data were double-entered and checked for consistency. R software (version 3.1.2, R Foundation for Statistical Computing, 2014) was used to calculate descriptive statistics for study variables.
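Double entry is typically reconciled by comparing the two entry files field by field and re-checking any disagreements against the paper forms. A minimal illustration of that comparison follows; the record structure and function name are ours, not from the study.

```python
def double_entry_discrepancies(entry_a: dict, entry_b: dict) -> list:
    """Return (record_id, field, value_a, value_b) tuples wherever the two
    independent entries of the same questionnaires disagree."""
    issues = []
    for record_id, record in entry_a.items():
        other = entry_b.get(record_id, {})
        for field, value_a in record.items():
            value_b = other.get(field)
            if value_a != value_b:
                issues.append((record_id, field, value_a, value_b))
    return issues

# Hypothetical example: a transposed age between the first and second entry
first_pass = {"vht_01": {"age": 43, "sex": "F"}}
second_pass = {"vht_01": {"age": 34, "sex": "F"}}
```

Here `double_entry_discrepancies(first_pass, second_pass)` flags only the age field, which would then be verified against the paper questionnaire.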
Focus group discussions
FGDs were transcribed and translated from Lusoga to English by the field assistants. Transcripts were checked against the audio recordings and cleaned to remove identifying information. After an initial reading of the transcripts and field notes, codes were developed based on the main interview questions, prior literature, and emergent concepts from the data. Two investigators (TTO and NLH) independently reviewed one transcript and developed a coding structure, which was discussed and clarified, before an initial coding scheme was agreed upon. The coding scheme was refined iteratively as each transcript was reviewed; changes to the original coding scheme were applied to all transcripts. Each transcript was coded independently by both investigators, who met to reach consensus.
We conducted a thematic analysis where individual codes were read in aggregate and a written summary created. We identified nine main and 44 sub-codes, which were merged into four themes: 1) VHTs understanding of NCDs; 2) VHTs role in preventing/treating community NCDs; 3) facilitators of their role; and 4) barriers to their role. The analysis attempted to achieve equal and fair representation of the participant’s opinions. We selected representative quotes to illustrate study findings and retained colloquial language.
As shown in Table 1, participants ranged from 28 to 66 (mean 43.6) years of age. Approximately two-thirds were female and had received secondary school education. Participants had worked as VHTs for an average of 6.4 years and spent 19 hours per week doing VHT work.
Nearly all participants (94.1%) knew that NCDs are not transmissible, and 82.4% agreed/strongly agreed that NCDs are common in Uganda (Table 2). The majority of participants claimed to know ‘a little’ about high blood pressure (70.6%), heart disease (61.8%), stroke (52.9%), and type II diabetes mellitus (63.2%), and nearly 90% thought that CVD is becoming more common in Uganda. In addition, 77.9% responded that diabetes is caused by high blood sugar levels, and over half reported that diabetes can cause complications. Thirty-two participants (47.1%) thought diabetes was preventable.
All participants thought smoking affected one’s health and was harmful to the lungs. Approximately 80% thought smoking was harmful to the heart and reported talking to community members about the harms of smoking. Similar numbers of participants reported having advised community members about the harms of excessive alcohol use.
Focus group discussions
The socio-demographic characteristics of FGD participants were similar to those of participants who completed the questionnaire (Table 1).
Participants’ responses to FGD questions were categorized into four main themes, which are described below; illustrative questions related to each theme are in Table 3.
VHT understanding of NCDs
Asked about their understanding of NCDs, participants either gave examples of specific diseases or spoke to mode of transmission. Diabetes, high blood pressure, heart disease, and cancer were frequently mentioned as examples of NCDs. Some less frequently suggested were ulcers, anemia, and asthma. Participants were confident that NCDs are not transmitted between individuals through environmental or physical contact.
“These are diseases that cannot be transmitted from one person to another. For example, if you share the same taxi with someone with NCD, it can’t be transmitted.” FGD2, Participant 1
As discussions progressed, it became evident that many FGD participants had more nuanced knowledge of these diseases; for example, some participants spoke to lifestyle risk factors associated with NCDs, particularly poor diet, or susceptibility based on family history.
“Some people are not even aware that such diseases exist. You could meet someone who puts a lot of sugar in a small cup of tea, like adding 8 teaspoons in a very small cup of tea. Yet, this person would be at a high risk of getting diabetes.” FGD1, Participant 7
Importantly, they were aware that NCDs were chronic conditions, often symptomatic only after a long period of latency, which they identified as a possible contributor to the lack of awareness about NCDs in their communities.
“Since NCDs are diagnosed after a very long time, they tend to affect more because it is only when one sees the signs that one goes to the hospital and it would be too late to prevent.” FGD4, Participant 3
Participants reported seeing people with NCDs while carrying out their VHT work. However, they reported little or no community awareness of NCDs and no knowledge of the causes, signs, and symptoms. Participants attributed this lack of awareness to a lack of knowledge and a lack of education about NCDs being directed at community members.
VHT role in preventing/treating NCDs in communities
Participants unanimously agreed that NCDs are very important health issues needing to be urgently addressed. They regarded themselves as health “connectors” who linked their communities to health services and care and as conduits of knowledge to their communities, provided that they receive training on NCD issues.
“As VHT, if we can get enough knowledge on NCDs, we can return to the communities and teach our people about NCDs.” FGD1, Participant 5
Even without any formal training on NCDs, some participants expressed that they already take actions to address NCDs by counseling community members to go for regular check-ups or referring them to health centers.
“When doing my VHT work, if I find someone with an NCD, I refer them to the hospital for tests.” FGD1, Participant 6
However, the importance of a working partnership and positive relationship with medical personnel was consistently raised. Participants expressed the need for medical personnel to initiate conversations about NCDs by coming to the communities. In doing so, they would foster a safe space to address NCD needs rather than having community members travel to health centers. Thereafter, they felt they could work as the ‘go-betweens’, facilitating continued conversation and transferring information between their communities and medical personnel.
“If medical personnel could organize workshops in communities to inform them about NCD, it will help in dealing with NCD.” FGD4, Participant 3
Facilitators to a VHT role in preventing/treating NCDs in communities
Participants identified NCD education as the foremost tool they need to possess in order to address NCDs in their communities. Other structural changes they recommended were the availability of screening services and endorsement and collaboration of medical personnel with VHTs. As described above, they emphasized the need for medical personnel to take up active roles through community outreach activities. Importantly, they also felt that visible partnerships between medical personnel and themselves would boost VHT’s credibility in the community and promote their work as VHTs.
“If medical personnel can come to our villages and inform us, it will ease our work as VHT. At least, they (community) will know that it was the personnel who have taught the VHT about NCD.” FGD3, Participant 3
According to participants, VHT’s role in NCD prevention and care would be facilitated primarily through education, screening services, proper referral and reporting tools, and medical personnel involvement. Participants noted that uniforms would help validate their positions in the eyes of their communities. Monetary support was deemed less essential. Table 4 provides further illustrative quotes highlighting such facilitators.
Barriers to a VHT role in preventing NCDs in communities
The major barriers participants reported were the lack of formal VHT education on NCDs, poor healthcare infrastructure, community poverty, discouraging attitudes from medical providers toward community members, and lack of assistance and support for VHTs from medical personnel (Table 5).
“Lack of equipment like counseling cards also affects our work as VHTs. For instance, use of pictorial teaching materials in educating community members about NCDs will help to reinforce the knowledge/information the VHT is giving.” FGD2, Participant 8
VHT members identified the interconnectedness of these barriers. For instance, respondents stated that when they refer community members to the health centers for NCD-related issues, those members encounter negative attitudes from medical personnel, or are not provided with testing or medications for their condition. Such members return to their communities reluctant to seek further care, either from VHTs or at health centers.
“People go to the hospital and want to test for diseases but are not being rendered these services. There is poor management in health units which affects our work at the end of the day. When we refer other people, they refer to the failure of their (community) members to get the services they needed at the health units.” FGD1, Participant 10
Our findings reveal that Ugandan VHT members possess some knowledge and awareness of NCDs and associated risk factors but identify a lack of NCD knowledge in their communities. Participants saw a potential role for themselves as conduits of knowledge to their communities about NCDs, on the condition that they were empowered with knowledge through training and support from medical personnel. VHT members were also able to articulate potential facilitators of, and barriers to, incorporating NCD care into their existing roles.
VHT understanding of NCDs
Participants understood that NCDs were not transmissible and spoke of risk behaviors responsible for NCDs, although their knowledge of disease-specific risk factors and characteristics was less well developed. For example, while almost 70% of participants described knowing ‘a little’ about diabetes, less than half of those surveyed were aware that diabetes is preventable. We hypothesize that VHT members acquired their rudimentary knowledge through exposure to the healthcare system, including occasional training workshops, community members with NCDs, and other NCD-related research activities occurring in IMHDSS. Many were confident that they were already encountering community members with NCDs while performing their VHT work. Given the local epidemiologic data on the NCDs they identified, such as hypertension, diabetes, and asthma, which are becoming widespread among their communities [7,8,9], we expect that this perception was accurate. These community members are receiving diagnoses at health facilities and through sporadic community-based screening campaigns. Indeed, self-reported knowledge of NCDs tended to be in line with prevalence estimates reported by the Ugandan STEPS survey [7, 9]; more VHT members reported knowledge of high blood pressure than of diabetes, reflecting the higher prevalence of hypertension in their communities. Future research should seek to gain a more comprehensive understanding of VHT practices regarding identifying persons with NCDs and referrals to health facilities.
While they themselves reported being aware of NCDs, VHT members linked the lack of NCD awareness among members of their communities to the latent clinical nature of NCDs and the lack of availability of screening services at community health centers. A systematic review of prior research on access to care for conditions including malaria, pneumonia, obstetric and gynecological disorders, malnutrition, and HIV/AIDS in Uganda has shown that lack of awareness and knowledge about health conditions, along with perceived poor quality of health services and a perceived poor attitude of health workers, are demand-side barriers to utilization of health services. Similar findings may be true for NCDs and warrant exploration. Such lack of awareness within communities, if present for NCDs, may inhibit the development of the people’s voice, of health advocates demanding systemic change to improve services. The VHT members in this study appeared interested in advocating for the improvement of NCD services on behalf of their communities, though they expressed concerns that their voices would not be heard by facility-based health professionals.
VHTs role in preventing/treating NCDs in communities
Several interventions have shown the benefits of using CHW-medical personnel partnerships to promote NCD prevention and management in communities. In American Samoa, an intervention that employed nurse-community health teams in diabetes care more than doubled the odds of reducing HbA1c levels by at least 0.5% among the intervention group compared with those receiving usual care. A multi-LMIC study, which involved medical partnership via training and supervision of CHWs, improved CHWs’ ability to assess cardiovascular risk in their communities. This in turn increased disease detection and diagnosis.
CHWs have also served in successful roles for NCD care and prevention as providers of direct services to clients, monitors of clients’ care, peer educators for newer CHWs, and as administrators overseeing reporting and documenting of clients’ care [21,22,23]. For example, data from a study in Iran showed that having CHWs engage in diabetes prevention and control lowered participants’ fasting blood glucose. These CHWs were able to conduct training sessions for high-risk individuals on adopting healthy lifestyles and diets. Medical personnel also visited local communities to screen and treat for diabetes. By working together, medical personnel and CHWs were able to keep high-risk individuals in care. In the present study, VHTs noted that they already engage in some of these activities, such as referral of clients, outside of their formal VHT roles. They also advocated for similar VHT-clinician collaborations to improve NCD detection and care.
Indeed, VHT members interviewed thought their present roles in helping community members access health knowledge, care, and services for maternal-child health and communicable diseases could be replicated for NCD prevention. They were enthusiastic about the expansion of their role into this area on the condition that they receive adequate training, and that screening services and proper referral and reporting systems were available to their communities. This finding is supported by a previous study in South Africa, where CHWs were involved in ongoing care of community members with NCDs. That study recommended context-relevant and organized education for CHWs and the provision of resources to build CHW capacity in NCDs as significant facilitators for CHWs delivering NCD services.
Frequently interwoven into the VHT members’ responses about facilitators of and barriers to VHT roles in NCD screening and care was the strong desire for medical personnel to collaborate with and support VHTs to promote and validate their work in the communities. They believed it would be beneficial for medical personnel to initiate conversations about NCDs in their communities and create safe spaces for addressing NCDs by visibly being involved in preventive efforts within the communities. The Community Health System Strengthening Model, used in other sub-Saharan African settings, has been successful at bringing community representatives, CHWs, and health facility staff together to form teams that address challenges such as these raised by the VHTs. Future research might study this model in the context of enhancing community-facility engagement around NCD management.
Barriers to VHT roles in preventing NCDs in communities
VHT members spoke of a cycle of neglect in which several barriers, such as the lack of services and medical personnel, would hinder preventive efforts at the community level. These were considered more critical barriers to the expansion of the VHT role than the lack of financial incentivization for VHT work. A lack of financial support has been identified as a major factor limiting CHW motivation in prior research [26, 27], though our findings suggest otherwise and are consistent with another recent study among CHWs in Uganda, which described community relationships and trust as being the most important motivators for CHW work. Participants in the present study described how they would be motivated by greater availability of equipment, uniforms, and transportation, while payment for their work was very rarely mentioned. In LMICs, primary care systems are faced with unavailability of basic diagnostic instruments and services for NCD screening and detection, poor access to medicines for treating NCDs, a shortage of healthcare professionals to manage NCDs, and poor reporting and referral systems, all of which VHTs believed needed to be addressed in order for them to fulfil a role in NCD prevention and treatment. They were perceptive in noting that, beyond their own access to equipment and diagnostic instruments, if the health facilities to which they were referring clients did not have the resources to provide effective and timely treatment, then their role would be undermined and the community would become mistrustful of their recommendations to seek additional care. Innovative financing models, such as public-private partnerships and taxation of alcohol, tobacco, or sugar-sweetened beverages, have been proposed to support the costs of NCD program expansion.
VHTs already play critical roles in the delivery of primary health services and have broad geographic coverage and opportunities for individual interactions with high-risk individuals within their communities. Task-shifting or role expansion has inherent challenges, such as overburdening health workers. However, our study participants did not express concerns regarding expanding their roles to include NCDs. Given their expressed willingness and motivation to fulfil a role in NCD prevention and care, VHTs appear to represent an ideal human capacity to implement primary NCD interventions in Uganda. Investment in further training and NCD education for VHTs and their role in delivering NCD prevention education and NCD care should be considered as national and regional NCD frameworks are developed.
Limitations and strengths
While our study elucidated the potential role of VHTs in NCD prevention and treatment in Uganda, there were some limitations to our approach that should be acknowledged. Our KAP questionnaire, while successful in measuring general NCD knowledge and VHT attitudes related to NCD risk factors, was less robust for measuring disease-specific knowledge, knowledge of diet and physical activity-related risk factors for NCDs and current practices for prevention or treatment. Additionally, since no locally validated questionnaires were available for use in this setting, we relied on questionnaires validated for use in other settings. Generalizability may also be an issue; the VHT program in IMHDSS is currently better developed than in other regions, so the applicability of our findings to other regions in Uganda might be limited. Finally, VHT members in IMHDSS may have greater exposure to NCD-related issues than VHT members elsewhere in Uganda by virtue of other research projects such as “A people-centered approach through Self Management And Reciprocal learning for the prevention and management of Type 2 Diabetes” (SMART2D) that was in its early stages at the time of the current study.
A major strength of this study, however, was the use of a mixed methods approach to explore existing knowledge and perceptions about NCDs among the Ugandan VHTs. This allowed us to elicit complementary information via the questionnaire and FGDs about understanding of NCDs and perceptions of a potential VHT role in preventing them.
In this study we have shown that Ugandan VHT members already possess some knowledge and understanding of NCDs, especially around the mode of transmission, diet-related risk factors, and the late manifestation of NCD symptoms, although gaps remain. VHT members acknowledged the presence of NCDs in their communities and identified their potential role as connecting their communities to knowledge and screening services. They displayed a willingness and motivation to engage in preventive efforts directed at NCDs, and identified VHT and community education on NCDs and a strong presence of medical personnel in their communities as important facilitators of their ability to fulfil this role. Specifically, participants expressed a desire to develop strong alliances with facility-based healthcare providers; to feel legitimized by, and integrated within, the health system at large. These results should be used to inform future research and policy that seeks to develop the role of VHTs in community-based NCD prevention efforts in Uganda.
Abbreviations
CHWs: Community Health Workers
FGDs: Focus Group Discussions
IMHDSS: Iganga-Mayuge Health and Demographic Surveillance Site
KAP: Knowledge, Attitudes and Perception
LMICs: Low and Middle Income Countries
MOH: Ministry of Health
SMART2D: Self Management Approach and Reciprocal Learning for Type 2 Diabetes
STEPS: STEPwise Approach to Surveillance
VHTs: Village Health Teams
WHO: World Health Organization
Acknowledgements
This research was supported by the Uganda Initiative for Integrated Management of Non-Communicable Diseases (UINCD) and the Makerere University-Yale University Collaboration (MUYU). The authors would like to thank Edward Galiwango, Judith Kaija, and Paul Emojong (IMHDSS staff) and Hakeem Kirunda, Zakia Nangobi, Ziyada Namwase, Mutalya Ivan, Hassan Gowa and Peter Awaka (Field Assistants).
Funding
This research was funded by the Thomas Rubin and Nina Russell Global Health Fund Fellowship from Yale School of Public Health [TO] and Yale Equity Research and Innovation Center [JIS].
Availability of data and materials
The data that support the findings of this study are available on request from the corresponding author [JIS]. The data are not publicly available as they contain information that could compromise research participant privacy/consent.
TTO carried out this research study as a thesis project for Masters in Public Health degree from Yale School of Public Health. JIS is co-director of the Uganda Initiative for Integrated Management of Non-Communicable Diseases, a multi-sectoral research consortium based in Uganda that aims to improve the integration of NCDs into health service delivery.
Ethics approval and consent to participate
This study received ethical approval from the Human Subjects Committee at Yale University, Connecticut, USA; the Higher Degrees, Research and Ethics Committee at Makerere School of Public Health, Kampala, Uganda; and Uganda National Council for Science and Technology, Kampala, Uganda. We obtained written informed consent from participants.
Consent for publication
Competing interests
The authors declare that they have no competing interests.
Cite this article
Ojo, T.T., Hawley, N.L., Desai, M.M. et al. Exploring knowledge and attitudes toward non-communicable diseases among village health teams in Eastern Uganda: a cross-sectional study. BMC Public Health 17, 947 (2017). https://doi.org/10.1186/s12889-017-4954-8
- community health workers
- Village health teams
- Non-communicable diseases
- Community engagement
- Health systems
E X T O X N E T
Extension Toxicology Network
Toxicology Information Briefs
A Pesticide Information Project of Cooperative Extension Offices of Cornell University, Oregon State University, the University of Idaho, and the University of California at Davis and the Institute for Environmental Toxicology, Michigan State University. Major support and funding was provided by the USDA/Extension Service/National Agricultural Pesticide Impact Assessment Program.
EXTOXNET primary files maintained and archived at Oregon State University
WHAT IS CHOLINESTERASE?
Cholinesterase (ko-li-nes-ter-ace) is one of many important enzymes needed for the proper functioning of the nervous systems of humans, other vertebrates, and insects. Certain chemical classes of pesticides, such as organophosphates (OPs) and carbamates (CMs) work against undesirable bugs by interfering with, or 'inhibiting' cholinesterase. While the effects of cholinesterase inhibiting products are intended for insect pests, these chemicals can also be poisonous, or toxic, to humans in some situations.
Human exposure to cholinesterase inhibiting chemicals can result from inhalation, ingestion, or eye or skin contact during the manufacture, mixing, or applications of these pesticides.
HOW DOES IT WORK?
Electrical switching centers, called 'synapses,' are found throughout the nervous systems of humans, other vertebrates, and insects. Muscles, glands, and nerve fibers called 'neurons' are stimulated or inhibited by the constant firing of signals across these synapses. Stimulating signals are usually carried by a chemical called 'acetylcholine' (a-see-till-ko-leen). Stimulating signals are discontinued by a specific type of cholinesterase enzyme, acetylcholinesterase, which breaks down the acetylcholine. These important chemical reactions are usually going on all the time at a very fast rate, with acetylcholine causing stimulation and acetylcholinesterase ending the signal. If cholinesterase-affecting insecticides are present in the synapses, however, this situation is thrown out of balance. The presence of cholinesterase inhibiting chemicals prevents the breakdown of acetylcholine. Acetylcholine can then build up, causing a "jam" in the nervous system. Thus, when a person receives too great an exposure to cholinesterase inhibiting compounds, the body is unable to break down the acetylcholine.
Let us look at a typical synapse in the body's nervous system, in which a muscle is being directed by a nerve to move. An electrical signal, or nerve impulse, is conducted by acetylcholine across the junction between the nerve and the muscle (the synapse), stimulating the muscle to move. Normally, after the appropriate response is accomplished, acetylcholinesterase is released, which breaks down the acetylcholine, terminating the stimulation of the muscle. The enzyme accomplishes this by chemically breaking the compound into other compounds and removing them from the nerve junction. If acetylcholinesterase is unable to break down or remove acetylcholine, the muscle can continue to move uncontrollably.
Electrical impulses can fire away continuously unless the number of messages being sent through the synapse is limited by the action of cholinesterase. Repeated and unchecked firing of electrical signals can cause uncontrolled, rapid twitching of some muscles, paralyzed breathing, convulsions, and in extreme cases, death. This is summarized below.
WHICH PESTICIDES CAN INHIBIT CHOLINESTERASE?
Any pesticide that can bind, or inhibit, cholinesterase, making it unable to break down acetylcholine, is called a "cholinesterase inhibitor," or "anticholinesterase agent." The two main classes of cholinesterase inhibiting pesticides are the organophosphates (OPs) and the carbamates (CMs). Some newer chemicals, such as the chlorinated derivatives of nicotine, can also affect the cholinesterase enzyme.
Organophosphate insecticides include some of the most toxic pesticides. They can enter the human body through skin absorption, inhalation and ingestion. They can affect cholinesterase activity in both red blood cells and in blood plasma, and can act directly, or in combination with other enzymes, on cholinesterase in the body. The following list includes some of the most commonly used OPs:
Carbamates, like organophosphates, vary widely in toxicity and work by inhibiting plasma cholinesterase. Some examples of carbamates are listed below:
WHAT HAPPENS AS A RESULT OF OVEREXPOSURE TO CHOLINESTERASE INHIBITING PESTICIDES?
Overexposure to organophosphate and carbamate insecticides can result in cholinesterase inhibition. These pesticides combine with acetylcholinesterase at nerve endings in the brain and nervous system, and with other types of cholinesterase found in the blood. This allows acetylcholine to build up, while protective levels of the cholinesterase enzyme decrease. The more cholinesterase levels decrease, the more likely symptoms of poisoning from cholinesterase inhibiting pesticides are to show.
Signs and symptoms of cholinesterase inhibition from exposure to CMs or OPs include the following:
Unfortunately, some of the above symptoms can be confused with influenza (flu), heat prostration, alcohol intoxication, exhaustion, hypoglycemia (low blood sugar), asthma, gastroenteritis, pneumonia, and brain hemorrhage. This can cause problems if the symptoms of lowered cholinesterase levels are either ignored or misdiagnosed as something more or less harmful than they really are.
The types and severity of cholinesterase inhibition symptoms depend on:
(a) the toxicity of the pesticide.
(b) the amount of pesticide involved in the exposure.
(c) the route of exposure.
(d) the duration of exposure.
Although the signs of cholinesterase inhibition are similar for both carbamate and organophosphate poisoning, blood cholinesterase returns to safe levels much more quickly after exposure to CMs than after OP exposure. Depending on the degree of exposure, cholinesterase levels may return to pre-exposure levels after a period ranging from several hours to several days for carbamate exposure, and from a few days to several weeks for organophosphates.
When symptoms of decreased cholinesterase levels first appear, it is impossible to tell whether a poisoning will be mild or severe. In many instances, when the skin is contaminated, symptoms can quickly go from mild to severe even though the area is washed. Certain chemicals can continue to be absorbed through the skin in spite of cleaning efforts.
If someone experiences any of these symptoms, especially a combination of four or more of these symptoms during pesticide handling or through other sources of exposure, they should immediately remove themselves from possible further exposure. Work should not be started again until first aid or medical attention is given and the work area has been decontaminated. Work practices, possible sources of exposure, and protective precautions should also be carefully examined.
The victim of poisoning should be transported to the nearest hospital or poison center at the first sign(s) of poisoning. Atropine and pralidoxime (2-PAM, Protopam) chloride may be given by the physician for organophosphate poisoning; atropine is the only antidote needed to treat cholinesterase inhibition resulting from carbamate exposure (9).
WHY MONITOR CHOLINESTERASE?
Anyone exposed to cholinesterase-affected pesticides can develop lowered cholinesterase levels. The purpose of regular checking of cholinesterase levels is to alert the exposed person to any change in the level of this essential enzyme before it can cause serious illness. Ideally, a pre-exposure baseline cholinesterase value should be established for any individual before they come in regular contact with organophosphates and carbamates. Fortunately, the breakdown of cholinesterase can be reversed and cholinesterase levels will return to normal if pesticide exposure is stopped.
WHAT IS THE CHOLINESTERASE TEST?
Humans have three types of cholinesterase: red blood cell (RBC) cholinesterase, called "true cholinesterase;" plasma cholinesterase, called "pseudocholinesterase;" and brain cholinesterase. Red blood cell cholinesterase is the same enzyme that is found in the nervous system, while plasma cholinesterase is made in the liver.
When a cholinesterase blood test is taken, two types of cholinesterase can be detected. Physicians find plasma cholinesterase readings helpful for detecting the early, acute effects of organophosphate poisoning, while red blood cell readings are useful in evaluating long-term, or chronic, exposure (8).
The cholinesterase test is a blood test used to measure the effect of exposure to certain or cholinesterase-affected insecticides. Both plasma (or serum) and red blood cell (RBC) cholinesterase should be tested. These two tests have different meanings and the combined report is needed by the physician for a complete understanding of the individual's particular cholinesterase situation. Laboratory methods for cholinesterase testing differ greatly, and results obtained by one method cannot be easily compared with results obtained by another. Sometimes there is also considerable variation in test results between laboratories using the same testing method. Whenever possible, cholinesterase monitoring for an individual should be performed in the same laboratory, using a consistent testing method.
The approved methods are: Michel, microMichel, pH stat, Ellman, micro-Ellman, and certain variations of these. Micro methods have the advantage of not necessitating venipuncture, the drawing of blood from a vein by puncturing the vein with a needle attached to a collecting tube. The Ellman technique is considered better for detecting cholinesterase inhibition caused by carbamates. Many of the various "kit" methods in use are not satisfactory, particularly those which can be used only for plasma (or serum) determinations.
WHO NEEDS TO BE TESTED?
The following people should be concerned with having their cholinesterase levels checked on a regular basis: (a) anyone that mixes, loads, applies, or expects to handle or come in contact with highly or moderately toxic organophosphate and/or carbamate pesticides (this includes anyone servicing equipment used in the process); (b) anyone that is in contact with these chemicals for more than 30 hours at a time in one 30-day period.
WHEN SHOULD SOMEONE BE TESTED AND HOW OFTEN?
Every person has his/her own individual 'normal' range of baseline cholinesterase values; cholinesterase levels vary greatly within an individual, between individuals, between test laboratories, and between test methods. The extent of potential pesticide poisoning can be better understood if cholinesterase tests taken after exposure to the cholinesterase inhibiting pesticides can be compared to the individual's baseline, pre-exposure measurement. Workers that receive routine exposure to organophosphate or carbamate pesticides should be offered an initial pre-employment check of their blood cholinesterase levels to establish "baseline values" prior to any exposure to these agrochemicals. If no pre-exposure value was obtained, however, the earliest cholinesterase value recorded can be used for later comparison. Excessive exposure to OPs and CMs depresses the cholinesterase so markedly that a diagnosis can also be made without previous baseline testing. If an individual's cholinesterase levels drop 30 percent below the original baseline level, immediate retesting should be done.
While there is no set formula for deciding the frequency of cholinesterase testing, in general, the initial baseline test should be followed by subsequent cholinesterase testing on a regular (usually monthly) basis. This testing should be done weekly during the active season, however, when workers are employed full-time and regularly using OPs and CMs labelled "DANGER." The test should be repeated any time a worker becomes sick while working with OPs, or within 12 hours of his/her last exposure.
Several factors should be considered in deciding how often someone should have his/her cholinesterase levels tested:
a) The extent and seriousness of the possible exposure. This will vary with the toxicity of the pesticides being used and how often they are handled.
b) The type of work being done and the equipment being used may involve different risks of exposure.
c) Work practices have an important effect on worker safety. Some good practices include: the proper use of protective clothing and equipment; showering after each job; avoidance of drinking, eating and smoking in pesticide contaminated areas; prompt and effective decontamination in the event of spills.
d) The past safety record of a company and the work history and experience of an individual.
e) The physician's experience and familiarity with a specific work force may be an additional factor.
HOW DOES SOMEONE GET TESTED?
Since individual states vary in their cholinesterase monitoring programs, people that want to get their cholinesterase levels checked should consult with either their family or company physician for the specific requirements and procedures for cholinesterase testing in their particular state. After the blood is sampled and tested, test results are sent to the individual and his/her physician for interpretation.
Baseline blood samples should be taken at a time when the worker has not been exposed to organophosphate and carbamate pesticides for at least 30 days. Establishing a stable baseline requires a minimum of two pre-exposure tests taken at least 3 days but not more than 14 days apart. If these two tests differ by as much as 20 percent, a third sample should be taken and the two closest values averaged and considered the true baseline.
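The baseline procedure above is essentially a small decision rule. The sketch below (Python) encodes it for illustration; the function name is ours, not the brief's, and it assumes the two agreeing tests are averaged to give the working baseline, which the text implies but does not state outright.

```python
def establish_baseline(tests):
    """Apply the two-test baseline rule described in the brief.

    tests: cholinesterase values from pre-exposure tests, in the order
    taken (same laboratory, same method, 3-14 days apart).
    Returns the working baseline value.
    """
    if len(tests) < 2:
        raise ValueError("at least two pre-exposure tests are required")
    first, second = tests[0], tests[1]
    # If the first two tests agree within 20 percent, average them
    # (assumed interpretation of "establishing a stable baseline").
    if abs(first - second) / first <= 0.20:
        return (first + second) / 2
    # Otherwise a third sample is taken; the two closest values are
    # averaged and considered the true baseline.
    if len(tests) < 3:
        raise ValueError("first two tests differ by 20% or more; "
                         "a third sample is needed")
    third = tests[2]
    pairs = [(first, second), (first, third), (second, third)]
    a, b = min(pairs, key=lambda p: abs(p[0] - p[1]))
    return (a + b) / 2
```

For example, two tests of 100 and 110 units agree within 20 percent and average to 105; tests of 100 and 130 disagree, so a third sample (say 125) is drawn and the two closest values (130 and 125) are averaged.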
WHAT ARE THE LIMITS OF CHOLINESTERASE TESTING?
While cholinesterase testing is extremely valuable, it does have its limits, for the following reasons:
(a) not all hospitals are set up to complete the test within one facility, causing delays in diagnosis;
(b) the wide statistical error of the test makes it difficult to accurately detect very slight poisoning from cholinesterase inhibiting pesticides;
(c) the blood test is more effective in detecting cholinesterase depression from OP exposure than it is in detecting cholinesterase inhibition from carbamate exposure.
While carbamates (CMs) cause a depression in cholinesterase levels, the enzyme levels may return to baseline levels within hours of exposure, perhaps before test results are returned. When the effects of over-exposure to CMs are being checked, blood must be drawn during actual exposure or not more than 4 hours thereafter. If the drawing of blood and the actual completion of the laboratory test is delayed for more than 4 hours, reactivation of the enzyme will have taken place in the blood. This situation makes it hard for the physician to know the extent to which cholinesterase was inhibited, and to fully assess the seriousness of any safety problems which might exist in the work environment.
HOW ARE THE TESTS INTERPRETED?
The interpretation of cholinesterase test results should be done by a physician. A 15 to 25 percent depression in cholinesterase means that slight poisoning has taken place. A 25 to 35 percent drop signals moderate poisoning, and a 35 to 50 percent decline in the cholinesterase readings indicates severe poisoning (8).
A reported change in an individual's cholinesterase level may result from something other than a pesticide exposure, or it may be the result of laboratory error, but this should never be assumed to be the case. If the report shows a worker's cholinesterase level has dropped 20 percent below his/her baseline in either plasma or RBC, he/she should be retested immediately. If the second test repeats the same low values, faulty work practices should be carefully looked for and steps should be taken to correct them.
A 30 percent drop below the individual's baseline of RBC cholinesterase or plasma cholinesterase means that the individual should be removed from all exposure to organophosphates and carbamates, with the individual not being allowed to return until both levels return to the pre-exposure baseline range. Removal from exposure means avoidance of areas where the materials are handled or mixed and avoidance of any contact with open containers or with equipment that is used for mixing, dusting or spraying organophosphates or carbamates. A worker removed from exposure to cholinesterase inhibitors may be employed at other types of work.
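Taken together, the figures in the last three paragraphs form a simple decision rule: percent depression below baseline determines both the poisoning category and the action level. A minimal sketch (Python; function names are ours, and the handling of the overlapping category boundaries quoted above — 15-25, 25-35, 35-50 percent — is an assumption):

```python
def percent_depression(baseline, current):
    """Percent drop of the current cholinesterase value below baseline."""
    return (baseline - current) / baseline * 100

def interpret(baseline, current):
    """Map a test result to the category and action described in the brief."""
    drop = percent_depression(baseline, current)
    if drop >= 35:
        category = "severe poisoning"
    elif drop >= 25:
        category = "moderate poisoning"
    elif drop >= 15:
        category = "slight poisoning"
    else:
        category = "within normal variation"
    # Action levels: retest immediately at a 20 percent drop; remove the
    # worker from all OP/CM exposure at a 30 percent drop until both
    # levels return to the pre-exposure baseline range.
    if drop >= 30:
        action = "remove from exposure until baseline recovers"
    elif drop >= 20:
        action = "retest immediately"
    else:
        action = "continue routine monitoring"
    return category, action
```

So a worker whose reading falls from a baseline of 100 to 60 (a 40 percent drop) would be classed as severely poisoned and removed from exposure, while a fall to 78 (22 percent) calls for an immediate retest.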
WHERE CAN ONE GET TESTED AND WHAT IS THE COST OF THE TEST?
Because of the lack of approval of standardized test methods and laboratories in the U.S., a list of approved laboratories is not available. However, consult with your physician or local community hospital (testing laboratory) and the State Department of Health for guidance and recommendation of a good laboratory. Keep in mind that a single test method at one test laboratory should be used in your monitoring program.
The 1986 estimates on the cost of individual cholinesterase tests range from $7.00 to $60.00, with the average test costing approximately $35.00. The quality of tests will improve and prices will be lowered if and when testing methods are standardized and automated.
WHAT IS THE CURRENT STATUS OF CHOLINESTERASE SURVEILLANCE PROGRAMS?
Current EPA worker protection standards (put into place in 1974) are incomplete, and more comprehensive rules are being proposed which would be put into effect in the Spring of 1988. The standards address reentry intervals, notification, decontamination facilities, training of workers, and emergency medical care for workers. Additional provisions are also specified on protective equipment, change facilities, medical monitoring, annual physical examinations, and maintaining contact during pesticide handling. These regulations are likely to require commercial pesticide applicators to have cholinesterase blood tests to establish individual baseline readings. Applicators would then be required to have another test for every 3 or more consecutive days of exposure to organophosphates which fall in toxicity category I ("highly toxic") or category II ("moderately toxic") or when exposed six or more days in a 21-day period. Four states currently have some type of cholinesterase testing requirement in place: California, Ohio, Arizona, and Colorado.
INFORMATION AND RESOURCES
(1) U.S. Environmental Protection Agency. Telephone: 1-800-858-7378.
(2) Cooperative Extension Service in your area.
(3) Pesticide Unit, Epidemiological Studies Laboratory, California Department of Health, 2151 Berkeley Way, Berkeley, CA. 94704. Telephone: (415)-540-3063.
(4) Worker Health and Safety Branch, Department of Food and Agriculture, 1220 N Street, Sacramento, CA. 95814. Telephone: (916)-445-8474.
(5) Davies, J.E. and V.H. Freed (eds.). 1981. An agromedical approach to pesticide management: Some health and environmental considerations. Consortium for International Crop Protection. Berkeley, CA.
(6) Goh, Kean, W.G. Smith, R.F. Pendleton. 1985. Pesticide safety for IPM field scouts. Chemicals Pesticides Program. Cornell University, Ithaca, NY.
(7) Golz, H.H. and C.B. Shaffer. 1960. Toxicological information on cyanamid Insecticides. American Cyanamid Co., Princeton, NJ.
(8) Paul, Jane. 1987. Commercial pesticide applicators may get mandatory blood tests. Agrichemical Age. March.
(9) Smith, William G. 1983. Cholinesterase. Chemicals Pesticide Program. Cornell Cooperative Extension Information. New York State College of Agriculture and Life Sciences, Cornell University, Ithaca, NY.
(10) Van Driesche, R G. 1985. Cholinesterase testing information. Pesticide Facts. Cooperative Extension Service, University of Massachusetts, Amherst, MA. June 7, 1985.
DISCLAIMER: The information in this brief does not in any way replace or supersede the information on the pesticide product labeling or other regulatory requirements. Please refer to the pesticide product labeling.
DEMONIAC IDEALS IN POETRY.
Milton's demons, Johnson remarks, are too noble; but they are, nevertheless, the most transcendent embodiments of Satanic nature in poetry. They are ruined gods—gods in their everlasting natures—in their immortal, intellectual power—devils only in their hatred of the Supreme Goodness, which is a consequence of their fall, and in the spirit of eternal revenge by which they are actuated; all their other attributes—courage, undisturbed capacity of thought in their surroundments of horror, and, amid unimaginable agonies, fidelity one to the other, &c.—are deitific and sublime. The demoniac nature appears in the boast of possessing "th' unconquerable will, and study of revenge, immortal hate, and courage never to submit or yield"; they feel "strength undiminished, and eternal being to undergo eternal punishment":

"If then His providence
Out of our evil seek to bring forth good,
Our labours must be to pervert that end;
And out of good still to find means of evil,
Which ofttimes may succeed, so as, perhaps,
To grieve Him."

And when Beelzebub recommends:

"By sudden onset, either with hell fire
To waste his whole creation, or possess
All as our own and drive as we were driven
The puny habitants; or if not drive
Seduce them to our party, that their God
May prove their foe, and with repenting hand
Abolish His own works."

In places the noblest ideas flash through the speeches of the superior angels, founded on reason, courage, ambition, &c., as in Satan's address. Belial's oration is perhaps the finest of them, and, as a composition, the most finished. A sublime melancholy pervades it, as in the lines in which he regrets the assumed loss of existence, consequent upon exasperating the powers of Deity to effect their annihilation:

"Sad cure; for who would lose,
Though full of pain, this intellectual being,
Those thoughts that wander through eternity?" &c.,

lines which breathe a noble aspiration. Many others in the speeches of Milton's angels mark them as belonging to the highest order of imaginative conception, and distinguish them altogether from the fiends of Dante, who are existences of blind, devouring hatred, cruelty, and rage. The latter, however, though inspired by the barbarism of ignorant middle-age fancy, are truer to the ideal of Evil.

Dante's demons and Lucifer embody the middle-aged conception of the spirit and form of evil—intensified by a genius characterized by a powerful, but somewhat narrow imagination. Although he has faithfully turned to shape many of the gloomy legends of his age, it appears to us that had he had any opportunity of acquainting himself with the contemporary serf-life of Germany in the twelfth century, in which the witches' Sabbath was an institution, he might have drawn several pictures of demoniac nature more fearful and appalling than almost any he has introduced into the Inferno. Nevertheless the 21st and 22nd cantos display one of the most hideous and uncouth, but at the same time ideally true, reflections of fiend nature in literature. Crossing the gloomy bridge, which in the fifth region of hell leads to the lake of boiling pitch in which the sinners wallow—the bridge which, one of the demons, Malacauda (Evil-tail), says, "just five hours later yesterday than now, twelve hundred three score and six years ago, was broken across the abyss"—they see legions of black fiends armed with hooks, lurking beneath the arches, who rush upon them, roaring with impetuous rage, and one of them, Scarmiglione, attempts to strike him until pierced by their captain. Then comes the scene in which they exhibit their delight in torturing the damned, and the combat which takes place between two of them, Calcabrina and Alichino, who, on the escape of the sinner Ciampolo, rush together, exhausting their fury on themselves. Both, tumbling into the trench, combat with ungovernable fury, until in the rage of the combat their bodies are seen to glow with fire even in the flaming pool. This scene, in which the overmastering passions of hatred and destruction, natural to the demons, foiled of their exercise on other objects, turn against themselves, exhibits, despite the grotesqueness of the details, a penetrative conception of fiend nature.

Despite these and other scenes, however, Dante, as we have said, is most sublime in his contempt. As he proceeds in the invention of horrors he becomes almost always bizarre and uncouth—except in the scenes of the fiery tombs; the speaking flames scattered throughout the "Inferno"; the awful plain, when the fiery snow …; the giants buried to the waist in the sea …. His Lucifer is rather like the image of some monstrous nightmare than an imaginative conception, true to a high ideal. The best touch in the Lucifer picture is the description of the effect which the first sight of the dark, hideous form produces on the mind of the observer:

"I' non mori', e non rimasi vivo."

Dante, in the 3rd canto, has exhibited in a few lines an intensity of concep…
is falling ; in the description of the tion as regards demoniac character and its sufferings, which he did not of ice-one of whom, Nimrod, cries attain in any of those succeeding: lost tongue ; and in the glimpse we
out after Dante, in the accents of a The few lines descriptive of the have of the fiends referred to and torments of the envious reach the acme of the sublime of contempt :
their irresistible, unappeasable, mal
evolent fury and hatred raging to “Questi non hanno speranza di morte : exhaustion. E lor cieca vita è tanto altra sorte.
The genius of Tasso, whose element Che 'nvidiosi son d'ogni altra sorte
was chivalric grandeur and beauty, Fama di loro di mondo esser non lassa :
failed deplorably when it attempted Misericordia e Giustizia gli sdegna.
the sublime, as may be seen by conNon ragiosiam di lor, ma guarda e trasting his grotesque, and, indeed, passa."
ludicrous, description of hell and its
inmates with the inimitable paintings While in the line
and dramatizations of Milton. In his "A Dio spiacenti ed a' nemici sui" conception of Satan and his attending he has painted this last extremity of demons, Tasso is merely a feeble folguilt and despair.
lower of Dante. His fiends are an in-. Dante's “ Lucifer,” of which we congruous collection of bestial monget a glimpse in the 34th canto, is a
sters and hobgoblin forms, taken from monstrous and blockish representa
classical mythology-serpents, hartion of the terrible power-antagonist pies, centaurs, sphynxes, gorgons, pyof the Almighty himself
. He appears thons, chimeras, &c., who are enumelike a mountain rising from the dark, rated with but few touches of descripfrosty plain, whose icy winds are
tion ; the faces are human, the heads created by the movement of his wings wreathed with snakes, and they have (which are compared to those of wind hoofs and tails. The only poetic line mills !) in the poet's usual manner of in this portraiture is that in which he selecting a realistic representative says they have terror and death in
their image, however it may lower the idea eyes of the subject he is treating. Lucifer, “Quant' è neg'i occhi lor terrore e morte." with his three faces, one red, one In his sketch of Pluto, also, he exyellow, and another black, each of hibits an utter want of true imaginawhose mouths are tearing a sinner tion and taste. The description is (and the selection of the parties so made up of the most confused and positioned, Judas, Brutus, &c., is to contradictory images. The King of the last degree incongruous). Dante Terrors is a monstrous form, so huge, and Virgil mounting on his back, we are told, that beside hím Calpe secured by his wings, and his plunge and Atlas would appear as little hills. through the centre of the earth with So far, so well ; but when the poet them, at the other side of which they goes on to describe his horns, tail, emerge into day-all this and more beard, and mouth befouled with black
your Lord ?
blood, le presents us with merely a
O Faustus! leave these frivolous raw head and bloody bone monstrosity.
demands His eyes, indeed, flame with light like
That strike a terror to my fainting
soul.” that of an inauspicious comet: " Come infausta cometa, il guardo splende;"
Again, the reason he gives for inbut they are red, and distil poison, &c. ducing Faust to sell hiin his soul :In a word the Pluto and Pandemonium “Faust. --Stay, Mephistopheles, and tell of Tasso are an olla podrida-a classi
me what good will my soul do cal fable, and middle-age grotesque fancy; and the only good stanza in
Mep.--Enlarge his kingdom,
Faust. Is that the reason why he tempts the entire description is that in which
us thus? he paints the assembling of the infernal
Mep.Solamen miseris socios habuisse powers. In the diction in which he
doloris. paints the sound of the trumpet, the Faust.-Why have you any pain that earthquake, &c., abounding with as
torture others ? pirates, he has done wonders with the Mep.--As great as have the human souls soft Italian : " Chiami gli abitator dell'ombre eterne, To have companions in misery is
Il ranco suon della tartarea tromba, &c.” the motive by which the devils of
The Furies of Æschylus, like many Marlowe are actuated in tempting of his conceptions, have an air of pri- mankind, mordial and awful sublimity. The The above melancholy demoniac sketch of their appearance as they lie sentiment contrasts strongly with the asleep in the temple, around the mur- human in Virgil. derer, Orestes, is at once loathsomeand “Non ignara mali miseris succurrere terrible-aged women, garbed in sable disco.” stoles, "abhorred and execrable,” their Satan in Job appeared as the harsh breath rattling in their throats, tempter. The Mephistopheles of and rheumy gore distilling from their Goethe
is at once a tempter, denier, and closed eyelids, &c. These beings, mocker. He llas wholly lost the subdaughters of Night, embody the an- lime elements of the ruined archtique, savage idea of blood for blood angel, and his dry intellect acts alterjustice-a raging, Tartarian thirst for nately in laying a destructive snare, revenging crime. At first they appear and flashing a withering sneer. Whatas inexorable, demoniac powers, of ever heart he had is ashes-likewise ruthless retribution ; but although his imagination and passions—all save their natures and purposes display a his love of evil. It is Iago in mediæone-idead directness, resembling that val dress, with supernatural power; of the august Fates, they are not im- and, like his, the impulse of Mephiplacable, as appears from the last stopheles toward destruction is purscene of the drama.
poseless. Goethe's Mephistopheles is The Mephistopheles of Marlowe, in the most philosophical conception of his “Tragical History of Doctor Faus- demonaic nature in literature. tus,” though inconsistent as a dramatic The sketch of Satan in Byron's Cain, character, is a highly poetic concep; which is partly copied from the Miltion. His nature, though lost, is still tonic ideal, as regards his character as half human, and an awful melancholy the eternal adversary of God, is, howbroods round his figure. When Faus- ever, chiefly an embodiment of the tus asks him where are the spirits sceptical criticism of Voltaire and the that fell with Lucifer
French infidels. Milton, in his de“Mep.-In hell.
lineation of Satan, terminated at the Faust.-How comes it then that thou art point where, entering into the serpent, out of hell?
he accomplished the fall by flattering Mep.—Why this is hell, nor am I out of it; Eve to taste the apple--of whose core
Think’st thou that I that saw the mankind have since chewed the cud.
face of God, And tasted the eternal joys of bours by logic to render his mind hos
In tempting Cain, Byron’s Lucifer laheaven, Am not tormented with ten thou- tile to the nature of the Supreme sand hells
Deity by all the cut-and-dry arguIn being deprived of everlasting ments comprised in speculations upon bliss ?
the origin of evil ; the result of which
is, that he refuses to join Abel in the “But, bringing up the rear of this bright host, sacrifice he is about to offer, and, in An angel of a different aspect waved the quarrel which ensues, kills him.
His wings, like thunder-clouds above The scene in Hades displays little
sume coast, imagination ; and there is but little
Whose barren beach by frequent wrecks
is paved ; poetry in the scenes in which the
His brow was like the deep, when temruined archangel appears, and less in
pest-tossed; the language of the drama generally, Fierce and unfathomable thoughts enwhich is, for the most part, tame graved prose tortured into blank verse. The Eternal wrath on his immortal face; strained, sentimental misanthropy of And where he gazed, a gloom pervaded Byron's personality is as apparent in his Lucifer as in Harold, Lara, The last, which is the best idea in and the other creatiulis of his one this description, is, it is hardly necesidead genius. In, however, his bur- sary to say, taken from the preparlesque poem,
“The Vision of Judg. ing combat of Death and Satan in ment," there is one stanza which, Paradise Lost”:though in part plagiarized from Mil
“So frowned the mighty combatants, that ton, is finer than any passage in hell
Grew darker at their frown."
WYLDER'S HAND.

THE BRANDON CONSERVATORY.

CAPTAIN LAKE did look in at The Lodge in the morning, and remained an hour in conference with Mr. Jos Larkin. I suppose everything went off pleasantly. For although Stanley Lake looked very pale and vicious as he walked down to the iron gate of The Lodge, among the evergreens and bass-mats, the good Attorney's countenance shone with a serene and heavenly light, so pure and bright, indeed, that I almost wonder his dazzled servants, sitting along the wall while he read and expounded that morning, did not respectfully petition that a veil, after the manner of Moses, might be suspended over the seraphic effulgence.

Somehow his Times did not interest him at breakfast; these parliamentary wrangles, commercial speculations, and foreign disputes, are they not, after all, but melancholy and dreary records of the merest worldliness; and are there not moments when they become almost insipid? Jos Larkin tossed the paper upon the sofa. French politics, relations with Russia, commercial treaties, party combinations—how men can so wrap themselves up in these things!

And he smiled ineffable pity over the crumpled newspaper—on the poor souls in that sort of worldly limbo. In which frame of mind he took from his coat pocket a copy of Captain Lake's marriage settlement, and read over again a covenant on the Captain's part that, with respect to this particular estate of Five Oaks, he would do no act, and execute no agreement, deed, or other instrument whatsoever, in any wise affecting the same, without the consent in writing of the said Dorcas Brandon; and a second covenant binding him and the trustees of the settlement against executing any deed, &c., without a similar consent; and specially directing that, in the event of alienating the estate, the said Dorcas must be made an assenting party to the deed.

He folded the deed, and replaced it in his pocket with a peaceful smile and closed eyes, murmuring—

"I'm much mistaken if the gray mare's the better horse in that stud."

He laughed gently, thinking of the Captain's formidable and unscrupulous nature, exhibitions of which he could not fail to remember.

"No, no, Miss Dorkie won't give us much trouble."

He used to call her "Miss Dorkie," playfully, to his clerks. It gave him consideration, he fancied. And now, with this Five Oaks to begin with—£1,400 a-year—a great capability, immensely improvable, he would stake half he's worth on making it more than £2,000 within five years; and with other things at his back, an able man like him might before long look as high as she. And visions of the grand jury rose dim and splendid—an Heiress, and a seat for the county; perhaps he and Lake might go in together, though he'd rather be associated with the Hon. James Cluttworth, or young Lord Griddlestone. Lake, you see, wanted weight, and, notwithstanding his connexions, was, it could not be denied, a new man in the county.

So Wylder, Lake, and Jos Larkin had each projected for himself pretty much the same career; and probably each saw glimmering in the horizon the golden round of a coronet. And I suppose other modest men are not always proof against similar flatteries of imagination.

Jos Larkin had also the Vicar's business and reversion to attend to. The Rev. William Wylder had a letter containing three lines from him at eight o'clock, to which he sent an answer; whereupon the solicitor despatched a special messenger, one of his clerks, to Dollington, with a letter to the sheriff's deputy, from whom he received duly a reply, which necessitated a second letter with a formal undertaking, to which came another reply; whereupon he wrote to Burlington, Smith, and Co., acquainting them respectfully, in diplomatic fashion, with the attitude which affairs had assumed. With this went a private and confidential, non-official note to Smith, desiring him to answer stiffly and press for immediate settlement, and to charge costs fairly, as Mr. William Wylder would have ample funds to liquidate them. Smith knew what fairly meant, and his entries went down accordingly. By the same post went up to the same firm a proposition—an after-thought—sanctioned by a second miniature correspondence with his client, now sailing before the wind, to guarantee them against loss consequent upon staying the execution in the sheriff's hands for a fortnight; which, if they agreed to, they were further requested to send a draft of the proposed undertaking by return, at foot of which, in pencil, he wrote, "N.B.—Yes."

This arrangement necessitated his providing himself with a guarantee from the Vicar; and so the little account as between the Vicar and Jos Larkin, Solicitor, and the Vicar and Messrs. Burlington, Smith, and Co., Solicitors, grew up and expanded with a tropical luxuriance.

About the same time—while Mr. Jos Larkin, I mean, was thinking over Miss Dorkie's share in the deed, with a complacent sort of interest, anticipating a struggle, but sure of victory—that beautiful young lady was walking slowly from flower to flower, in the splendid conservatory which projects southward from the house, and rears itself in glacial arches high over the short, sweet, and flowery patterns of the outer garden of Brandon. The unspeakable sadness of wounded pride was on her beautiful features, and there was a fondness in the gesture with which she laid her fingers on these exotics and stooped over them, which gave to her solitude a sentiment of the pathetic.

From the high glass doorway, communicating with the drawing-rooms, at the far end, among towering ranks of rare and gorgeous flowers, over the encaustic tiles, and through this atmosphere of perfume, did Captain Stanley Lake, in his shooting coat, glide, smiling, toward his beautiful young wife.

She heard the door close, and looking half over her shoulder, in a low tone indicating surprise, she merely said,

"Oh!" receiving him with a proud, sad look.

"Yes, Dorkie, I'm here at last. I've been for some weeks so insufferably busy," and he laid his white hand lightly over his eyes, as if they and the brain within were alike weary. "How charming this place is—the temple of Flora, and you the divinity!"

And he kissed her cheek.

"I'm now emancipated for, I hope, a week or two. I've been so stupid and inattentive. I'm sure, Dorkie, you must think me a brute. I've been shut up so in the library, and keeping such tiresome company—you've no idea; but I think you'll say it was time well spent, at least I'm sure

VOL. LXIII.—NO. CCCLXXIII.
The overall aim of this document is to ensure that all HATW staff are able to recognise and act appropriately to all cases of self-harm in young people that we are working with.
- To recognise any form of self-harm or harmful behaviours.
- To help staff to understand and prepare for the fact that self-harming is almost always a symptom of some underlying emotional or psychological issue.
- To put in place a framework for intervention.
- For this document to be a practical way to help service users access support.
What is Self-Harm?
Self-harm is the act of deliberately causing harm to oneself, either by causing a physical injury, by putting oneself in dangerous situations, and/or by self-neglect. It can include, but is not limited to:
- burning or scalding their skin
- banging or bruising
- scrubbing, picking or scouring their skin
- deliberate bone-breaking
- punching themselves
- sticking things into their body
- swallowing inappropriate objects or liquids
- taking too many tablets
- biting themselves
- pulling their hair or eye lashes out
- alcohol and substance misuse
- controlled eating patterns (anorexia, bulimia, over-eating)
- indulging in any risky behaviours/ risky sexual behaviours
- an unhealthy lifestyle (for example, not taking good physical care of oneself)
- deliberately provoking aggressive reactions from others (intentionally getting into fights)
Things for us to Remember
- Anyone from any background or of any age can self-harm, including very young children.
- Self-harm affects people from all family backgrounds, religions, cultures and demographic groups.
- Self-harm affects all sorts of people across a range of gender identities.
- People who self-harm can often keep their problems to themselves which may mean opening up can be difficult.
- You cannot just tell someone who self-harms to stop – it’s not that easy.
Links to Emotional Distress
Those who self-harm are usually suffering emotional or psychological distress, and it is vital that all such distress is taken seriously, both to help alleviate it and to minimise the risk of increased distress and, potentially, suicide.
Emotional/psychological risk factors associated with self-harm can include but are not limited to:
- recent trauma e.g. death of a friend or relative, parental divorce
- negative thought patterns and low self-esteem
- bullying or being rejected by peers
- difficulty in making relationships/loneliness
- abuse- sexual, physical, emotional or through neglect
- sudden changes to social situations and/or academic performance
- relationship difficulties (with family or friends)
- learning difficulties
- school or work pressures to achieve (for example: from teachers or parents/guardians)
- substance abuse (including tobacco, alcohol or drugs)
- issues around sexuality or gender identity
- depression/anxiety – though it may not be formally diagnosed
- inability to express oneself
- lack of positive coping mechanisms
Other causal or risk factors:
- inappropriate advice or encouragement from internet websites or chat-rooms
- experimentation, ‘dares’ or bravado, copycat behaviour
- a history of abuse or mental health issues in a family
- parental separation or family alienation or distancing/ poor parental relationships and arguments
- neglect or domestic abuse and/or substance misuse in the home
- media influence
- issues surrounding religion or cultural identity
Self-harm may be present but not always visible. Therefore, staff should be vigilant and should take any warning signs seriously. These may include but are not limited to:
- visible signs of injury
- a change in dress habits that may be intended to disguise injuries
- changes in eating or sleeping habits
- increased isolation from friends or family; becoming socially withdrawn
- changes in activity or mood (e.g. becoming more introverted)
- struggling in school or lowering of academic achievements
- a withdrawal from out of school activities or after school clubs could be a sign of distress or isolation
- alternatively, an increased amount of activities, workload and pressures leaving little time for the young person to relax or have any personal time
- talking or joking about self-harm or suicide
- drug and alcohol abuse
- expressing, verbally or otherwise, feelings of failure, uselessness, loss of hope and low self-worth
All staff should take these signs seriously; however, we are dealing with young people who self-harm, on a daily basis. If it is already identified that a young person self-harms and they are discussing past or current behaviours in a group setting, one on one in mentoring or in a conversation, then a disclosure of self-harm shouldn’t necessarily cause serious concern for staff.
If they disclose that they are a serious risk to themselves or others then it should be escalated as a child protection and safeguarding issue.
If they disclose that the self-harming behaviour is a result of abuse or neglect (as defined in the Child Protection Policy) then it should be escalated as a child protection issue.
In the event of an escalation, the relevant organisations to be contacted would be the school contact, and also one of the following:
- the police
- social services
If you aren’t sure whether a disclosure should be escalated then talk to one of the Directors, as every young person’s case will be individual and will be handled as such.
The risk of self-harm can be significantly reduced by the creation of a supportive environment in which an individual's self-esteem is raised and healthy peer relationships are fostered. This can be achieved through respect, honesty and openness. Staff awareness of issues leading to self-harm is increased through accessing training, following the child safeguarding policies created by HATW, and sharing stories of how people have overcome their issues on the HATW website (www.hatw.co.uk).
HATW doesn’t aim to force service users to stop self-harming; rather, our aim is to introduce other, more positive alternatives, so that service users can find their own way and stop self-harming in their own time.
HATW will provide service users with a wide range of internal and external sources of help that can be contacted or used through a variety of methods. Staff will all have access to contact information for external agencies that can offer advice and/or assist with issues including self-harm.
Procedures for Dealing with Self-Harm
The first thing to remember is that if someone has chosen to tell you that they are self-harming then you will be someone that they trust and feel comfortable talking to. It is not easy to tell someone for the first time about something very private like self-harm. The person may have considered for a long time whether to talk about it or not and the fact that they have disclosed to you, even though it might be difficult, might be the first steps to finding help and changing their situation.
If a disclosure is made take it in your stride. Keep your body language open and remain emotionally neutral. They are not alone in self-harming and neither are you as someone trying to support them. Self-harm is a coping mechanism but it does not necessarily mean that the person is feeling suicidal or mean that they are at serious risk. However, staff should remain vigilant for signs that an issue is more serious.
If a young person discloses that they self-harm there are some initial key things to remember including:
- Self-harm is a coping mechanism.
- It is not about attention-seeking.
- There is a difference between self-harm and suicide.
- Understand that it is a long and hard journey to stop self-harming. Be aware that someone will only stop self-harming when they feel ready and able to do so.
Practical responses to self-harm disclosure
- It’s OK to ask for time to let the news sink in
- Take things at the young person’s pace
- Ask what you can do to help
- Don’t give a no-self-harm ultimatum
- Encourage them to seek professional help.
- Don’t worry about saying the wrong thing
- Show them genuine concern
- Be open and make time to listen to them
- Encourage them to make their own decisions and ask what they want you to do, and how they want help
- Be calm and patient with them
- Give them a message of hope- that things will get better, for example, the stories shared through HATW.
- Try not to show disappointment or distaste.
- Don’t shout or demand answers
- Don’t force anyone to talk about anything they aren’t ready to
- Avoid confiscating equipment as it might mean that the young person will find something else to use that they may not be used to, and may cause more damage.
- Do not force anyone to stop what they are doing. Instead talk about what triggers them, what things they think they might find helpful instead and if they want any further help or support.
When self-harming behaviours are being discussed remember to:
- Let the person who self-harms know that you want to listen to them and hear how they are feeling when they feel ready and able to talk
- Some people will just want to be heard and empathised with. If they’re comfortable talking, let them talk. If they’re not comfortable talking, try not to push them by asking questions that may overwhelm them.
- Be clear about why they are discussing this with you, and what they are looking to get out of the conversation.
- Be compassionate and respect what the person is telling you, even though you may not understand or find it difficult to accept what they are doing.
- Self-harm is not the only way for people to deal with emotional distress. Try to encourage the young person to seek alternative and more constructive coping mechanisms. However, do not expect them to be able to stop self-harming immediately.
- Be careful with your choice of language and keep your tone respectful. Discussing self-harm in graphic detail can be distressing and triggering. Do not use violent and/or graphic language or imagery (e.g. ‘slashing your wrists’ etc.)
Some Practical Things That You Might do While Supporting Someone Who Self-Harms
- Ask them how they would like you to help them. It’s okay to ask questions and not know all the answers. The person who is self-harming is probably the one who knows best how they want to be supported, so just ask.
- Don’t accuse the person of being attention-seeking. There is a common misconception that someone who self-harms does it because they want people to notice them, but the reality is that many people self-harm and do everything they can to ensure that no one else finds out. Even if self-harm is being used by someone to get attention, that person is still struggling; self-harm is not a positive way to get the attention they are looking for, and they need our support just as much as any other person.
- While you are there for them and you will do your best to support the person who is self-harming, remember that you are not able to do everything alone. You can encourage the person to think about seeking help – perhaps from an understanding GP, parent, youth worker or teacher – but don’t force them to if they don’t want to. Just let them know that you are there for them and that there is more support out there when they are ready.
- Don’t tell them to stop. Self-harm is a coping mechanism, it is something they have come to rely on to deal with difficult things at the moment. Other healthier coping mechanisms will need to be found before the person can stop self-harming and this process can take a long time, and can only happen when the person is ready.
- Don’t focus only on the self-harm or ask the person to show you their scars/injuries. You should instead try to look at the underlying issues or the reasons behind the self-harm. By helping them to talk about the emotions/feelings/thoughts that are leading to them hurting themselves, you may be able to help them manage these things in a healthier way.
- Some people may want further help with their self-harm and in this case you may be able to help by putting them in touch with organisations that may be able to help further (see ‘Helplines’).
- If they want to talk to their parents or Doctors about their self-harm it may be helpful for you to discuss with the young person what they expect, how they think it will go and how they hope it will go and put some action plans in place to achieve this, and talk about the best language to use or the best approach.
- If they don’t want to stop self-harming immediately it may be best to make sure they stay safe and reduce the damage to their body, e.g. using clean utensils, not cutting too deep, and keeping wounds clean and free from infection.
If physical harm occurs during a HATW session, the young person should be taken to the Health Centre or to A&E for medical assessment and care. If it is severe or life threatening, ring 999 immediately.
If a young person harms themselves in front of other service users, then all witnesses should be spoken to individually, and supported appropriately, to ensure that they’re not at an increased risk of self-harming as result of the incident.
Things to Suggest Instead of Self-Harm
There isn’t a “one size fits all” solution to self-harm. Try to help the young person come up with things that might work for them. If the young person cannot or will not find their own solution, some suggestions could be made, such as writing, screaming into a pillow, going for a really fast run, painting, or listening to music really loudly.
They could also consider:
- Talking to a family member, a friend or a helpline. If they are on their own perhaps phoning or emailing/texting/messaging a friend or helpline could be helpful.
- Distract themselves by going out, singing or listening to music, or by doing anything (harmless) that interests them.
- Relax and focus their mind on something pleasant or try some yoga poses or meditation techniques- creating their very own comforting place.
- Find another way to express their feelings, perhaps through creative means.
- Give themselves some ‘harmless pain’: eat a hot chilli, have a cold shower, hold ice cubes, or draw red lines on their skin.
- Focus their mind on positives, e.g. things they have to look forward to, things they have in their lives that they enjoy doing, things they are grateful for.
- Write a diary/letter to explain what is happening- even if no-one else ever sees it.
Confidentiality and Reporting
While working on behalf of Heads Above The Waves, every conversation should be prefaced with the confidentiality guidelines, so that the young person is aware of what will and will not be disclosed outside of that conversation.
Confidentiality is about keeping things that you are told between the people involved, unless someone is at risk or in danger (this could be the person who is self-harming or anyone else). Be honest and tell them if you need to tell someone else.
While you listen and talk to the person about how they are feeling, you should never promise to keep everything they are telling you a secret.
If you believe that the person self-harming is in need of medical attention or has taken an overdose then you will need to tell someone. Primarily the child protection officer but also potentially a teacher, youth worker or parent.
If the person mentions that they are suicidal, you must take it seriously. Establish whether they have a plan in place to complete suicide, and if they do tell a responsible adult (primarily the child protection officer but also a teacher, youth worker etc.), even if they tell you not to. If there is no plan in place it is still worth discussing with one of the aforementioned people. Perhaps suggest that you go to talk to someone together.
In relation to confidentiality, where there is no child protection issue raised, although it is better if parents or a carer are notified and involved to support the young person, each individual case and approach needs to be handled carefully and sympathetically to support the well being of the young person. The decision about involving parents/guardians should be taken into consultation with the young person’s school. If a decision to contact parents/guardians is reached, then the school will make the contact, wherever possible.
In the case of severe self-harm requiring medical intervention/ A&E, parents will be informed immediately, unless it is known that self-harm is symptomatic of abuse in the home, at which point, you may take the decision to make a referral directly to the appropriate authority without informing the parents.
If a member of staff becomes aware of or is alerted to a new or escalated self-harming issue, or a young person discloses new or escalated self-harm, they should make a written report. This report should include the date of the event, what was disclosed, how concerned staff member is about it. A report should be made even if the incident eventually turns out to be an isolated one that was not indicative of a serious underlying emotional or abusive cause.
If a young person suggests there is evidence of self-harm beneath their clothing, a member of staff should accept such statements and must never ask the pupil to remove clothing to reveal wounds/bruises etc. A school nurse or a Doctor may investigate such evidence in a sensitive and appropriate manner in the Health Centre or A&E.
A Health Centre may be a school’s medical office, or a local/nearest Doctors’ surgery.
Regarding and Reporting Incidents of Self-Harm Disclosed to HATW Staff
A Self-Harm Report Form should be completed and will be kept as a record of all incidents in a private locked drawer that only the Directors will have access to.
The Directors may review this record to identify any trends or other areas of concern. They may also show the form to third parties such as the NSPCC, Police, School or Social Services but only in line with the Data Protection Policy.
Self-Care for Staff
Finally make sure that you take care of yourself. It is hard dealing with the fact that someone you know or are in regular contact with is self-harming. You shouldn’t be afraid of seeking some support for yourself. Remember, you will be able to better support the person who is self-harming if you are taking care of yourself too.
Should you require additional support or someone else to talk to, either contact an outside listening service, or get in touch with the voluntary counsellor for Heads Above The Waves, and request time to talk through your concerns. Conversations with the voluntary counsellor are bound by the same confidentiality and privacy guidelines as all Heads Above The Waves work.
Useful Resources and Helplines
National Self-Harm Network – 0800 622 6000 – nshn.co.uk – email@example.com
The Mix – 0808 808 4994 – themix.org.uk
ChildLine – 0800 1111 – childline.org.uk
Samaritans – 116 123 – samaritans.org – firstname.lastname@example.org
NightLine – nightline.ac.uk/nightlines to find your local branch
SupportLine – 01708 765 200 – supportline.org.uk
CALL Helpline – 0800 132 737 – callhelpline.org.uk – Text “Help” to 81066
MIND Info Line – 0300 123 3393 – mind.org.uk
SANE – 0300 3047000 – sane.org.uk
Monitoring and Review
This policy will be reviewed annually (or earlier if necessary) by the Directors. | <urn:uuid:55646236-dc96-42dd-a1c3-fb9892b3aae2> | CC-MAIN-2021-21 | https://hatw.co.uk/policy/self-harm/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991562.85/warc/CC-MAIN-20210519012635-20210519042635-00414.warc.gz | en | 0.958075 | 4,249 | 3.609375 | 4 |
The medieval household was, like modern households, the center of family life for all classes of European society. Yet in contrast to the household of today, it consisted of many more individuals than the nuclear family. From the household of the king to the humblest peasant dwelling, more or less distant relatives and varying numbers of servants and dependents would cohabit with the master of the house and his immediate family. The structure of the medieval household was largely dissolved by the advent of privacy in early modern Europe.
Variations were immense over an entire continent and a time span of about 1000 years. Yet it is still possible to speak of a classical model of the medieval household, particularly as it evolved in Carolingian France and from there spread over great parts of Europe.
Neither Greek nor Latin had a word corresponding to modern-day "family". The Latin familia must be translated to "household" rather than "family". The aristocratic household of ancient Rome was similar to that of medieval Europe, in that it consisted – in addition to the paterfamilias, his wife and children – of a number of clients (clientes), or dependents of the lord who would attend upon him, counsel him and receive rewards. Where it differed from its medieval equivalent was in the use of slaves rather than paid servants for the performance of menial tasks. Another difference was that, due to the relative security and peacefulness within the borders of the Roman Empire, there was little need for fortification. The aristocratic household of medieval Europe, on the other hand, was as much a military as a socio-economic unit, and from the 9th century onwards the ideal residence was the castle.
As a result of the military nature of the medieval noble household, its composition was predominately male. Towards the end of the medieval period the ratio levelled out somewhat, but at an earlier date the feminine element of the household consisted only of the lady and her daughters, their attendants, and perhaps a few domestics to perform particular tasks such as washing. Many of the male servants were purely military personnel; there would be a gatekeeper, as well as various numbers of knights and esquires to garrison the castle as a military unit. Yet many of these would also serve other functions, and there would be servants entirely devoted to domestic tasks. At the lower level, these were simply local men recruited from the localities. The higher level positions – in particular those attending on the lord – were often filled by men of rank: sons of the lord's relatives, or his retainers.
The presence of servants of noble birth imposed a social hierarchy on the household that went parallel to the hierarchy dictated by function. This second hierarchy had at its top the steward (alternatively seneschal or majordomo), who had the overriding responsibility for the domestic affairs of the household. Taking care of the personal wellbeing of the lord and his family were the Chamberlain, who was responsible for the chamber or private living-quarters, and the Master of the Wardrobe, who had the main responsibility for clothing and other domestic items.
Of roughly equal authority as the steward was the marshal. This officer had the militarily vital responsibility for the stables and horses of the household (the "marshalsea"), and was also in charge of discipline. The marshal, and other higher-ranking servants, would have assistants helping them perform their tasks. These – called valets de chambre, grooms or pages, ranking from top to bottom in that order – were most often young boys, although in the larger royal courts the valet de chambres included both young noble courtiers, and often artists, musicians and other specialists who might be of international repute. Assigning these the office of valet was a way of regularising their position within the household.
One of the most important functions of the medieval household was the procuration, storage and preparation of food. This consisted both in feeding the occupants of the residence on a daily basis, and in preparing larger feasts for guests, to maintain the status of the lord. The kitchen was divided into a pantry (for bread, cheese and napery) and a buttery (for wine, ale and beer). These offices were headed by a pantler and a butler respectively. Depending on the size and wealth of the household, these offices would then be subdivided further. The following is a list of some of the offices one could expect to find in a large medieval aristocratic or royal household:
|Household offices: |
|Administration||Food and Drink
|Food and Drink
In addition to these offices there was a need for servants to take care of the hunting animals. The master huntsman, or the veneur, held a central position in greater noble households. Likewise, the master falconer was a high-ranking officer, often of noble birth himself. There were spiritual needs to be cared for, and a chapel was a natural part of every large household. These household chapels would be staffed by varying numbers of clerics. The chaplains, confessors and almoners could serve in administrative capacities as well as the religious ones.
The households of medieval kings were in many ways simply aristocratic households on a larger scale: as the Burgundian court chronicler Georges Chastellain observed of the splendidly ordered court of the dukes of Burgundy, "after the deeds and exploits of war, which are claims to glory, the household is the first thing that strikes the eye, and which it is, therefore, most necessary to conduct and arrange well." In some ways though, they were essentially different. One major difference was the way in which royal household officials were largely responsible for the governance of the realm, as well as the administration of the household.
The 11th century Capetian kings of France, for instance, "ruled through royal officers who were in many respects indistinguishable from their household officers." These officers – primarily the seneschal, constable, butler, chamberlain and chancellor – would naturally gain extensive powers, and could exploit this power for social advancement. One example of this is the Carolingians of France, who rose from the position of royal stewards – the Mayors of the Palace – to become kings in their own right. It was the father of Charlemagne, Pepin the Short, who gained control of government from the enfeebled Merovingian king Childeric III. [a] Another example can be found in the royal House of Stuart in Scotland, whose family name bore witness to their background of service.
Eventually the central positions of the royal household became little else than honorary titles bestowed upon the greatest families, and not necessarily even dependent on attendance at court. In Flanders, by the thirteenth century, the offices of constable, butler, steward and chamberlain had become the hereditary right of certain high noble families, and held no political significance.
Finally, the royal household differed from most noble households in the size of their military element. If a king was able to muster a substantial force of household knights, this would reduce his dependence on the military service of his subjects. This was the case with Richard II of England, whose one-sided dependence on his household knights – mostly recruited from the county of Cheshire – made him unpopular with his nobility and eventually contributed to his downfall.
The medieval aristocratic household was not fixed to one location, but could be more or less permanently on the move. Greater nobles would have estates scattered over large geographical areas, and to maintain proper control of all their possessions it was important to physically inspect the localities on a regular basis. As the master of the horses, travel was the responsibility of the marshal. Everything in the noble household was designed for travel, so that the lord could enjoy the same luxury wherever he went.
Particularly for kings, itineration was a vital part of governance, and in many cases kings would rely on the hospitality of their subjects for maintenance while on the road. This could be a costly affair for the localities visited; there was not only the large royal household to cater for, but also the entire royal administration. It was only towards the end of the medieval period, when means of communication improved, that households, both noble and royal, became more permanently attached to one residence.
The aristocratic society centered on the castle originated, as much of medieval culture in general, in Carolingian France, and from there spread over most of Western Europe. In other parts of Europe, the situation was different. On the northern and western fringes of the continent, society was kin-based rather than feudal, and households were organised correspondingly.
In Ireland, the basis for social organisation was the " sept", a clan that could comprise as many as 250 households, or 1250 individuals, all somehow related. In Viking-age Scandinavia, housing arrangements were more humble than those of contemporary France or England, but also here the greater lords would own grand halls wherein they might entertain large numbers of guests.
In the Byzantine Empire, slaves were employed until the end of the Empire, as were eunuchs. Little is known of the living arrangements of the Byzantines, as very few buildings remain. From historical and architectural evidence it is known that, even though castles were rare, the wealthy lived in palaces of varying magnitude, with chapels and gardens, and rich decorations of mosaics and frescoes.
The households of medieval peasant families were naturally smaller than those of the aristocracy, and as such resembled modern households more. The patterns of marriage fluctuated greatly over the course of the Middle Ages. Even though most of the available evidence concerns the higher classes, and the source material for southern Europe is richer than for the rest, it is still possible to make some rough generalisations. It seems clear that the average age of marriage during the Early Middle Ages was comparatively high, in the early twenties, and quite equal for men and women. The reason for this can be found in traditions brought forward from the Germanic tribes, but equally in the fact that habitation was confined to small areas, a factor that enforced restrictions on population growth.
As more land was won for cultivation, this trend changed. During the High and Late Middle Ages, women were increasingly married away in their teens, leading to higher birth rates. While women would be married once they reached reproductive age, men had to possess independent means of sustenance – to be able to provide for a family – before entering into marriage. For this reason, the average age of marriage for men remained high, in the mid- to late twenties.
Even though peasant households were significantly smaller than aristocratic ones, the wealthiest of these would also employ servants. Service was a natural part of the cycle of life, and it was common for young people to spend some years away from home in the service of another household. This way they would learn the skills needed later in life, and at the same time earn a wage. This was particularly useful for girls, who could put the earnings towards their dowry.
The houses of medieval peasants were of poor quality compared to modern houses. The floor was normally of earth, and there was very little ventilation or sources of light in the form of windows. In addition to the human inhabitants, a number of livestock animals would also reside in the house. Towards the end of the medieval period, however, conditions generally improved. Peasant houses became larger in size, and it became more common to have two rooms, and even a second floor.
The medieval world was a much less urban society than either the Roman Empire or the modern world. The fall of the Roman Empire had caused a catastrophic de-population of the towns and cities that had existed within the Empire. Between the 10th and 12th centuries, however, a revival of the European city occurred, with an increase in the urbanisation of society.
The practice of sending children away to act as servants was even more common in towns than in the countryside. The inhabitants of towns largely made their livelihood as merchants or artisans, and this activity was strictly controlled by guilds. The members of these guilds would in turn employ young people – primarily boys – as apprentices, to learn the craft and later take a position as guild members themselves. [b] These apprentices made up part of the household – or "family" – as much as the children of the master.
Towards the end of the Middle Ages, the functions and composition of households started to change. This was due primarily to two factors. First of all, the introduction of gunpowder to the field of warfare rendered the castle a less effective defence, and did away with the military function of the household. The result was a household more focused on comfort and luxury, and with a significantly larger proportion of women.
The second factor that brought about change was the early modern ascendancy of the individual, and focus on privacy. [c] Already in the later Middle Ages castles had begun to incorporate an increasing number of private chambers, for the use both of the lord and of his servants. Once the castle was discarded to the benefit of palaces or stately homes, this tendency was reinforced. This did not mean an end to the employment of domestic servants, or even in all cases a reduction in household staff. What it did mean, however, was a realignment whereby the family – in a genealogical sense – became the cornerstone of the household.
- Royal Household
- Medieval cuisine
- Medieval demography
- Medieval fortification
- Medieval gardening
- Medieval hunting
- Herlihy, p. 2.
- Veyne, Paul, Phillippe Ariès, Georges Duby, and Arthur Goldhammer (1992). A History of Private Life, Volume I, From Pagan Rome to Byzantium. Belknap Press. pp. 38–9. ISBN 0-674-39974-9.
- Morris, p. 14.
- Reuter, Timothy (ed.) (2000). The New Cambridge Medieval History, Volume III c.900-c.1024. Cambridge: Cambridge University Press. p. 47. ISBN 0-521-36447-7.CS1 maint: extra text: authors list ( link)
- Woolgar, pp. 34-6.
- Gies, Joseph & Frances (1979). Life in a Medieval Castle (3rd ed.). New York, Toronto: Harper Perennial. p. 95. ISBN 0-06-090674-X.
- Woolgar, pp. 103-4.
- Woolgar, pp. 36-7.
- Woolgar, p. 18-9.
- Woolgar, p. 17.
- Woolgar, pp. 42-3.
- Woolgar, pp. 31-2.
- Duncan, Archibald A. M. (1993). "The 'Laws of Malcolm MacKenneth'" in Medieval Scotland: Crown, Lordship and Community: Essays Presented to G.W.S. Barrow, Alexander Grant and Keith J. Stringer (eds.), Edinburgh, Edinburgh University Press, p. 249. ISBN 0-7486-0418-9.
- Woolgar, pp. 17-8, 111, 144, 168 et passim.
- Cummins, pp. 175-7.
- Cummins, pp. 217-8.
- Allmand, Christopher (ed.) (1998). The New Cambridge Medieval History, Volume VII c.1415-c.1500. Cambridge: Cambridge University Press. p. 324. ISBN 0-521-38296-3.CS1 maint: extra text: authors list ( link)
- Woolgar, 176-7.
- Quoted in Johan Huizinga, The Waning of the Middle Ages, 1924:31.
- Reuter, p. 122.
- Luscombe, David and Jonathan Riley-Smith (eds.) (2004). The New Cambridge Medieval History, Volume IV c.1024-c.1198 (part 2). Cambridge: Cambridge University Press. pp. 127–8. ISBN 0-521-41411-3.CS1 maint: extra text: authors list ( link)
- Cantor, p. 167.
- Allmand, p. 517.
- Abulafia, David (ed.) (1999). The New Cambridge Medieval History, Volume V c.1198-c.1300. Cambridge: Cambridge University Press. p. 408. ISBN 0-521-36289-X.CS1 maint: extra text: authors list ( link)
- Saul, Nigel (1999). Richard II. New Haven and London: Yale University Press. pp. 444–5. ISBN 0-300-07875-7.
- According to May McKisack, The Fourteenth Century (Oxford History of England)n1959:1, note references.
- Woolgar, p. 181.
- Daniell, Christopher (2003). From Norman Conquest to Magna Carta: England, 1066-1215. London: Routledge. ISBN 0-415-22215-X.
- Woolgar, p. 197.
- Davies, R.R. (2000). The First English Empire: Power and Identities in the British Isles 1093-1343. Oxford: Oxford University Press. pp. 66–7. ISBN 0-19-820849-9.
- Herlihy, pp. 32-4.
- Roesdahl, Else (1998). The Vikings (2nd ed.). London: Penguin Books. pp. 41–5. ISBN 0-14-025282-7.
- Hussey, p. 132.
- Hussey, p. 137.
- Herlihy, p. 79.
- Herlihy, pp. 77-8.
- Hanawalt, Barbara. 1988. The Ties That Bound: Peasant Families in Medieval England. Oxford University Press. pp. 95-100
- Young, Bruce Wilson. 2009. Family life in the Age of Shakespeare. Greenwood Press. pp. 21
- Herlihy, pp. 103-7.
- Horrox & Ormrod, pp. 422-3.
- Herlihy, pp. 107-11.
- Hollister, p. 169.
- Horrox & Ormrod, pp. 420-1.
- Herlihy, p. 153.
- Murray, Jacqueline (ed.) (2001). Love, Marriage, and Family in the Middle Ages: A Reader. Peterborough, Ontario: Broadview. p. 387. ISBN 1-55111-104-7.CS1 maint: extra text: authors list ( link)
- Cipolla, Carlo M. (1993). Before the Industrial Revolution: European Society and Economy, 1000-1700 (3rd ed.). London, New York: Routledge. p. 91. ISBN 0-415-09005-9.
- Hollister, pp. 179-80.
- Contamine, Philippe (1984). War in the Middle Ages. Oxford: Blackwell. pp. 200–7. ISBN 0-631-13142-6.
- Woolgar, pp. 197-204.
- Woolgar, p. 61.
- Ariès, Phillippe, Georges Duby and Arthur Goldhammer (2003). A History of Private Life, Volume II, Revelations of the Medieval World. Belknap Press. pp. 513–4. ISBN 0-674-40001-1.
- Lewis, Thorpe (ed.) (1969). Two Lives of Charlemagne (new ed.). London: Penguin Classics. pp. 56–7. ISBN 0-14-044213-8.CS1 maint: extra text: authors list ( link)
- Abulafia, p. 31.
- Burckhardt, Jacob (1990). The civilization of the Renaissance in Italy. London: Penguin books. p. 98. ISBN 0-14-044534-X.
- Allmond, pp. 244-5.
- The Medieval Peasant Household by J. G. Hurst
- Cantor, Norman F. (1994). The Civilization of the Middle Ages. New York: Harper Perennial. ISBN 0-06-017033-6.
- Cummins, John (2001). The Hound and the Hawk: The Art of Medieval Hunting. London: Phoenix. ISBN 1-84212-097-2.
- Herlihy, David (1985). Medieval Households. Cambridge, Massachusetts; London: Harvard University Press. ISBN 0-674-56375-1.
- Hollister, C. Warren (2001). Medieval Europe: A Short History, 9th edition, Boston, London: McGraw-Hill. ISBN 0-07-112109-9.
- Horrox, Rosemary and W. Mark Ormrod (2006). A Social History of England, 1200-1500. Cambridge: Cambridge University Press. ISBN 978-0-521-78954-7.
- Hussey, Joan Mervyn (1982). The Byzantine World. Greenwood Press Reprint. ISBN 0-313-23148-6.
- Maslakovic, Anna et al. (eds.) (2003). The Medieval Household in Christian Europe, c.850-c.1550: Managing Power, Wealth, and the Body. Turnhout: Brepols. ISBN 2-503-52208-4.
- Morris, Marc (2003). Castle. London: Channel 4 Books. ISBN 0-7522-1536-1.
- The New Cambridge Medieval History (1995–2005) 7 vols. Cambridge: Cambridge University Press. ISBN 0-521-85360-5.
- Woolgar, C. M. (1999). The Great Household in Late Medieval England. New Haven and London: Yale University Press. ISBN 0-300-07687-8. | <urn:uuid:6102c3de-006e-413c-aff6-896debe259b1> | CC-MAIN-2021-21 | https://webot.org/basic/?search=Medieval_household | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00416.warc.gz | en | 0.937403 | 4,647 | 3.75 | 4 |
Study objective: To test whether mortality selection was a dominant factor in determining trends in old age mortality, by empirically studying the existence of a negative correlation between trends in late middle age mortality and trends in old age mortality among the same cohorts.
Design and methods: A cohort approach was applied to period data on total and cause specific mortality for Denmark, England and Wales, Finland, France, the Netherlands, Norway, and Sweden, in 1950–1999. The study described and correlated mortality trends for five year centralised cohorts from 1895 to 1910 at ages 55–69, with the trends for the same cohorts at ages 80–89. The research distinguished between circulatory diseases, cancers, and diseases specifically related to old age.
Main results: All cause mortality changes at ages 80–89 were strongly positively correlated with all cause mortality changes at ages 55–69, especially among men, and in all countries. Virtually the same correlations were seen between all cause mortality changes at ages 80–89 and changes in circulatory disease mortality at ages 55–69. Trends in mortality at ages 80–89 from infectious diseases, pneumonia, diabetes mellitus, symptoms, or external causes showed no clear negative correlations with all cause mortality trends at ages 55–69.
Conclusions: The consistently positive correlations seen in this study suggest that trends in old age mortality in north western Europe in the late 20th century were determined predominantly by the prolonged effects of exposures carried throughout life, and not by mortality selection.
- elderly populations
- mortality selection
- causes of death
The aging of populations has important consequences for the future demand for health care services and old age benefit systems. The degree of aging of populations is strongly influenced by future patterns of old age mortality.1,2 Therefore, projections of future mortality trends are highly important for public health. To make informed projections of future mortality trends it is important to accurately describe past trends in old age mortality and to analyse its determinants.
Past trends in old age mortality have been studied in many countries. Among low mortality countries, a general decline in mortality among those aged 80 and over since the 1950s has been found.3–6 However, when this period of mortality decline is examined more closely, important cross national differences in the pace of decline in old age mortality appear. From the 1980s onwards, mortality decline stagnated in Denmark, Netherlands, and among Norwegian men, while in other countries the mortality decline continued.7–9
The mechanisms behind these trends are still largely unknown. In addition to the effects of lifestyle or events occurring in late life itself, studies have focused on the effects of events or lifestyle earlier in life, in accordance with the life course perspective. One mechanism mentioned in the literature as a possible determinant of old age mortality trends, and one that is related to the life course perspective, is mortality selection.1,7,10,11
Mortality selection indicates that when mortality at younger ages is high, it tends to affect the frail people first, leaving a more selected and more robust population that survives up to high ages.12 With decreasing mortality at younger ages, the increasing proportion of the elderly population might be expected to be less healthy when compared with their more selected predecessors,1 and subsequently could experience comparatively higher morbidity and mortality at older ages.
Mortality selection effects have been posited in studies that use mathematical models to study cohort mortality,12–18 for example to explain the deceleration of the age pattern of mortality at older ages (for example, Horiuchi and Wilmoth16), or the black-white mortality crossover (for example, Manton and Stallard12). In addition, there is a long history of empirical cohort analyses of mortality. These studies focused mainly on the association between debilitating events or mortality in early life and mortality in adult ages.19–25 They showed predominantly positive associations, indicating no mortality selection. However, one study reported negative associations,26 and another reported no associations.27 Three other empirical studies, focusing more explicitly on old age mortality, did not find any empirical evidence for mortality selection.28–30
Thus, the evidence on mortality selection effects is rather mixed. Moreover, as most of these studies examined the mortality experience of single cohorts, little is known on the role of mortality selection in long term mortality trends. Furthermore, because previous studies often focused on the effects of mortality at very early ages on adult mortality, the effects of adult mortality on old age mortality are largely unknown.
The objective of this paper is to empirically study whether, and in what way, trends in late middle age mortality are correlated with old age mortality trends among the same cohorts. We hypothesise that mortality selection is a driving factor in old age mortality trends in seven north western European countries from 1950 to 1999. Consequently, we expect inverse correlations.
To test this hypothesis, we use data on all cause mortality, as well as on causes of death that are especially susceptible to mortality selection—that is, circulatory diseases at late middle age and diseases specifically related to old age. Mortality declines in circulatory diseases (predominantly ischaemic heart disease) have been shown to lead to increased prevalence of chronic heart diseases at older ages,31 with subsequently higher mortality risks of related diseases.32 With respect to diseases specifically related to old age, recent mortality increases were observed.9 These increases could possibly result from an increasing proportion of frailer people at higher ages, due to decreased selection, because of mortality declines at younger ages.
In this study, we assess whether trends in old age mortality (ages 80–89) among subsequent birth cohorts are inversely correlated with mortality trends at late middle age (ages 55–69) for the same cohorts, and whether different correlations are seen for (a) trends in circulatory diseases mortality at late middle age, and (b) mortality trends from diseases specifically related to old age.
For this analysis, data on all cause mortality, cause specific mortality, and population numbers, by five year age groups and sex, were obtained from national statistical offices and related institutes, for Denmark, England and Wales, Finland, France, the Netherlands, Norway, and Sweden, for the years 1950 to 1999.
In addition to all cause mortality, we included three main groups of causes of death in our analysis: all circulatory diseases, all cancers, and the remaining causes of death. Within all circulatory diseases we distinguished between ischaemic heart diseases and cerebrovascular diseases. Within the remaining causes of death, we focused on diseases specifically related to old age—that is, infectious diseases, pneumonia, diabetes mellitus, dementia, and symptoms. See Janssen et al for the three digit codes used for these causes of death in the different revisions of the International Classification of Diseases (ICD) from the World Health Organisation.33 For ischaemic heart diseases we included the numbers of deaths for code 422.1 under ICD-6/7.
The use of three digit codes can still generate mortality discontinuities because of ICD revisions or incidental coding changes, such as the ones in England and Wales between 1984 and 1992, and in Sweden after 1980.33 We identified and adjusted for these coding related mortality discontinuities in our analysis. Adjustment involved the recalculation of the number of cause specific deaths by means of sex and cause specific transition coefficients. These transition coefficients are the parameter estimates of variables associated with a coding change (for example, ICD-8 to ICD-9), and were obtained through sex specific regression models. In these regression models, cause specific mortality was the dependent variable and age, year of death, and variables associated with a coding change were independent variables. To obtain the sex and cause specific transition coefficients to recalculate cause specific deaths for those aged 55–69 and those aged 80–89, the regression model was applied to cause specific mortality among those aged 60 and over, and those aged 80 and over, respectively. For those aged 55–69, recalculation was applied to ischaemic heart diseases in the Netherlands and Sweden, and cerebrovascular diseases in Finland. For those aged 80–89, deaths from all selected causes, except all circulatory diseases and infectious diseases, were adjusted for coding changes.
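A hedged sketch of what such a coding-change adjustment might look like: a log-linear trend in cause specific mortality with a single jump at the revision year, whose size is the transition coefficient. The years, rates, and the 0.15 jump below are invented for illustration; the study's actual models were fitted separately by sex and cause and also included age.

```python
# Hedged sketch of an ICD coding-change adjustment: estimate a
# "transition coefficient" as the jump in log mortality at the change
# year, after removing the linear trend fitted to the pre-change years.
# Years, rates, and the 0.15 jump are illustrative assumptions only.

log_rate = {year: -4.0 + 0.02 * (year - 1965) + (0.15 if year >= 1979 else 0.0)
            for year in range(1965, 1985)}

pre = [y for y in log_rate if y < 1979]     # years coded under the old revision
post = [y for y in log_rate if y >= 1979]   # years coded under the new revision

# Least-squares slope and intercept from the pre-change years only:
n = len(pre)
mean_y = sum(pre) / n
mean_r = sum(log_rate[y] for y in pre) / n
slope = (sum((y - mean_y) * (log_rate[y] - mean_r) for y in pre)
         / sum((y - mean_y) ** 2 for y in pre))
intercept = mean_r - slope * mean_y

# Transition coefficient: mean residual of the post-change years from
# the pre-change trend line (here it recovers the 0.15 coding jump).
transition = sum(log_rate[y] - (intercept + slope * y) for y in post) / len(post)

# Pre-change deaths could then be rescaled by exp(transition) to put
# them on the new coding basis.
```

The point of the sketch is only the role of the coefficient: once estimated, it converts deaths recorded under one coding regime onto the basis of the other, removing the artificial discontinuity from the trend.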
We aggregated the mortality and population data into five year periods, and calculated five year age specific mortality rates by dividing the mortality data by the mid-year population estimates. To these period data, we applied a “mixed cohort approach”, by using as the unit of observation centralised birth cohorts. In this approach, the data are combined by five year period and five year age group, and centred around the cohort calculated by subtracting the age group from the period (see fig 1). We were able to study four different centralised birth cohorts (those born around 1895, 1900, 1905, and 1910) to fulfil our aim of correlating mortality among those aged 55–69, with mortality among those aged 80–89 using the available data (1950–1999).
Cohort mortality rates for ages 55–69 and ages 80–89 were obtained by taking for each separate centralised cohort the unweighted average of the age specific rates over the three or two five year age groups, respectively. Relative changes in these cohort mortality rates were calculated by relating the mortality rate of a given centralised cohort to the mortality rate of the preceding centralised cohort.
As a first exploration of the mortality selection mechanism, we correlated all cause mortality levels between those aged 55–69 and those aged 80–89. Our main analysis, however, consisted of the correlation of relative cohort mortality changes between the late middle aged and the elderly population. For all cause mortality, we correlated absolute cohort mortality changes as well. Whereas absolute mortality changes more accurately express the importance of the mortality changes at younger ages, and the effect that they can have on trends in mortality at older ages, relative mortality changes can be more readily compared between countries.
The correlations were calculated across the seven countries, two sexes, and (changes in the) four centralised cohorts. In addition, correlations were calculated separately for men and women, each (change in) centralised cohort, and each country. The correlations of the mortality levels were stratified by sex.
In an additional analysis, we correlated the relative changes in all cause mortality trends—that is, the deceleration or acceleration of mortality trends in both late middle age and old age. We did so to find out if mortality at late middle age and old age is not only related in terms of the direction of mortality changes (is an increase in the one associated with a decrease in the other?) but also in terms of the pace of the mortality change (is an acceleration of the one associated with a deceleration in the other?). This additional analysis was conducted as an attempt to explain the deceleration of old age mortality decline that was seen in Denmark, the Netherlands, and among Norwegian men.9
To check the robustness of our results, different age groups were used when correlating all cause mortality trends—that is, ages 70–79 instead of 55–69 and ages 80–94 instead of 80–89. For the latter analysis only three instead of four centralised cohorts could be analysed.
Among those aged 55–69, mortality levels were highest in Finland, England and Wales, and France (men), and lowest in Norway, Sweden, and the Netherlands (women) (table 1). Among those aged 80–89, Finland had by far the highest mortality level, and France the lowest level, especially among the more recent cohorts. Correlation between all cause mortality levels of those aged 55–69 and those aged 80–89, showed highly positive and significant correlations, among both men and women, although less so for the more recent cohorts (table 2).
All cause mortality generally declined over the five year centralised birth cohorts from 1895 to 1910 both among men and women aged 55–69 and among men and women aged 80–89 (table 1). The trends for men in Denmark, the Netherlands, and Norway were less favourable. Correlations between both relative and absolute changes in all cause mortality at ages 55–69 with those at ages 80–89 were significant and positive (0.61 and 0.58, respectively) (table 3, fig 2). Among men, the correlation coefficients were especially high (0.7). Among women, the correlations were lower (0.3) and not statistically significant. The correlations were strongest for the Netherlands, Norway, and Sweden (men only).
Circulatory disease mortality among those aged 55–69 generally declined over subsequent centralised birth cohorts (data not shown). For men, this decline started only among later cohorts. For Dutch men, circulatory disease mortality increased. Relative changes in circulatory disease mortality at ages 55–69 correlated significantly and positively with all cause mortality changes at ages 80–89 (0.61) (table 4). For women, the positive correlation was not significant. Correlations of all cause mortality changes at ages 80–89 with trends in mortality from ischaemic heart diseases and cerebrovascular diseases at ages 55–69 were also significant and positive, but less strong (0.40 and 0.42, respectively).
All cause mortality trends at ages 55–69 correlated significantly and positively with mortality trends in circulatory diseases and cancer at ages 80–89 (0.58 and 0.69, respectively) (table 5). The positive correlation for cancer mortality was seen for both men and women, whereas for circulatory diseases only for men. All cause mortality trends at ages 55–69 did not clearly correlate with mortality trends at ages 80–89 from diseases other than circulatory diseases and cancer, nor with diseases specifically related to old age, such as infectious diseases, pneumonia, diabetes mellitus, dementia, symptoms, and external causes of death. A significant positive correlation was found only for diabetes mellitus and symptoms (among women). While a few inverse correlations were seen, especially for pneumonia, these correlations were weak, inconsistent, and non-significant.
Acceleration or deceleration of mortality trends among those aged 80–89 was positively correlated with the pace of mortality change among those aged 55–69, although correlations were weak (0.27) and non-significant.
Trends in all cause mortality at ages 70–79 (instead of 55–69) correlated significantly and highly positive with trends in all cause mortality at ages 80–89 (0.69). The correlation of all cause mortality changes at ages 55–69 with those at ages 80–94 (instead of 80–89) was significant and positive (0.59) as well.
In this paper, we explored the relation between mortality trends in late middle age (55–69) and mortality trends in old age (80–89) for male and female cohorts born around 1895, 1900, 1905, and 1910 in seven European low mortality countries. All cause mortality changes at ages 80–89 are strongly positively correlated with all cause mortality changes at ages 55–69, especially among men, and in all countries. Virtually the same correlations were seen between all cause mortality changes at ages 80–89 and changes in circulatory disease mortality at ages 55–69. Mortality trends at ages 80–89 from diseases specifically related to old age—that is, infectious diseases, pneumonia, diabetes mellitus, symptoms, and external causes, showed no clear negative correlations with all cause mortality trends at ages 55–69.
This evidence suggests that mortality selection has not been a driving factor behind old age mortality trends in the countries under study. Our results were found robust against the selection of different age groups (70–79 instead of 55–69 and 80–94 instead of 80–89). Furthermore, we found no indications that the recent deceleration of the mortality decline among the elderly population in Denmark, the Netherlands, and Norway was related to accelerated declines in mortality at earlier ages of the same cohorts.
This study is unique in its attempt to link, in a cohort-wise manner, trends in middle age mortality with trends in old age mortality. Perhaps closest to our study is a study by Manton on the effects of increases in life expectancy at advanced ages on mortality from conditions associated with a debilitation at those ages. He also found no evidence for decreased selectivity.29 Persons who survive up to age 85 were on average healthier than their predecessors, in contrast with what would be expected according to the mortality selection theory when mortality is improving.
Evaluation of data and methods
The mortality and population data used in this study stem from countries considered to have good or excellent population and vital registries.4,5 Reported survivorship counts are highly accurate.5,34 Comparison of our mortality data with the mortality data among those aged 80 and over from the Kannisto-Thatcher database5—in which the data were checked for age heaping and were subjected to a number of checks for plausibility—showed only small discrepancies.
In our analysis, we applied a mixed cohort approach—that is, a cohort approach applied to period data. A pure cohort approach was difficult to conduct with the available data, and would have led to the inclusion of only three subsequent five year cohorts, which we considered too few for correlation analyses. A disadvantage of the mixed cohort approach is that it cannot clearly separate subsequent cohorts. Consequently, the identified cohorts overlap, which could lead to an underestimation of mortality trends and possibly to a dilution of the strength of the correlations between trend estimates. However, some dilution of effect could not explain the observed positive instead of inverse relation between mortality trends at late middle age and old age.
The evidence on mortality selection effects is rather mixed, and studied predominantly by relating mortality at very early ages with adult mortality among single cohorts. Consequently, little is known on mortality selection effects of mortality trends at adult ages on old age mortality trends.
In all countries under study, trends in mortality at late middle age correlate positively with trends in old age mortality of the same birth cohorts.
Positive correlations are also seen with trends in mortality from cardiovascular diseases at late middle age. Weak, but not inverse correlations are seen with trends in mortality from diseases specifically related to old age.
The observed positive correlations point to effects of early life circumstances carried throughout life and prolonged exposure to, or longlasting effect of, risk factors emerging in adult life. Effects of mortality selection seem to be of lesser importance in determining old age mortality trends.
Our results do not support the concern that strong declines in middle age mortality will lead to an increase in old age mortality for the same cohorts.
We made an extraordinary effort to deal with ICD and other coding related changes that can affect cause specific mortality trends, and that are often neglected in other studies. Even though some residual effects of coding problems could not be excluded, we expect that these problems did not affect the results to any substantial extent.8,9,33
We do not expect that the potentially inferior quality of cause of death coding among the very elderly as compared with the late middle age would substantially affect our results. The quality of coding can only bias the correlation of trends if clear changes in the quality of coding over time occurred, which is unlikely.
Explanations of the absence of hypothesised inverse correlations
One possible explanation of the lack of negative correlations is that our study lacked potential to empirically observe an effect of mortality selection, because of comparatively little variation in mortality at younger ages between subsequent cohorts. Larger variations were, however, seen between the selected countries. Moreover, life table calculations for the Netherlands in 1950 showed that the mortality declines among those aged 55–69 within most individual countries were large enough to influence mortality trends among those aged 80–89. Considering the extreme situation in which all people saved from dying in the younger group will eventually die in the older group, a 10% mortality reduction among those aged 55–69 would lead to a mortality increase at age 80–89 years of 14% among men, and of 11% among women.
The lack of empirical support to the mortality selection hypothesis could also possibly be explained by not considering the greatest advances in medical care and resulting mortality declines since the 1970s among those aged 55–69.35 Improvements in medical care and new treatments lead to higher prevalences in chronic disease for those who survive,29 which could result in higher mortality at higher ages from these diseases. Although we cannot exclude that more recent declines in mortality at ages 55–69 might influence trends in old age mortality for more recent cohorts and future periods differently, our finding that positive correlations were also seen among the more recent cohorts studied, which were also characterised by important declines in mortality at middle age, casts doubt about the potential of mortality selection to determine future old age mortality trends.
Explanations of the positive correlations observed
The consistently positive correlations seen in our analysis suggest the existence of parallel trends in late middle age mortality and old age mortality. This points to common mechanisms that develop in a cohort-wise fashion.
It has frequently been mentioned in literature that risks established early in life influence health conditions at adult ages. Examples of relevant exposures include nutrition in utero, exposure to infectious diseases, and/or socioeconomic circumstances in infancy or childhood.19–23,36 Less clear is whether the effects of early life events last until old age.17,26,27 Our results could indicate that effects of early life events or conditions, that have been shown to influence mortality risk at late middle age, have the potential to exert their influence until old age.
Prolonged exposure to, or longlasting effects of, risks emerging during adult age might be another factor contributing to the observed positive correlations. Additional analyses showed that changes in circulatory disease mortality at ages 55–69 correlated most strongly with changes in mortality at ages 80–89 from circulatory diseases (0.61) and cancers (0.54), and hardly with old age mortality changes in remaining causes of death (0.15) and infectious diseases (−0.10). This indeed suggests that risk factors for circulatory diseases, like physical activity, hypertension, diet, smoking, and utilisation of medical care37–39 emerging during adult age, are common determinants of mortality at both adult and old age among the same cohorts. With respect to smoking among men, changes in all cause mortality at ages 55–69 were indeed strongly correlated to changes in mortality from lung cancer at ages 80–89 (0.71).
The consistently positive correlations between mortality changes at late middle age and mortality changes among the elderly population suggest that old age mortality trends in north western Europe in the late 20th century are determined predominantly by the effects of early life circumstances carried throughout life and prolonged exposure to, or longlasting effects of, risk factors emerging in adult life. Mortality selection has no discernible effect on secular trends in mortality. Our results, thus, do not support the concern that strong declines in middle age mortality will ultimately lead to increases in old age mortality for the same cohorts. In fact, the positive associations seen in our study suggest that recent trends in all cause mortality among the middle aged may be used to inform projections of future trends in all cause mortality among the elderly population.
We are grateful to Jacques Vallin (INED, France), Martine Bovet (INSERM, France), Hilkka Ahonen (Statfin, Finland), Annika Edberg (National Board of Health and Welfare, Sweden), Örjan Hemström (Sweden), Allan Baker and Glenn Meredith (ONS, England and Wales), Knud Juel (National Institute of Public Health, Denmark) and Jens-Kristian Borgan (Statistics Norway) for providing cause specific mortality and population data.
↵* The Netherlands Epidemiology and Demography Compression of Morbidity research group, which also includes J J Barendregt, L Bonneux, C de Laet, W J Nusselder, O Franco Duran, A Al Mamun, and F J Willekens.
Funding: this paper is part of a project financed by the sector of Medical Sciences of the Organisation for Scientific Research, the Netherlands (ZonMw).
Conflicts of interest: none.
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways. | <urn:uuid:701a5ac1-12c9-434a-9dcd-a8a9fa64927a> | CC-MAIN-2021-21 | https://jech.bmj.com/content/59/9/775?ijkey=fd3e4946f75fa8391267dff85148042147ceb4c5&keytype2=tf_ipsecsha | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989637.86/warc/CC-MAIN-20210518125638-20210518155638-00096.warc.gz | en | 0.948045 | 5,000 | 2.953125 | 3 |
Melanoma is the most aggressive form of skin cancer, representing over 10% of all skin cancers, but responsible for more than 80% of skin cancer-related deaths (1).
The mitogen-activated protein kinase (MAPK) pathway is a key oncogenic signaling system, a relay of kinases that culminates in cell proliferation, differentiation, and survival. Genomic classification of cutaneous melanoma has proposed four subtypes: BRAF mutant, NRAS mutant, NF1 loss, and triple wild-type.
The discovery of hotspot mutations in BRAF V600E, a key serine-threonine kinase in the RAS-RAF-MEK-ERK (MAPK) signaling pathway, led to the development of molecular targeted therapies for melanoma (2). Activating BRAF mutations are present in 50% of cutaneous melanomas without chronic sun damage, and also occur in other tumors, such as colorectal, ovarian, and thyroid cancers (Figure 1). In other clinical subtypes of melanoma, BRAF mutations are present in 10–20% of mucosal or acral melanomas, but absent in uveal melanoma (3,4). The most common mutations are substitutions of valine at codon 600 by glutamic acid (V600E) or by lysine (V600K, found in 20% of BRAF-mutant patients) (5). BRAF V600R, a substitution of valine by arginine, occurs in 7% of patients. Mutations in NRAS are the second most common genetic alteration, present in 20% of melanomas, and mutually exclusive with BRAF mutations (6). Twenty to thirty percent of mucosal melanomas harbor mutations or genomic amplification of cKIT (7). These genes are infrequently altered in cutaneous melanoma, while more than 85% of uveal melanomas contain mutations in GNAQ and GNA11; these mutations are also rarely present in cutaneous melanoma (8,9). BRAF inhibitors (vemurafenib, dabrafenib) and MEK inhibitors (trametinib) have been approved for the treatment of unresectable or metastatic BRAF-mutated melanoma, since they have shown improved progression-free survival (PFS) and overall survival (OS) compared with chemotherapy (10). Although responses and tumor control with BRAF inhibitors are impressive, durability of response is limited: disease progression can be seen within 6 to 8 months of starting therapy due to the development of resistance mechanisms (11,12). Combined therapy with BRAF and MEK inhibitors has shown benefit in PFS and response rate compared with monotherapy, delaying the appearance of alterations involved in resistance (13).
Mechanisms of resistance to MAPK pathway inhibition can be subdivided into two groups: MAPK-dependent and MAPK-independent. MAPK-dependent mechanisms, which reactivate BRAF/MEK/ERK signaling, include BRAF amplification, BRAF alternative splicing, NRAS mutation, MEK mutation, and loss of NF1. MAPK-independent mechanisms include up-regulation of receptor tyrosine kinases (RTKs) and overexpression of COT.
Primary and acquired resistance, and tumor heterogeneity
Numerous mechanisms of resistance have been detected using in vitro and in vivo models, and many have been observed in pre- and post-treatment tumor samples. It is very difficult to explain the behavior of neoplastic cells, but melanoma cells are highly heterogeneous, regardless of their mutational and epigenomic profile. Melanoma cells may become resistant under the selective pressure of therapy, either from preexisting resistant clones or secondarily, as an evolving process during treatment. Melanoma cells lack a fixed hierarchical organization but display great plasticity, with several tumor subclones sustained by the microenvironment. This microenvironment supports tumor growth and the maintenance of two populations: slow-cycling tumor cells, and cells undergoing epithelial to mesenchymal transition (EMT). Plasticity supports organization within the tumor and survival during treatment with BRAF inhibitors in vitro and in vivo (3–6% complete response) (14).
These mechanisms are known as primary or intrinsic resistance when no clinical benefit is achieved, and as secondary or acquired resistance when disease progresses after initial clinical benefit. Mechanisms of primary resistance include mutations in RAC1, loss of PTEN, and amplification of cyclin D; secondary resistance mechanisms include alternative splicing of BRAF, BRAF copy number amplification, and alterations in PI3K (Table 1).
Mechanisms of primary resistance
Loss of PTEN
Loss of PTEN occurs in 10–35% of melanomas, is mutually exclusive with NRAS mutations, and coexists with BRAF mutations. PTEN is lost in most melanomas by loss of heterozygosity, mutation, or methylation. PTEN serves as a tumor suppressor and a major negative regulator of PI3K signaling (15,16). Deletions or mutations in PTEN are associated with shorter PFS in patients treated with BRAF inhibitors. PTEN loss alone, however, is not sufficient to confer resistance to BRAF inhibitors; other concurrent alterations, such as activation of AKT, are necessary. Cell lines with inactivation of PTEN are less sensitive to BRAF inhibitors than wild-type PTEN melanoma cells (17). In clinical practice, patients with wild-type PTEN treated with BRAF inhibitors had longer PFS than patients with mutated PTEN (32.1 vs. 18 weeks; P=0.066) (18), and a weak association was seen between low PTEN expression and lower response rates in patients treated with BRAF inhibitors (19). Dual inhibition of BRAF and PI3K has been studied as a means of overcoming this resistance and restoring apoptosis in PTEN-deleted cells (20).
Dysregulation of cyclin-dependent kinase 4 (CDK4)
In the cell cycle, cyclin D1 regulates proliferation by binding to CDK4 and CDK6, which activate retinoblastoma protein and lead to cell cycle progression. CDK4 mutations and cyclin D1 amplification confer strong resistance to therapy with BRAF inhibitors (21). Cyclin D1 amplifications are found in about 20% of BRAF-mutated melanomas. Inhibitors of CDK4/6 (key regulators of the G1–S transition of the cell cycle) alone failed to decrease tumor size, but when combined with BRAF and MEK inhibitors, complete responses were achieved in 30% of mouse models (22).
Hepatocyte growth factor (HGF) and the microenvironment
Stromal cells secrete several factors, such as HGF, the ligand of the receptor c-MET, which can activate tumor cell growth in a paracrine fashion by upregulating PI3K signaling, thus conferring resistance to BRAF inhibitors or to combinations of BRAF and MEK inhibitors (23). It has been reported in cell lines that combining BRAF inhibitors with AKT inhibitors or anti-MET therapies can overcome resistance through this pathway (24).
Loss of NF1
NF1 is a tumor suppressor that negatively regulates RAS; NF1 mutations are present in 14% of melanomas. Inactivation of NF1 leads to activation of RAS and of the PI3K-AKT-mTOR and MAPK pathways. NF1 mutations prevent BRAF inhibitor-induced senescence of melanoma cells; in addition, NF1 mutations and NRAS mutations can coexist with inactivating BRAF mutations, in which case RAS isoforms are required for the pro-tumorigenic activity of these cells. In this scenario, one means of overcoming resistance to BRAF inhibition is the combination of MEK and mTOR inhibitors (25).
RAC1 is a key regulator of cell motility and proliferation and a GTPase effector of RAS. RAC1 mutations are present in 9% of melanomas, coexisting with BRAF and NRAS mutations. In clinical practice, in a cohort of 45 patients treated with BRAF inhibitors, 14 showed primary resistance; three of these harbored RAC1 mutations, and none of them achieved a response to therapy (26,27).
Mechanisms of secondary resistance
Acquired resistance mechanisms are associated mainly with reactivation of the MAP kinase pathway (>70%); this sometimes coexists with reactivation of the PI3K-AKT pathway, and in a smaller percentage of patients resistance depends exclusively on parallel reactivation of the PI3K-PTEN-AKT-mTOR pathway.
BRAF inhibitors suppress MEK/ERK signaling in BRAF-mutant cells, but paradoxically activate MEK/ERK signaling in RAS-mutant cells. In the presence of oncogenic RAS, BRAF inhibitors promote the formation of CRAF-BRAF heterodimers or homodimers in which one protomer is bound by the inhibitor and the other remains drug-free. Through conformational changes, the drug-bound protomer transactivates the drug-free protomer, activating CRAF and ultimately MEK-ERK signaling. Combined targeting with BRAF and MEK inhibitors has been tested to overcome this resistance.
To date, no secondary gatekeeper BRAF mutations have been found. Two aberrations affecting the BRAF gene have been described: gene copy number gain or amplification of BRAF, and alternative splicing of BRAF. Amplification of BRAF is a copy gain of the mutant BRAF allele, resulting in overexpression and leading to reactivation of ERK independently of RAS (28). This aberration has been detected in about 20% of melanomas after treatment with BRAF inhibitors. ERK reactivation could be blocked with higher doses of BRAF inhibitors or with the combination of BRAF and MEK inhibitors. However, BRAF amplification has also been detected in patients treated with the combination of MEK and BRAF inhibitors. BRAF splicing is present in 32% of melanomas (29). The combination of BRAF and MEK inhibitors, or single-agent therapy with ERK inhibitors, should prevent this phenomenon, although BRAF splicing has also been detected in patients treated with the combination of BRAF and MEK inhibitors.
NRAS mutations (G12, G13, Q61), occurring at codon 12, 13, or 61, together with mutations of NF1, drive MAPK activation in 30% of melanomas. BRAF and NRAS mutations are considered to be mutually exclusive. NRAS mutations not only activate the MAPK pathway but are also thought to activate the PI3K pathway.
NRAS mutations are the second most common oncogenic alteration in melanoma (20%) and represent a clinical problem, since they are associated with more aggressive tumors and shorter survival in both early and late stage melanoma (30,31). In patients treated with BRAF inhibitors, mutant NRAS activates signal transduction through CRAF, resulting in paradoxical transactivation of MAPK signaling via dimerization of BRAF and CRAF (32). Preclinical data in NRAS-mutated models supported the use of MEK, ERK, and pan-RAF inhibitors due to their high level of activity. In clinical trials, a MEK inhibitor (binimetinib) achieved a 20% response rate in NRAS-mutant melanoma (33). Two trials have now completed enrollment: a phase II comparing pimasertib versus dacarbazine, and a phase III comparing a MEK inhibitor (binimetinib) versus dacarbazine in an NRAS-mutated population. Data presented, but not yet published, from this phase III trial of binimetinib showed a significant benefit in PFS. Preclinical data are interesting, although the benefit from MEK inhibitors is transient (34). One possibility, therefore, is to target the last effector of this pathway, in this case by blocking CDK4. Clinical trials are ongoing with the combination of MEK and CDK4/6 inhibitors (35). One clinical trial of a MEK inhibitor with a CDK4/6 inhibitor, the binimetinib plus LEE011 combination, showed a 33% response rate in the NRAS-mutant population, with good tolerability.
Hyperactivation of RTKs
Overexpression or hyperactivation of RTKs can drive resistance by activating parallel pathways or by direct induction of the RAS pathway (36). The most frequently involved receptors are platelet-derived growth factor receptor beta (PDGFRβ) and insulin-like growth factor 1 receptor (IGF-1R) (37,38). The activation of these receptors is due to epigenetic changes. Activation of RTKs induces additional activation of the PI3K pathway in patients treated with BRAF or MEK inhibitors, thereby leading to resistance (39). The epidermal growth factor receptor (EGFR) gene is not normally expressed in untreated melanoma, but in some patients who develop resistance to BRAF or MEK inhibitors, overexpression of EGFR is induced by negative feedback. In this case, it is possible to restore sensitivity by inhibiting EGFR (40).
Aberrations in PI3K-PTEN-AKT pathway
The MAPK pathway is deregulated in more than 70% of melanomas, and the PI3K/AKT/mTOR pathway in more than 50%.
In 10–20% of cases that develop early resistance, or are intrinsically resistant to MAPK inhibition, there is loss of PTEN or mutation of PI3K or AKT. Experiments in melanoma cell lines support combined treatment with BRAF/MEK plus PI3K/AKT inhibitors to overcome resistance. Although the results in preclinical models are promising, clinical data are currently limited (41,42).
Targeted therapy in non-cutaneous melanoma
Uveal melanoma (5% of all melanomas) harbors mutations in GNAQ/GNA11 (codon 209 or 183) in more than 80% of cases; these result in partial or complete loss of GTPase activity, thereby leading to constitutive activation of effector pathways. This aberration activates the MAPK or PI3K pathways or protein kinase C, and the activation can be suppressed by PKC inhibitors. In a phase II clinical trial, the MEK inhibitor selumetinib as monotherapy was compared with chemotherapy. The study showed a benefit in terms of response rate and PFS, but no improvement in terms of OS (43). Ongoing clinical trials are testing the combination of a MEK inhibitor (trametinib) with an AKT inhibitor (GSK2141795), the PKC inhibitor AEB071 as a single agent, or AEB071 combined with MEK or PI3K inhibitors.
Mucosal and acral melanomas (3% of all melanomas) harbor mutations or amplifications of cKIT in 20–30% of cases. Activating KIT mutations lead to constitutive KIT tyrosine kinase activity, stimulating the MAPK and PI3K/AKT pathways; mutation, rather than amplification, predicts response to tyrosine kinase inhibitors. The cKIT inhibitor imatinib was tested in three clinical trials, demonstrating a response rate of around 30%. Clinical trials with other cKIT inhibitors (nilotinib, dasatinib, sunitinib) have been completed and results are pending (44,45) (Table 2).
Strategies to overcome resistance
Currently, the combination of BRAF and MEK inhibitors represents the gold standard of targeted therapy in BRAF mutated melanoma. However, even with this combination, efficacy is limited due to development of resistance.
There are several strategies for overcoming such resistance, such as combination with other targeted therapies, sequential or intermittent treatment schedules, and the combination of targeted therapy with immunotherapy.
The addition of a third drug might help to overcome resistance and several trials are ongoing testing the triple combination of MEK plus BRAF inhibitors with MET, FGF, CDK, VEGF, or mTOR inhibitors (46,47).
Treatment with MAPK inhibitors increases the expression of melanocytic antigens and CD8 lymphocyte infiltration. This observation supports possible synergism between targeted therapy and immunotherapy (50). An early attempt, combining a BRAF inhibitor with an anti-CTLA-4 antibody (ipilimumab), failed due to high-grade hepatotoxicity in the phase I trial, which led to early termination of the study (51). Results of the clinical trial testing the sequential combination of dabrafenib plus ipilimumab are pending. Newer immunotherapeutic agents, such as anti-PD-1 antibodies (pembrolizumab, nivolumab), have demonstrated much higher activity and less toxicity than the anti-CTLA-4 antibody.
Tumor-infiltrating cytotoxic CD8 lymphocytes are a component of the adaptive immune response against melanoma-associated antigens; after treatment with BRAF inhibitors, circulating CD8 cells sustain a strong inflammatory response with cytotoxic effects. The exhaustion profile of CD8 cells, which leads to their inability to proliferate and produce cytokines (IL-2, IFN), is mediated by up-regulation of inhibitory signaling pathways such as PD-1, PD-L1 and CTLA-4 (52,53). Clinical trials are underway to determine the clinical activity of the combination of BRAF inhibitors with anti-PD-1 antibodies (Figure 2).
Preclinical studies have demonstrated that intermittent, as opposed to continuous, therapy with a BRAF/MEK inhibitor may delay the development of acquired resistance (54). Several studies assessing sequential or intermittent dosing of BRAF and MEK inhibitors are ongoing. In the phase II COMBAT study (CT.gov: NCT02224781), patients are randomized to the upfront combination of dabrafenib and trametinib versus the same combination given after 8 weeks of monotherapy with dabrafenib or trametinib. Serial biopsies on treatment and at progression are used to assess biomarkers related to response or resistance. Another clinical trial, SWOG study S1320 (CT.gov: NCT02196181), is evaluating an intermittent schedule: after an 8-week lead-in period on the combination of dabrafenib and trametinib, patients without disease progression are assigned to continuous dosing or to intermittent dosing (5 weeks on, 3 weeks off). This study also includes serial biopsies to determine resistance mechanisms.
Reactivation of the MAPK pathway leads to highly recurrent transcriptomic alterations across resistant tumors which, in contrast to mutations, were correlated with differential methylation. The authors identified c-MET over-expression, LEF1 down-expression and YAP1 signature enrichment in the tumors as drivers of acquired resistance. They also observed that high intra-tumoral cytolytic T cell inflammation prior to BRAF inhibitor therapy preceded CD8 T cell exhaustion and loss of antigen presentation in half of progressive melanomas, suggesting resistance to anti-PD-1/PD-L1 therapy.
Adaptive mechanisms of resistance arise in the presence of BRAF/MEK inhibitors. During the early phase, when patients still respond to the drug with inhibition of the MAPK pathway, adaptive resistance to BRAF inhibitors can occur within the first 24–48 hours, dampening the inhibitor's effect. The adaptive signaling observed involves acquired EGFR and PDGFR expression; increased sensitivity to growth factors such as EGF, FGF, HGF and neuregulin-1; increased AKT phosphorylation; up-regulation of ERBB3; and enhanced MITF expression (55).
Recently, it has been reported that the oncogene MITF is a driver of an early, non-mutational and reversible drug-tolerant state, induced by PAX3-mediated up-regulation of MITF before acquired resistance develops. Nelfinavir, an HIV-1 protease inhibitor, was shown to be a potent suppressor of PAX3 and MITF expression. Nelfinavir sensitizes BRAF-, NRAS- and PTEN-mutant melanoma cells to MAPK inhibitors (56).
Targeted therapies are highly active drugs against metastatic melanoma. Different mechanisms of resistance have been described: epigenetic (57), genomic (58) and phenotypic (59) changes produce several alterations, leading to intrinsic, acquired or adaptive resistance. Tumor heterogeneity is a major driver of resistance in melanoma. In clinical practice, the combination of BRAF and MEK inhibitors is the gold standard for metastatic BRAF-mutant melanoma patients. The combination is highly active, but the duration of response is limited by the development of acquired and adaptive resistance mechanisms. To overcome this phenomenon, different strategies are being pursued, such as combination with other drugs (CDK, PI3K, ERK and AKT inhibitors), intermittent schedules, and combination with immunotherapy.
José Luís Manzano was supported by Fondo de Investigación Sanitaria (FIS)—Instituto de Salud Carlos III (ISCIII). Anna Martínez Cardús was supported by Red Temática de Investigación Cooperativa en Cáncer (RTICC) and Olga Torres Private Foundation.
Conflicts of Interest: The authors have no conflicts of interest to declare.
- Tsao H, Atkins MB, Sober AJ. Management of cutaneous melanoma. N Engl J Med 2004;351:998-1012. [Crossref] [PubMed]
- Davies H, Bignell GR, Cox C, et al. Mutations of the BRAF gene in human cancer. Nature 2002;417:949-54. [Crossref] [PubMed]
- Menzies AM, Haydu LE, Visintin L, et al. Distinguishing clinicopathologic features of patients with V600E and V600K BRAF-mutant metastatic melanoma. Clin Cancer Res 2012;18:3242-9. [Crossref] [PubMed]
- Sosman JA, Kim KB, Schuchter L, et al. Survival in BRAF V600-mutant advanced melanoma treated with vemurafenib. N Engl J Med 2012;366:707-14. [Crossref] [PubMed]
- Hodis E, Watson IR, Kryukov GV, et al. A landscape of driver mutations in melanoma. Cell 2012;150:251-63. [Crossref] [PubMed]
- Sullivan RJ, Flaherty K. MAP kinase signaling and inhibition in melanoma. Oncogene 2013;32:2373-9. [Crossref] [PubMed]
- Curtin JA, Busam K, Pinkel D, et al. Somatic activation of KIT in distinct subtypes of melanoma. J Clin Oncol 2006;24:4340-6. [Crossref] [PubMed]
- Van Raamsdonk CD, Bezrookove V, Green G, et al. Frequent somatic mutations of GNAQ in uveal melanoma and blue naevi. Nature 2009;457:599-602. [Crossref] [PubMed]
- Van Raamsdonk CD, Griewank KG, Crosby MB, et al. Mutations in GNA11 in uveal melanoma. N Engl J Med 2010;363:2191-9. [Crossref] [PubMed]
- Flaherty KT, Infante JR, Daud A, et al. Combined BRAF and MEK inhibition in melanoma with BRAF V600 mutations. N Engl J Med 2012;367:1694-703. [Crossref] [PubMed]
- Larkin J, Ascierto PA, Dréno B, et al. Combined vemurafenib and cobimetinib in BRAF-mutated melanoma. N Engl J Med 2014;371:1867-76. [Crossref] [PubMed]
- Long GV, Stroyakovskiy D, Gogas H, et al. Combined BRAF and MEK inhibition versus BRAF inhibition alone in melanoma. N Engl J Med 2014;371:1877-88. [Crossref] [PubMed]
- Robert C, Karaszewska B, Schachter J, et al. Improved overall survival in melanoma with combined dabrafenib and trametinib. N Engl J Med 2015;372:30-9. [Crossref] [PubMed]
- Dummer R, Flaherty KT. Resistance patterns with tyrosine kinase inhibitors in melanoma: new insights. Curr Opin Oncol 2012;24:150-4. [Crossref] [PubMed]
- Paraiso KH, Xiang Y, Rebecca VW, et al. PTEN loss confers BRAF inhibitor resistance to melanoma cells through the suppression of BIM expression. Cancer Res 2011;71:2750-60. [Crossref] [PubMed]
- Xing F, Persaud Y, Pratilas CA, et al. Concurrent loss of the PTEN and RB1 tumor suppressors attenuates RAF dependence in melanomas harboring (V600E)BRAF. Oncogene 2012;31:446-57. [Crossref] [PubMed]
- Nathanson KL, Martin AM, Wubbenhorst B, et al. Tumor genetic analyses of patients with metastatic melanoma treated with the BRAF inhibitor dabrafenib (GSK2118436). Clin Cancer Res 2013;19:4868-78. [Crossref] [PubMed]
- Shao Y, Aplin AE. Akt3-mediated resistance to apoptosis in B-RAF-targeted melanoma cells. Cancer Res 2010;70:6670-81. [Crossref] [PubMed]
- Trunzer K, Pavlick AC, Schuchter L, et al. Pharmacodynamic effects and mechanisms of resistance to vemurafenib in patients with metastatic melanoma. J Clin Oncol 2013;31:1767-74. [Crossref] [PubMed]
- Shi H, Hugo W, Kong X, et al. Acquired resistance and clonal evolution in melanoma during BRAF inhibitor therapy. Cancer Discov 2014;4:80-93. [Crossref] [PubMed]
- Smalley KS, Lioni M, Dalla Palma M, et al. Increased cyclin D1 expression can mediate BRAF inhibitor resistance in BRAF V600E-mutated melanomas. Mol Cancer Ther 2008;7:2876-83. [Crossref] [PubMed]
- Flaherty KT, Lorusso PM, Demichele A, et al. Phase I, dose-escalation trial of the oral cyclin-dependent kinase 4/6 inhibitor PD 0332991, administered using a 21-day schedule in patients with advanced cancer. Clin Cancer Res 2012;18:568-76. [Crossref] [PubMed]
- Straussman R, Morikawa T, Shee K, et al. Tumour micro-environment elicits innate resistance to RAF inhibitors through HGF secretion. Nature 2012;487:500-4. [Crossref] [PubMed]
- Wilson TR, Fridlyand J, Yan Y, et al. Widespread potential for growth-factor-driven resistance to anticancer kinase inhibitors. Nature 2012;487:505-9. [Crossref] [PubMed]
- Gibney GT, Smalley KS. An unholy alliance: cooperation between BRAF and NF1 in melanoma development and BRAF inhibitor resistance. Cancer Discov 2013;3:260-3. [Crossref] [PubMed]
- Krauthammer M, Kong Y, Ha BH, et al. Exome sequencing identifies recurrent somatic RAC1 mutations in melanoma. Nat Genet 2012;44:1006-14. [Crossref] [PubMed]
- Van Allen EM, Wagle N, Sucker A, et al. The genetic landscape of clinical resistance to RAF inhibition in metastatic melanoma. Cancer Discov 2014;4:94-109. [Crossref] [PubMed]
- Poulikakos PI, Persaud Y, Janakiraman M, et al. RAF inhibitor resistance is mediated by dimerization of aberrantly spliced BRAF(V600E). Nature 2011;480:387-90. [Crossref] [PubMed]
- Shi H, Moriceau G, Kong X, et al. Melanoma whole-exome sequencing identifies (V600E)B-RAF amplification-mediated acquired B-RAF inhibitor resistance. Nat Commun 2012;3:724. [Crossref] [PubMed]
- Devitt B, Liu W, Salemi R, et al. Clinical outcome and pathological features associated with NRAS mutation in cutaneous melanoma. Pigment Cell Melanoma Res 2011;24:666-72. [Crossref] [PubMed]
- Jakob JA, Bassett RL Jr, Ng CS, et al. NRAS mutation status is an independent prognostic factor in metastatic melanoma. Cancer 2012;118:4014-23. [Crossref] [PubMed]
- Heidorn SJ, Milagre C, Whittaker S, et al. Kinase-dead BRAF and oncogenic RAS cooperate to drive tumor progression through CRAF. Cell 2010;140:209-21. [Crossref] [PubMed]
- Ascierto PA, Schadendorf D, Berking C, et al. MEK162 for patients with advanced melanoma harbouring NRAS or Val600 BRAF mutations: a non-randomised, open-label phase 2 study. Lancet Oncol 2013;14:249-56. [Crossref] [PubMed]
- Kwong LN, Costello JC, Liu H, et al. Oncogenic NRAS signaling differentially regulates survival and proliferation in melanoma. Nat Med 2012;18:1503-10. [Crossref] [PubMed]
- Sosman JA, Kittaneh M, Lokelma MPJK, et al. A phase 1b/2 study of LEE011 in combination with binimetinib in patients with NRAS-mutant melanoma: early encouraging clinical activity. J Clin Oncol 2014;32:abstr 9009.
- Deng W, Gopal YN, Scott A, et al. Role and therapeutic potential of PI3K-mTOR signaling in de novo resistance to BRAF inhibition. Pigment Cell Melanoma Res 2012;25:248-58. [Crossref] [PubMed]
- Gopal YN, Deng W, Woodman SE, et al. Basal and treatment-induced activation of AKT mediates resistance to cell death by AZD6244 (ARRY-142886) in Braf-mutant human cutaneous melanoma cells. Cancer Res 2010;70:8736-47. [Crossref] [PubMed]
- Shi H, Kong X, Ribas A, et al. Combinatorial treatments that overcome PDGFRβ-driven resistance of melanoma cells to V600EB-RAF inhibition. Cancer Res 2011;71:5067-74. [Crossref] [PubMed]
- Goel VK, Lazar AJ, Warneke CL, et al. Examination of mutations in BRAF, NRAS, and PTEN in primary cutaneous melanoma. J Invest Dermatol 2006;126:154-60. [Crossref] [PubMed]
- Sun C, Wang L, Huang S, et al. Reversible and adaptive resistance to BRAF(V600E) inhibition in melanoma. Nature 2014;508:118-22. [Crossref] [PubMed]
- Rodrik-Outmezguine VS, Chandarlapaty S, Pagano NC, et al. mTOR kinase inhibition causes feedback-dependent biphasic regulation of AKT signaling. Cancer Discov 2011;1:248-59. [Crossref] [PubMed]
- Chandarlapaty S, Sawai A, Scaltriti M, et al. AKT inhibition relieves feedback suppression of receptor tyrosine kinase expression and activity. Cancer Cell 2011;19:58-71. [Crossref] [PubMed]
- Carvajal RD, Sosman JA, Quevedo JF, et al. Effect of selumetinib vs chemotherapy on progression-free survival in uveal melanoma: a randomized clinical trial. JAMA 2014;311:2397-405. [Crossref] [PubMed]
- Hodi FS, Corless CL, Giobbie-Hurder A, et al. Imatinib for melanomas harboring mutationally activated or amplified KIT arising on mucosal, acral, and chronically sun-damaged skin. J Clin Oncol 2013;31:3182-90. [Crossref] [PubMed]
- Carvajal RD, Antonescu CR, Wolchok JD, et al. KIT as a therapeutic target in metastatic melanoma. JAMA 2011;305:2327-34. [Crossref] [PubMed]
- Lee B, Sandhu S, McArthur G. Cell cycle control as a promising target in melanoma. Curr Opin Oncol 2015;27:141-50. [Crossref] [PubMed]
- Sullivan RJ, Fisher DE. Understanding the biology of melanoma and therapeutic implications. Hematol Oncol Clin North Am 2014;28:437-53. [Crossref] [PubMed]
- Das Thakur M, Salangsang F, Landman AS, et al. Modelling vemurafenib resistance in melanoma reveals a strategy to forestall drug resistance. Nature 2013;494:251-5. [Crossref] [PubMed]
- Das Thakur M, Stuart DD. Molecular pathways: response and resistance to BRAF and MEK inhibitors in BRAF(V600E) tumors. Clin Cancer Res 2014;20:1074-80. [Crossref] [PubMed]
- Frederick DT, Piris A, Cogdill AP, et al. BRAF inhibition is associated with enhanced melanoma antigen expression and a more favorable tumor microenvironment in patients with metastatic melanoma. Clin Cancer Res 2013;19:1225-31. [Crossref] [PubMed]
- Sullivan RJ, Lorusso PM, Flaherty KT. The intersection of immune-directed and molecularly targeted therapy in advanced melanoma: where we have been, are, and will be. Clin Cancer Res 2013;19:5283-91. [Crossref] [PubMed]
- Jiang X, Zhou J, Giobbie-Hurder A, et al. The activation of MAPK in melanoma cells resistant to BRAF inhibition promotes PD-L1 expression that is reversible by MEK and PI3K inhibition. Clin Cancer Res 2013;19:598-609. [Crossref] [PubMed]
- Atefi M, Avramis E, Lassen A, et al. Effects of MAPK and PI3K pathways on PD-L1 expression in melanoma. Clin Cancer Res 2014;20:3446-57. [Crossref] [PubMed]
- Das Thakur M, Stuart DD. The evolution of melanoma resistance reveals therapeutic opportunities. Cancer Res 2013;73:6106-10. [Crossref] [PubMed]
- Hugo W, Shi H, Sun L, et al. Non-genomic and Immune Evolution of Melanoma Acquiring MAPKi Resistance. Cell 2015;162:1271-85. [Crossref] [PubMed]
- Smith MP, Brunton H, Rowling EJ, et al. Inhibiting Drivers of Non-mutational Drug Tolerance Is a Salvage Strategy for Targeted Melanoma Therapy. Cancer Cell 2016;29:270-84. [Crossref] [PubMed]
- Vizoso M, Ferreira HJ, Lopez-Serra P, et al. Epigenetic activation of a cryptic TBC1D16 transcript enhances melanoma progression by targeting EGFR. Nat Med 2015;21:741-50. [Crossref] [PubMed]
- Lin L, Sabnis AJ, Chan E, et al. The Hippo effector YAP promotes resistance to RAF- and MEK-targeted cancer therapies. Nat Genet 2015;47:250-6. [Crossref] [PubMed]
- Roesch A. Tumor heterogeneity and plasticity as elusive drivers for resistance to MAPK pathway inhibition in melanoma. Oncogene 2015;34:2951-7. [Crossref] [PubMed] | <urn:uuid:a11ccc2d-75c9-4371-b72b-fc10926f1037> | CC-MAIN-2021-21 | https://atm.amegroups.com/article/view/10777/html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00576.warc.gz | en | 0.848672 | 7,816 | 2.546875 | 3 |
What did the vegetation look like?
The evidence suggests that the vegetation has changed little since the time of the battle. Two detailed analyses have been done. The first was on Fulford Ings.
“.. the pollen grains present and the respective percentage assemblages suggested little change in vegetation between 1720-1860 A.D. and 1060-1080 A.D., at which time the Battle of Fulford took place on the site. The landscape, in both cases, was dominated by Gramineae (grasses), Cyperaceae (sedges) and Typha latifolia (bulrush), with relatively little tree pollen, similar to present day cover.”[i]
“When the percentages are expressed as total tree pollen plus Corylus it is apparent that Salix (willow) and Fraxinus (ash) account for most of the tree pollen at the base of the profile [1000CE], whilst Salix, Betula (birch), Alnus (alder) and Corylus (hazel) are important at the top [1800CE]. There is no Fraxinus pollen found at the top of the core. Quercus (oak) and Pinus (pine) percentages are similar throughout the profile.”[ii]
This suggests that the landscape along the Ings has become drier since the time of the battle which is consistent with the drainage that followed the enclosures discussed earlier.
The second area which has been subject to a detailed analysis was in the central section of the beck. Samples were taken at various depths in the 2.6m deep layer of peat using an excavator because the ground was unstable. In the top layer,
“The fauna consisted of aquatic and waterside species, including rather abundant Plateumaris sp., typical of emerging vegetation. This deposit may have formed in a swamp, perhaps representing an early stage of terrestrialisation of the more fully aquatic conditions seen in [the lower contexts]”[iii]
The same work notes an absence of dung beetles which, given the lack of evidence for fauna typical of grazing land, suggests that this part of the Beck was not used for animal husbandry when the peat was growing.
At the base, an alder twig was dated to 2060±35 BP (73 BCE) and a sedge nutlet from the top sample gave a date of 1385±35 BP (636 CE) [iv]. This timescale spans the late Iron Age to the Anglian or mid-Saxon era. It is possible that the peat continued to grow after that time but the later layers could have been removed by cutting. However, the terminal date of around 600 CE matches the model for flooding. Around this date the growth of the peat might have been checked by the influx of alluvium from the Ings.
This suggests that in the four centuries before the battle, the basin of the beck was transforming from a growing peat bog to a swamp. The channelisation of the beck along the southern edge of the peat has covered the peat with the spoil that is removed whenever the ditch is maintained. If this modern 20+cm layer is scraped back, it is possible to tread on the peat which will support a person’s weight when moving but one will slowly sink if standing still.
This would have been the soft surface of 1066 that separated the two armies. As this area is adjacent to the fording area it would have been hard, but not impossible, to cross, as the interpretation of the battle proposed later suggests.
Although the sampling of environmental evidence is limited, the data is consistent with the story derived from the literature which suggests that this was relatively open land. There is no suggestion that woodlands played any part in the battle. However, the landscape did support many hedgerows as one would expect in a landscape where livestock were grazed.
The enclosure acts of the late 17th to 19th century led to the planting of many hedges and produced the chequered pattern of hedged-fields which are still common in the British countryside today. Hedges were used because they were cheap, effective and largely self-maintaining fences. Hawthorn was the most widely planted species because of its dense growth of thorns. However, in time, any hedge develops to incorporate mixed species where the seeds are deposited by the wind or wildlife.
Figure 3.12 Pattern of trees: This 1891 OS map seems to show that field boundaries are marked with trees or hedges. Many of these can be traced today. The modern landscape suggests that it is individual trees that are marked, rather than hedges, although the latter might be implied by the boundary. Many of the trees along Germany Beck can be identified. Very few native species can expect to survive 1000 years but it is believed that replacements would be encouraged along the existing boundaries, so they can be taken as an indicator of an ancient pattern. Also shown on this map are the extensive gravel workings just north of St Oswald’s Road along the line of the putative road. The railway that carried the gravel to a landing at the southern end of the Ings remained in use to supply the army Ordnance Depot but has, in the last decade, been covered by alluvial build-up.[v]
Following suggestions by Max Hooper in 1971, it was hypothesised that the number of woody species might indicate the age of a hedge. There is no strict, biological reasoning behind this rule, but studies, mostly in the east of England, suggest a good correlation between the number of species and the date when the hedge was planted. There were 72 hedges in the sample he used that could be traced back to the 10th century through written records.
Hooper’s Rule is popular with landscape historians, but has evoked scepticism among botanists and it is clear that this dating method needs to be used in conjunction with other evidence.
“Hooper's Rule can distinguish hedges of the Enclosure Act period from those of Stuart or Tudor times or of the Middle Ages. We cannot expect it to date hedges more precisely, especially as many of the documents which form the primary evidence record the existence rather than the date of origin of a hedge. At present the rule seems not to extend back more than 1100 years; it does not differentiate Anglo-Saxon from Roman hedges.”[vi]
Figure 3.13 Hooper’s Rule chart: This working chart was compiled from various sources and correlates the number of species in a given length with the possible age of the hedge. The grey area indicates the degree of uncertainty that attaches to any estimation.
Our work might also challenge the rule, as there seems to be a limit to the number of species found in very ancient hedges in this area. There are a number of explanations for this: disease and the removal of some species might distort the figures for older hedges.[vii] However, even allowing for any distortions or margins of error, a study of hedges yields some useful information.
During the original desktop study on behalf of Hogg the Builders, 23 hedgerows were surveyed. The geology suggested that Germany Beck is a very ancient waterway and it is interesting to have this confirmed by the hedgerow survey. The mature ash and sycamore trees located within the survey area were estimated at 200-300 years old. “Some of the oldest species recorded were found to the north and south of Germany Lane.”
“There is no evidence of hawthorn which is the predominant species for hedge-row laying, but its general appearance suggests antiquity. This is further highlighted by the fact the modern dyke [Germany Beck] which runs parallel to hedge 1 totally respects it.” Using Hooper’s rule, the author of the desktop study suggests that the hedges along the beck range from at least 800-500 years old.
“Hedge 5 forms the western boundary to East Moor Field and Mitchell's Lane. There are nine species established in this hedge. The presence of Field Maple indicates its antiquity as this species, common in Lowland England, is often found in old hedges. Both the Ash and Willow were also of a well-established age, greater than 200 years old. Hedge 6 and 16 follow the course of Germany Beck and, as expected, suggest the antiquity of this watercourse and its associated hedgerows.”[viii]
Figure 3.14 Hedge dating: These data, gathered using the Hooper’s Rule methodology, indicate that this is an ancient hedge, and it is possible to suggest that in 1066 the forces would have encountered a hedge in this location. The interpretation provided later in this report suggests this hedge lay by the left flank of the English army and might have afforded them protection when they were obliged to retreat along the beck in the final stage of the battle.
‘Hedge 16’ was re-surveyed as a part of the project by Laura Winter and Ken Gill. Their work revealed the diversity among the surviving hedges near the beck in August 2004, 10 years after the original survey. This allowed the age to be quantified following Hooper’s Rule.
The hedges were analysed in 30 yard sections and the species counted. The 305 yards assessed provides an average of 6.27 species which would suggest that the hedges date back to the 14th century. The authors noted that some sections appear to have been disturbed and taking only the central section (B-I) the average is 7.13 species. Using Hooper’s Rule, this would move the date to the century after the battle although the provisos entered earlier should be noted.
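The arithmetic behind these estimates can be sketched using the commonly cited form of Hooper's Rule, age ≈ 110 × (woody species per 30-yard section) + 30 years. The coefficients and the 2004 survey year below are the textbook approximation and the survey date given above, not figures derived independently in this report, and the rule carries an uncertainty of roughly ±200 years:

```python
def hooper_age_years(species_count: float) -> float:
    """Estimate hedge age (years) from the average number of woody
    species per 30-yard section, using the commonly cited form of
    Hooper's Rule: age ~= 110 * species + 30 (roughly +/- 200 years)."""
    return 110.0 * species_count + 30.0


def estimated_planting_year(species_count: float, survey_year: int = 2004) -> int:
    """Convert a Hooper age estimate into an approximate planting year."""
    return survey_year - round(hooper_age_years(species_count))


# Whole hedge, sections A-K: average of 6.27 species per 30-yard section
print(estimated_planting_year(6.27))  # → 1284 (13th/14th century)

# Central, least-disturbed sections B-I: average of 7.13 species
print(estimated_planting_year(7.13))  # → 1190 (within ~125 years of 1066)
```

On these figures the central stretch comes out closest to the battle era, which matches the reading given above, though the provisos about disease, disturbance and the rule's coarse resolution still apply.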
It was also noted that few elms had survived because of Dutch elm disease and this might in places have distorted the model. There has been a substantial amount of damage done to this hedge at either end so it is legitimate to look at the peak central section rather than the average species count. All these indications strongly suggest that this hedge beside the beck is of great antiquity and could date back to the time of the battle.
It was also suggested that the present bend in the beck near the confluence and the brick bridge was not the original course. The observation was that sections J and K were younger (less species rich as well as less mature) which led to the suggestion that the beck formerly continued directly into the adjacent section.
Figure 3.15 Hedge locations: The 30 yard sections are marked on this section of the beck and the letters relate to the species table above.
Trenches dug by MAP along the line of the Beck indicate that the flow has indeed ‘wandered’ a little but has stayed within the clearly defined moraine bounds since the last geological upheaval. The land is covered by between 0.2 and 0.6 of a metre of silt and modern topsoil. Fluid sand prevented deeper or more extensive work in some trenches.[ix]
Other techniques to provide dating evidence for the hedges, such as carbon dating, are likely to be defeated by the biological process of decay and regeneration. Some investigations were carried out to see if any ‘ancestral’ material could be identified, but without success.
It is only possible to state that it is probable that there were hedges on the English left flank. Another hedge would probably have marked the right flank of the opposing army. In battlefield terms, both of these are significant obstacles especially as they are at the edge of a bank and a water-filled ditch.
Figure 3.16 Ancient hedge recovering: The oldest hedge, identified in the first desktop study, undertaken when a housing development here was first considered, was subsequently removed but some of the more determined species are recovering. This willow is adjacent to one area where extensive metalworking material was found (area 9); it is retaining the soil that has been built up in ‘40 Acre field’, east of the ford. Although most of the species were either grubbed out or cut back, many of them have recovered during the decade of the Fulford investigations.
Willows across the Ings
South of Stone Bridge there is a group of willows that were investigated. Because this area is subject to regular flooding, few plants survive. Consequently it was not possible to make direct comparisons and use Hooper’s Rule. There is a line of mature willows and there were strong indications that the willows were ‘related’ to each other. This relationship could be followed by tracing branches that had become submerged as the surface rose. These have sprouted roots to form a separate tree.
Two willows had been cut and it was possible to count the rings providing an age of 220 years for the older tree.
Figure 3.17 Buried willows: There is a group of willows running parallel to Landing Lane. These appear to have been pollarded. However, as the level of the Ings rose, the head of the tree became submerged and the branches, in time, took root and developed into a new tree. It is possible to trace this process on the modern surface. They have been neglected for many decades. Ken Gill is standing on the old crown of the willow that will soon be buried by the rising mud.
Without any rule to date these trees or models to assess how willow trees develop when the ground is rising, it can only be offered as a suggestion that a number of willow trees existed at the time of the battle. The defenders might have decided to cut the branches from the trees to deny the invaders any cover that the lush foliage of September would have afforded them. Wartime air photos suggest that this tree line was cut back and the line is only evident as a hedge.
The present line of the willows does not follow the current course of Germany Beck across the Ings. However, early OS maps show the Beck alongside the first of these willow trees. It is probable that at some time in the past the Beck conformed to the line of the willows and this line in turn defines the south side of the delta that it is believed marked the path of the Beck across the Ings towards the Ouse.
The place where the water from Germany Beck sliced through the moraine was identified earlier. It is located immediately to the west of the Stone Bridge. The ancient channel passes directly beneath the bridge and almost perpendicular to the line of the bridge. The depth of this cutting was impossible to determine without some specialised drilling equipment. Any suggestion that this is a man-made cutting can be dismissed.
Of relevance to the battle are the ‘walls’ at this place where the water flows through the moraine. It is more like a canyon than a valley. The depth of the channel is greater than 3.8m below the 7m contour so it is well below the surface of 1066 (2.35m below present). The banks at this point would have allowed the two armies to face each other with about 20m between them. But the bank on the north (English) side would have been too steep to allow them to launch an assault at this place. This would provide Morcar with excellent right-flank protection (with Edwin guarding the delta and Ouse bank, which could also be referred to as the right flank but is termed the river-flank within this report).
Our soil survey work, and the Ings level-rise model, was discussed earlier. This work indicates that sometime around 550 BCE, the flooding of the Ings would have begun to reach through the moraine and towards the land to the east, reaching the peat area regularly about a century later.[x] The reason for this change was the annual deposit of alluvial material deposited by the river Ouse which built up the level of the Ings, reducing its capacity as a flood reservoir. But once the level of the Ings reached the boulder clay threshold at the narrowest point in the moraine, near the modern A19 crossing point, Ouse water would have flowed along the Beck whenever the former was in flood.
This area has been scoured and eroded by the Beck as it adjusted its course to pass though the gap. The precise sequence of geological events which shaped the basin to the east of the moraine gap is not obvious. The modern cemetery, classified as sand/gravel, provides the eastern bank of the basin at the ford while it is the moraine material which forms the western flank of the basin. There was a stream between these two which has been buried in a duct since WWII as it is visible on early maps and air photographs. This duct now emerges near the Stone Bridge.
Figure 3.18 Outfalls for buried streams: The old fording area has two outfalls for the streams that used to run below. One emerges near Stone Bridge and the other near Landing Lane. The area has been in-filled to create a playing field for the local community. There are pictures of the outfalls in chapter 8. The underlying shape of the land surface prior to the dumping of building spoil was worked out by drilling a number of boreholes.
It seems likely that beneath the retreating ice sheet, various drumlins and small moraines were sculpted by melt-water channels which left the high ground along the line of the A19 that spanned the York and Escrick moraines. The ford was left as an amphitheatre-shaped feature to the south of the Beck and a smaller, steeper moraine bank to the north.
The location of a ford is suggested by extrapolating five lines
1. The footpath that runs through Water Fulford
2. The line of the beck deduced by modelling the basin east of the gap
3. A perpendicular to Stone Bridge
4. Bisecting the angle of the moraines at the surface
5. The line of the road through modern Fulford
These lines suggest that the ford lies within a 15m circle of uncertainty based on the grid SE 61164871.
Near this point, no grey alluvium could be identified in the borehole that was drilled. But at this point a layer of pebbles and sand was identified above the boulder clay. Sixty metres either side of the putative fording place, a layer of the grey alluvium was identified so this was a muddy ford.
This is one of the key findings from all of the work that was undertaken. At the time of the battle there was a broad, fordable crossing of Germany Beck which is located just to the east of Stone Bridge. The ford had dense boulder clay at its base but the surrounding area was covered by grey alluvial mud which probably supported some limited marshy vegetation similar to the Ings.
The water from the beck and possibly two other channels entering the ford might have been canalised by the locals but it is equally possible that it flowed, shallow and wide, across the base clay. The latter pattern would have made it easy to provide stepping-stones or consolidated base at the crossing and this might be the layer of stone that was identified in one bore-hole.
Other streams joined the Beck flowing from the south so there could have been two or three separate crossings in the basin that formed the fording place but the impression gained from the core samples, and the surrounding land, is of a shallow bowl with hard clay at the base.
This was a broad, shallow ford so that the rising tide would make the ford broader rather than much deeper. This channel provided the eastern boundary of Water Fulford and is still visible in air photos from 1952, by which time both flows hade been canalised.
The playing fields have now covered the old ford. The bore-holes on the field revealed that there is 3.7m of mixed building debris and clay over the original surface. The theodolite survey work suggests that the ford was 4.72m AOD and the water level of a quiescent Ouse is about 2m AOD. With tides rising at least 4m above the low river level, there can be little doubt that the area of the ford would have been wide and wet when battle commenced and this is confirmed by the alluvium detected around the ford.
It is therefore possible that the water at the ford was too deep for an hour before and perhaps three hours after high tide, shortly after 09:00 on the day of the battle, to prevent the armies engaging. But there are too many assumptions about the surface level of the river in 1066 and the hydrodynamic behaviour of flood-water along the Beck to be able to define the depth and extent of the water level at the ford.
[i] Susannah Gill A palaeoenvironmental reconstruction of late Holocene changes at Fulford Ings, York, 2002 Manchester University
[iii] A Hall & H Kenward, Assessments of plant and invertebrate macrofossils from a sequence of peat deposits by Germany Beck’ 2004
[iv] SUERC 2044 & 2043. The dates were measured in 2004
[v] The area for Fulford is listed as 1651 acres. The 1892 1:2,500 edition gives the field acreages and the 1893 1:10560 edition has the arable area as 1665.054 acres. The 1% increase in the area under cultivation in the 40 years between the two survey dates might reflect the steady increase in arable land produced by better drainage or wood clearance.
[vi] Oliver Rackham The History of the Countryside (Weidenfeld & Nicholson) 0297816225
[vii] Some ancient hedge trees such as Spindle and Barberry were believed to harbour the pests that spread diseases such as wheat rust so were grubbed out.
[viii] The desk-top study produced by Hogg The Builder in 1998 assessed the date of the hedges along Germany Beck. At the eastern edge of the site they were assesses to be about 1000 years old. Sadly, much of this has been grubbed out but the good news is that many of the willow trees have survived and are remerging.
[ix] Interim report of archaeological excavations. MAP
[x] The accuracy of the survey work and uncertainly about the model of the alluvial build-up only allows an accuracy of +- 200 years in the estimates.
Related sites Facebook Twitter (@ helpsavefulford) Visiting Fulford Map York
There is a site devoted to saving the battlesite: The site has the story of the process that has allowed the site to be designated an access road to a Green Belt, floodplain housing estate.
And another website for the Fulford Tapestry that tells the story of the September 1066: This tells the story embroidered into the panels.
The author of the content is Chas Jones - firstname.lastname@example.org last updated June 2015 | <urn:uuid:e198da96-0eec-4069-8e58-d51ea8d91851> | CC-MAIN-2021-21 | http://fulfordbattle.com/rep_vegetation.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00096.warc.gz | en | 0.9736 | 4,844 | 3.578125 | 4 |
Page 1: Biography
Ngāti Toa leader
This biography, written by Steven Oliver, was first published in the Dictionary of New Zealand Biography in 1990. It was translated into te reo Māori by the Dictionary of New Zealand Biography team.
Te Rauparaha was the son of Werawera, of Ngāti Toa, and his second wife, Parekōwhatu (Parekōhatu) , of Ngāti Raukawa. He is said to have been a boy when Captain James Cook was in New Zealand. If so, it is likely that he was born in the 1760s. He was born either at Kāwhia or at his mother's home, Maungatautari. He was descended from Hoturoa of the Tainui canoe; both his parents were descended from the founding ancestors of their tribes. Although not of the highest rank, he rose to the leadership of Ngāti Toa because of his aggressive defence of his tribe's interests and his skill in battle. He was short in stature but of great muscular strength. In profile, he had aquiline features; when excited his eyes would gleam and his lower lip would curl downwards.
His name is derived from an edible plant called rauparaha. Soon after he was born a Waikato warrior who had killed and eaten a relation of his threatened to eat the child as well, roasted with rauparaha leaves; the child was called Te Rauparaha in defiance of this threat. The other name by which he was known during his childhood was Māui Pōtiki, because he, like Māui Pōtiki, was lively and mischievous. Much of his childhood was spent with his mother's people at Maungatautari, but he may have been instructed at the whare wānanga at Kāwhia.
From the late eighteenth century Ngāti Toa and related tribes, including Ngāti Raukawa, were constantly at war with the Waikato tribes for control of the rich fertile land north of Kāwhia. The wars intensified whenever a major chief was killed or insults and slights suffered. Te Rauparaha was involved in many of these incidents as tensions mounted. He led a war party into disputed territory north of Kāwhia and the Waikato chief Te Uira was killed. On another occasion he led a war party by canoe to Whāingaroa (Raglan Harbour) to avenge the killing of a group of Ngāti Toa; his nieces had been among the victims. Young warriors gathered around him as he was an aggressive war leader.
As warfare intensified Ngāti Toa killed Te Aho-o-te-rangi, a Waikato chief, who had led an attack on Kāwhia. Te Rau-anga-anga, Te Aho-o-te-rangi's grandson and father of Te Wherowhero, led a large war party to avenge his killing. Ngāti Toa were driven back to the pā of Te Tōtara, at the southern end of Kāwhia Harbour, where peace was made, but it was broken when Te Rauparaha led a fishing party into grounds claimed by Ngāti Maniapoto. Waikato came to the assistance of Ngāti Maniapoto and took the pā of Hikuparoa after a feigned retreat. Te Rauparaha escaped to Te Tōtara pā and after much fighting peace was restored.
Te Rauparaha left Kāwhia after this episode but on his return joined a war party seeking revenge for the death of a prominent Ngāti Toa warrior, Tarapeke, in a duel outside Te Tōtara. Under Te Rauparaha's command they went north and killed Te Wharengori of Ngāti Pou. Waikato again invaded Kāwhia, and after defeat in several battles Ngāti Toa retreated to Ōhāua-te-rangi pa. However, there were relations of Waitohi, Te Rauparaha's sister, among the attackers, and through them she negotiated a peaceful settlement.
During times of peace Te Rauparaha travelled widely to visit tribes friendly to Ngāti Toa. He was at Maungatautari when Ngāti Raukawa chief Hape-ki-tūārangi died and he became his successor by responding to the chief's dying query, 'Who will take my place?' None of Hape's sons or relatives responded. Te Rauparaha later took Hape's widow, Te Ākau, as his fifth wife. He had previously married Marore, Kāhuirangi, Rangitāmoana (the sister of Marore), and Hopenui. Between 1810 and 1815 Te Rauparaha was with Ngāti Maru in the Hauraki Gulf and was given his first musket. He also visited Ngāti Whātua at Kaipara, where he was probably trying to build a coalition to attack Waikato. It is possible, too, that he was looking for a place where his tribe could be resettled.
Ngāti Toa had long-standing alliances with the tribes of northern Taranaki, the southern neighbours of Ngāti Maniapoto. In 1816 the marriage festivities of Nohorua, Te Rauparaha's older half-brother, and a woman of Ngāti Rāhiri, turned to disaster when the canoes of Ngāti Rāhiri carrying a return feast overturned. In fury Ngāti Rāhiri attacked Ngāti Toa. Two Ngāti Whātua chiefs, Murupaenga and Tūwhare, from north of present day Auckland, joined Ngāti Toa's retaliatory raid into Taranaki about 1818. However, Ngāti Rāhiri were old allies and peace was made at Te Taniwha pā. As part of the peacemaking, muskets were fired for the first time in Taranaki. Later, Te Rauparaha joined Te Pūoho-o-te-rangi of Ngāti Tama in attacks on other Taranaki tribes, before returning to Kāwhia.
In 1819 Te Rauparaha joined a large northern war party, armed with muskets, led by Tūwhare, Patuone and Nene. This expedition passed through the lands of Te Āti Awa, Ngāti Toa's allies, and attacked Ngāti Maruwharanui of central Taranaki. Te Kerikeringa and other pā fell to them; warriors who had never encountered guns before became demoralised. In this manner the expedition continued south to Cook Strait. Ngāti Ira successfully held a pā at Pukerua with traditional weapons but were deceived by a false offer of peace, it is said from Te Rauparaha. On its return the expedition fought with Ngāti Apa in Rangitīkei. Te Rangihaeata, Te Rauparaha's nephew, captured Te Pikinga of Ngāti Apa and made her his wife. On reaching Kāwhia Ngāpuhi gave muskets to Ngāti Toa and continued on their way north.
Te Rauparaha probably also took part in the expedition of 1819–20 to find a new home for his people. Their position at Kāwhia was becoming untenable as war with the Waikato tribes intensified. While at Cook Strait Te Rauparaha had seen a sailing ship passing through the strait, probably one of the Russian ships of the Bellingshausen expedition. A northern chief told him that there were good people on the ships, and that if he moved south he could become great by trading for guns with the ships now coming to Cook Strait.
About this time Te Rauparaha's wife Marore was killed in Waikato while attending a funeral. In revenge he and her relations killed a Waikato chief on a pathway where travellers had safe conduct. In 1820 several thousand Waikato and Ngāti Maniapoto warriors invaded Kāwhia. Ngāti Toa was defeated at Te Kakara, near Lake Taharoa, and Waikawau pā, south of Tirau Point, was captured. Te Rauparaha withdrew to Te Arawī pā, near Kāwhia Harbour, which was besieged. Among the besiegers were relations of Ngāti Toa who did not wish to see the tribe exterminated. Ngāti Maniapoto leader Te Rangituataka secretly supplied food to the pā and advised Te Rauparaha to take refuge with Te Āti Awa in Taranaki. Te Rauparaha had considered fleeing east to his Ngāti Raukawa relations, but the way was blocked by hostile forces. Because many were closely related to Waikato tribes they were allowed to leave Kāwhia and begin the first section of their migration to the south, known as Te Heke Tahu-tahu-ahi.
Te Rauparaha burned his carved house and recited a lament for Kāwhia. Ngāti Toa went a few miles south to Pukeroa pā, where the people were related to both Ngāti Toa and Ngāti Maniapoto. Most women and children and the injured were left there while the warriors went further south and crossed the Mōkau River into the territory of their Ngāti Tama allies. Te Rauparaha went back to Pukeroa with 20 warriors armed with muskets to bring out those left behind. He knew that Ngāti Maniapoto had come in pursuit so he dressed his people in red cloth and spread a rumour that a Ngāpuhi war party, wearing red, was in the area. Ngāti Maniapoto then kept away from the refugees. At night, while waiting to cross the Mōkau River, Te Rauparaha addressed imaginary groups of warriors, lit many fires and spread cloaks over bushes, to give the impression of a large army. Reunited south of the Mōkau, about 1,500 Ngāti Toa went to Te Kaweka in Taranaki and began cultivating land Te Āti Awa allowed them to use. A Waikato force, led by Te Wherowhero, came south but was defeated at the battle of Motunui in late 1821 or early 1822. It is said that after this battle Te Rauparaha in his turn helped Waikato by warning them not to retreat north, where a Ngāti Tama force was waiting. This victory freed Ngāti Toa from the threat of pursuit.
Te Rauparaha left Ngāti Toa in Taranaki and returned north to Maungatautari, to try to persuade Ngāti Raukawa to join his migration because he needed more fighting men. But Ngāti Raukawa had other ambitions in Heretaunga (Hawke's Bay). He then went on to Rotorua and encouraged Te Arawa to attack a Ngāpuhi war party, to avenge the killing by Ngāpuhi of his Ngāti Maru relations. Some Tūhourangi had joined the attack on Ngāpuhi, and followed Te Rauparaha back to Taranaki.
By 1822 the section of the migration of Ngāti Toa known as Te Heke Tātaramoa, which was to bring them to Kāpiti Island, was under way. Joined by some Te Āti Awa, the migration travelled 250 miles through enemy land which Te Rauparaha had raided several years before. The migration was initially peaceful because Te Rauparaha had made peace and marriage alliances with some tribes. Others retreated from his path, having learned to fear a war party armed with muskets, and distrusting his intentions. The Whanganui tribes withdrew upriver, and in Rangitīkei Ngāti Apa were at first friendly; they were related to Ngāti Toa by the marriage of Te Pikinga to Te Rangihaeata.
Trouble began when the migration reached the Manawatū River. Canoes were stolen when Nohorua led a foraging expedition. In revenge Ngāti Toa attacked a Rangitāne settlement and killed several people. The tribes of Manawatū and Horowhenua began to resist. Toheriri of Muaūpoko invited Te Rauparaha and his family to a feast near Lake Papaitonga; when night fell Muaūpoko began killing them. Te Rauparaha escaped but his son Te Rangihoungāriri and daughter Te Uira, and at least one other of his children, were killed. He vowed to kill Muaūpoko from dawn until dusk. The lake pā of Muaūpoko were taken and they were massacred without mercy.
While Te Rauparaha was attacking the tribes of Horowhenua, Te Pēhi Kupe, the senior chief of Ngāti Toa, surprised Muaūpoko on Kāpiti and captured the island. As Ngāti Toa were threatened by both Ngāti Kahungunu and Ngāti Apā, they moved to Kāpiti for security. Fighting continued on the mainland. Rangitāne were slaughtered at Hotuiti, after a false offer of peace had disarmed them. A great canoe fleet of southern tribes assembled about 1824, with contingents from Taranaki to Te Whanganui-a-Tara (Wellington Harbour) in the North Island and from the South Island. A night attack made on Kāpiti at Waiorua was defeated. This victory established Ngāti Toa securely in the south of the North Island. Allies from Taranaki and from Ngāti Raukawa joined Te Rauparaha in numerous migrations over the next decade and were found land in the conquered territories.
Whalers and other European ships had been trading at Kāpiti since 1827. Te Rauparaha's power over his allied tribes rested on his control of the trade in arms and ammunition. Captives were taken to Kāpiti to scrape flax to be traded for muskets, powder and tobacco. He also wanted to control the supply of greenstone, and the South Island, where greenstone was to be found, was open to conquest as the tribes there had not yet acquired guns. Some of their chiefs had insulted him and some had fought against Ngāti Toa at Waiorua. About 1827 Te Rauparaha took a war party across Cook Strait to Wairau, where several Rangitāne pā were taken. A year or so later a larger invasion fleet left Kāpiti. Te Āti Awa attacked the territory around Te Ara-a-Paoa (Queen Charlotte Sound), while Te Rauparaha, with 340 warriors mostly armed with guns, entered Te Hoiere (Pelorus Sound) and heavily defeated Ngāti Kuia at Hikapu. At Kaikōura many Ngāi Tahu were taken by surprise and killed or enslaved.
Te Rauparaha led part of the war party to the Ngāi Tahu stronghold, Kaiapoi pa. Te Pēhi Kupe and seven other Ngāti Toa chiefs entered the pā to trade for greenstone. The people at Kaiapoi knew of the attack on their relations at Kaikōura and the Ngāti Toa chiefs were killed and eaten. Ngāti Toa then unsuccessfully attacked the pā, although killing about 100 Ngāi Tahu prisoners. Te Rauparaha returned to Kāpiti. In 1830 the attack on Ngāi Tahu was resumed. Captain John Stewart took about 100 Ngāti Toa warriors to Akaroa, hidden in the brig Elizabeth. He lured Ngāi Tahu chief Tama-i-hara-nui aboard by offering to trade for muskets. Tama-i-hara-nui was taken, together with his wife and daughter, tortured and put to death at Kāpiti. On the ship, he strangled his daughter to prevent her from being enslaved.
Te Rauparaha went to Sydney in 1830 where he met Samuel Marsden, the chaplain of New South Wales. The ship that returned him to Kāpiti is said to have taken him and his warriors to Rangitoto (D'Urville Island), where they captured Ngāti Kuia refugees, and to have transported them to Kāpiti. In 1831 Te Rauparaha again besieged Kaiapoi pā and captured the pā by sapping and by firing the palisades. He returned to Akaroa and took the pā Ōnawe, and then returned to Kāpiti, leaving his allies and some of his own people to rule over the enslaved tribes. Meanwhile the migrant tribes in the south-west of the North Island, none of which accepted Te Rauparaha's authority, were competing with each other and with the original inhabitants for land and resources. Fighting broke out between Ngāti Raukawa and Te Āti Awa in 1834; this threatened Te Rauparaha's leadership, as he was allied to Ngāti Raukawa. Other Ngāti Toa, led by Te Hiko-o-te-rangi, the son of Te Pēhi Kupe, supported Te Āti Awa and besieged Te Rauparaha at the Rangiuru Stream. He had to appeal to the Ngāti Tūwharetoa leader Mananui Te Heuheu Tūkino II for help. When peace was made Te Rauparaha at first intended to return to the north with Mananui. But he was persuaded to stay by Te Rangihaeata and went back to Kāpiti. By the mid 1830s Te Rauparaha and his allies had conquered the south-west of the North Island and most of the northern half of the South Island.
He now wanted to extend his conquest to the rest of the South Island; however, Ngāi Tahu had obtained guns from the whalers in Otago and were able to resist him. About 1833 he had been nearly captured by Ngāi Tahu from Otago, at Kaparatehau (Lake Grassmere). Inconclusive battles were fought at Ōraumoaiti and Ōraumoanui. Te Rauparaha was unable to prevent Ngāi Tahu attacks on whaling stations under his patronage and when they sent a war party to the Cook Strait area in the late 1830s he did not confront it.
After Te Rauparaha's sister, Waitohi, the mother of Te Rangihaeata, died in 1839 war broke out among the tribes allied to Te Rauparaha. A huge funeral gathering was held. A Rangitāne slave of Te Āti Awa, who had brought tribute from the South Island, was killed and eaten, against Te Āti Awa's wishes. Quarrelling at the feast led to renewed fighting between Te Āti Awa and Ngāti Raukawa, culminating in the battle of Te Kūititanga at Waikanae. Te Rauparaha crossed over from Kāpiti to assist Ngāti Raukawa, but had to escape in a whaling boat when they suffered a severe defeat. After the battle there was no looting of the dead or cannibalism, as Christian influences had been brought to Te Āti Awa by freed slaves returning from the Bay of Islands. Ngāti Raukawa dead were buried with their clothing and arms and ammunition.
Later in the same day as the battle of Te Kūititanga, 16 October 1839, the New Zealand Company ship Tory arrived at Kāpiti. Colonel William Wakefield wanted to buy vast tracts of land. Negotiations took place and Te Rauparaha accepted guns, blankets and other goods for the sale of land, the extent of which later became a matter of dispute. He insisted that he had only sold Whakatū and Te Taitapu, in the Nelson and Golden Bay areas. All land sales were declared void by Lieutenant Governor William Hobson after his arrival in 1840, and a commission was set up to investigate land claims. On 14 May 1840 Te Rauparaha signed a copy of the Treaty of Waitangi presented to him by CMS missionary Henry Williams. He believed that the treaty would guarantee him and his allies the possession of territories gained by conquest over the previous 18 years. He signed another copy of the treaty on 19 June, when Major Thomas Bunbury insisted that he do so.
Te Rauparaha resisted European settlement in those areas he claimed he had not sold. Disputes occurred over Porirua and the Hutt Valley. But the major clash came in 1843 when Te Rauparaha and Te Rangihaeata prevented the survey of the Wairau plains. Arthur Wakefield led a party of armed settlers from Nelson to try to arrest Te Rauparaha. Fighting broke out in which Te Rongo, the wife of Te Rangihaeata, was killed. After the settlers had surrendered, Te Rangihaeata killed them to avenge his wife's death.
In the crisis that followed Te Rauparaha stayed on the defensive. There was a reluctance for war among those influenced by the missionary Octavius Hadfield at Ōtaki. Te Rauparaha had much to lose if he attacked the European settlements. Settlers believed that he intended war and that he had sent for a Whanganui war party to attack Wellington, as Te Āti Awa of Waikanae had refused to do so. The crisis was ended on 12 February 1844 when Governor Robert FitzRoy declared at Waikanae that the settlers had provoked the fighting at Wairau and that although he deplored the killing of the prisoners no further action would be taken. During this crisis Te Rauparaha, by avoiding war with the settlers, contributed greatly to its peaceful resolution.
On 16 May 1846 Te Mamaku, of Whanganui, who had joined Te Rangihaeata in resisting settlement, led an attack on the troops stationed at Almon Boulcott's farm in the Hutt Valley. There were again rumours of an imminent assault on Wellington. The new governor, George Grey, decided that Te Rauparaha could not be trusted and must be arrested. He visited him at his Taupō pā, near Porirua, and then left on the naval vessel Driver. Two hours before dawn the ship returned and British troops took Te Rauparaha on board. He was held without charge on another naval vessel, the Calliope, for 10 months and then allowed to live in Auckland. On his petition to the governor he was returned to his people at Ōtaki in 1848. He was accompanied on the return voyage by George and Eliza Grey, and by numerous Maori, including Pōtatau Te Wherowhero.
Te Rauparaha lived at Ōtaki for the brief remainder of his life, although he visited Wairau. By the end of his life his influence appears to have declined, possibly because of the humiliation of his imprisonment. His wives in the last part of his life were Pipikūtia, Kahukino and Kahutaiki. He had had 8 wives in the course of his life, and 14 children, some of whom survived him. He did not adopt Christianity, although he attended church services. Te Rauparaha died on 27 November 1849 and was buried near the church, Rangiātea, in Ōtaki. He is believed to have been reinterred on Kāpiti.
Te Rauparaha was a great tribal leader. He took his tribe from defeat at Kāwhia to the conquest of new territories in central New Zealand. As a war leader he enjoyed great success. The tribes he defeated attribute his success to Ngāti Toa's possession of muskets rather than to Te Rauparaha's military genius. Without his leadership, however, it is doubtful if Ngāti Toa would have attempted the great migration and seized the opportunities open to them. Having done so, they changed the tribal structure of New Zealand for ever. | <urn:uuid:b9cb8992-b750-4e4f-9c89-6509944255e9> | CC-MAIN-2021-21 | https://admin.teara.govt.nz/en/biographies/1t74/te-rauparaha | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00615.warc.gz | en | 0.986877 | 5,277 | 2.890625 | 3 |
The Foot and Ankle Online Journal 7 (3): 6
In recent years, barefoot running and running in minimalist footwear, as opposed to running in traditional running shoes, has increased in popularity. The influence of such footwear choices on center of pressure (COP) displacement and velocity variables linked to injuries is yet to be understood. The aim of this study was to investigate differences in COP variables linked to injuries between barefoot running, running in a minimalist shoe, and running in traditional running shoes in a healthy female population. Seventeen healthy female participants were examined. Participants performed five footfalls in each footwear condition while running at 12km/h±10% over a pressure plate while COP variables were recorded at 500Hz. The results suggest that the COP characteristics of the minimalist running shoe were similar to those of barefoot running, with various significant differences reported in both conditions compared to the traditional running shoe.
Keywords: footwear, barefoot, running, COP, center of pressure, plantar pressure
Following the introduction of running-specific footwear, barefoot (BF) running, as opposed to running in traditional running shoes (TRS) with elevated cushioned heels, has in recent years increased in popularity among participants and coaches. When running barefoot on roads or pathways the plantar region of the foot may be exposed to cuts and general discomfort from debris and uneven surfaces. Running in minimalist footwear therefore appears desirable, as it may allow the changes in running kinetics and kinematics observed in barefoot running compared to shod running, while protecting the plantar region of the feet from injury and discomfort.
This has led to a rise in the popularity of barefoot-inspired footwear amongst running populations and subsequent research. Running barefoot does not appear to restrict athletes from competing at an elite level, with competitors winning Olympic medals in such conditions. In terms of energy cost to the runner, running barefoot appears to reduce the angular inertia of the lower extremities. Research suggests minimalist shoes may also decrease oxygen consumption during running [3,4]. However, recent research suggests there is no reduction of metabolic cost when running barefoot compared to running in lightweight running shoes.
Some research suggests that wearing traditional running shoes may restrict the freedom of movement and flexibility that can be achieved in barefoot running. Furthermore, running barefoot compared to shod has been identified as causing adaptations in running mechanics, resulting in a more midfoot footfall in contrast to the heel-striking movement strategy favored while running in traditional running shoes [2,7]. Research also suggests that such adaptations occur instantaneously, with only minor changes in lower extremity kinematics observed in the reported knee angle after two weeks of training in minimalist footwear. Such adaptations observed in barefoot running have been proposed as a mechanism by which the potentially detrimental loading imposed upon the musculoskeletal system during running may be attenuated [9–11]. Conflicting research has, however, reported increases in loading of the musculoskeletal system in barefoot running compared to shod in participants who habitually wore shoes [12,13]. Furthermore, foot injuries, including stress fractures most prominently in the metatarsals, have been reported in minimalist shoe runners. Currently there appears to be a lack of evidence confirming the influence of barefoot running on movement strategy and injury rates [15,16].
Research identifying the influence of footwear conditions should initially focus on the areas of greatest injury risk within the musculoskeletal system, which research suggests is ankle ligament damage. The ankle joint is unique in that the vast majority of injuries sustained across different populations are of one type: ligament sprains [17–21]. It is worth noting that such injury rates are higher in females than in males.
The reason for the higher occurrence of ankle sprains while running can only be hypothesized. Research has suggested that during running the ankle is often placed in a compromised supinated position when the athlete’s center of gravity (COG) is positioned over the lateral border of the weight bearing limb [24,25]. It has been identified that the functionally unstable ankle may be the result of proprioceptive neuromuscular deficits arising from structural damage following an injury [26–29].
Various kinetic and kinematic variables have been investigated to compare differences between barefoot and shod conditions; however, there is a paucity of research investigating differences in center of pressure (COP) variables between the conditions. Plantar COP velocities and displacements measured during running have been identified as indicators of exercise-induced lower leg injuries [30,31]. As such, characteristics of the COP have been identified as suitable reference points for studying the dynamics of the rearfoot and foot function [31,32] and for identifying differences between footwear conditions. Studies analyzing the gait of individuals with functionally unstable ankles have identified a tendency for a laterally situated COP on initial foot contact, with a greater pressure concentration at the lateral aspect of the heel [26,30]. If the COP is focused on the lateral side of the calcaneus during heel strike, it is possible that the additional force required to place the individual into a compromised position may be minimal. As a result, by examining the location of the COP upon initial contact it may be possible to identify running conditions that could reduce the likelihood of sustaining a lateral ankle sprain by avoiding the COP displacements seen in unstable ankles.
A commercially available design of minimalist footwear (huaraches (HU)) has been developed (Figure 1), with minimal cushioning (4mm tread) and string uppers designed to restrict natural foot movement as little as possible. By comparing COP variables in participants running barefoot and wearing the HU footwear it may be possible to observe the different foot mechanics of each. Therefore the aim of this study was to investigate the differences in COP variables, many of which are linked to ankle ligament injuries, measured in barefoot, huaraches, and traditional footwear runners (Figure 1). The differences in kinetics and kinematics measured between genders [19,34–37] demonstrate a need for studies investigating the kinetics of locomotion to consider each gender separately; as such, this research focuses on conditions during running in a healthy female population.
Selection and Description of Participants
Seventeen healthy female participants were examined (age 21.2±2.3 years, height 165.4±5.6 cm, body mass 66.9±9.5 kg, foot size 6.8±1.0 UK). All participants were free from musculoskeletal pathology and provided written informed consent in accordance with the Declaration of Helsinki.
Figure 1 HU footwear (above) and TRS (below).
Participants were given time to practice running in the minimalist footwear until they felt comfortable; no prior training was undertaken. Participants performed five footfalls in each footwear condition at a controlled speed of 12 km/h ± 10% over a Footscan pressure plate (RSscan International, 1 m × 0.4 m, 8192 sensors) collecting COP data at 250 Hz, positioned in the center of a 28.5 m runway (Figure 1). Participants practiced running along the runway and adjusted their starting position to achieve a natural footstrike on the pressure mat, minimizing any influence of targeting. They were also instructed to look at a point on the far wall and not to slow down until passing the second timing gate.
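The speed-control criterion above (12 km/h ± 10% between two timing gates) amounts to a simple acceptance test per trial. The sketch below illustrates the arithmetic; the gate separation used here (5 m) is a hypothetical value for illustration only, as the paper does not state it.

```python
# Sketch: accepting or rejecting a trial based on timing-gate speed.
# The 12 km/h ± 10% band is from the paper; the 5 m gate separation
# in the example is an assumption made purely for illustration.

TARGET_KMH = 12.0
TOLERANCE = 0.10  # ±10% of target speed

def trial_speed_kmh(gate_separation_m: float, elapsed_s: float) -> float:
    """Average speed between the two timing gates, converted to km/h."""
    return gate_separation_m / elapsed_s * 3.6

def speed_ok(speed_kmh: float) -> bool:
    """True if the trial falls inside the accepted speed band."""
    return abs(speed_kmh - TARGET_KMH) <= TARGET_KMH * TOLERANCE

# Example: runner covers 5 m between gates in 1.5 s -> exactly 12 km/h
speed = trial_speed_kmh(5.0, 1.5)
accepted = speed_ok(speed)
```

A trial at 13.3 km/h would fall outside the ±1.2 km/h band and be repeated.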
Various event times during foot-to-ground contact were identified (Figure 2): initial metatarsal contact (IMC), initial forefoot flat contact (IFFC, the first instant at which all the metatarsal heads are in ground contact) and heel off (HO). Anterior-posterior and medial-lateral COP displacement and velocity data were calculated at these time points [30,39]. COP displacement and velocity values were normalized to a percentage of foot width and length as appropriate, using the same methods as previous research [30,39]. This method of collecting COP progression data both in direct foot contact and under the shoe has been confirmed as reasonable [40,41].
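The event-based quantities described above reduce to reading the COP trace at a given frame and scaling by a foot dimension. The sketch below shows one plausible implementation, assuming the 250 Hz sampling rate stated in the methods; the central-difference velocity estimate and the toy COP trace are assumptions for illustration, not the cited studies' exact scheme.

```python
import numpy as np

FS = 250.0  # pressure-plate sampling rate in Hz (from the methods)

def cop_metrics_at_event(cop_mm: np.ndarray, frame: int, foot_dim_mm: float):
    """Displacement (% of foot dimension, relative to initial contact) and
    velocity (% of foot dimension per second) of one COP component at an
    event frame (e.g. IMC, IFFC, HO). The central finite difference used
    for velocity is an assumption of this sketch."""
    disp_pct = (cop_mm[frame] - cop_mm[0]) / foot_dim_mm * 100.0
    vel_mm_s = (cop_mm[frame + 1] - cop_mm[frame - 1]) * FS / 2.0
    vel_pct_s = vel_mm_s / foot_dim_mm * 100.0
    return disp_pct, vel_pct_s

# Toy medial-lateral COP trace (mm) for a foot 90 mm wide; frame 3 stands
# in for an event such as IFFC.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.0, 5.5])
disp, vel = cop_metrics_at_event(x, frame=3, foot_dim_mm=90.0)
```

Normalizing medial-lateral values to foot width and anterior-posterior values to foot length makes the variables comparable across participants with different foot sizes.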
Descriptive statistics (means and standard deviations) were calculated for each COP variable in each condition. One-way repeated-measures ANOVAs were used to determine differences between footwear conditions, with significance accepted at the p<0.05 level. The Shapiro-Wilk statistic for each condition confirmed that the data were normally distributed; where the sphericity assumption was not met, a Greenhouse-Geisser correction was applied. Effect sizes were calculated using eta-squared (η2). Post-hoc analyses were conducted using a Bonferroni correction to control type I error (Table 1). All statistical procedures were conducted in SPSS 19.0 (SPSS Inc., Chicago, IL, USA).
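The analysis above was run in SPSS, but the core computation is straightforward. The sketch below implements a one-way repeated-measures ANOVA with partial eta-squared from sums of squares; it is a minimal illustration with toy data (3 subjects × 3 conditions), and it omits the sphericity check and Greenhouse-Geisser correction that the paper applied.

```python
import numpy as np
from scipy import stats

def rm_anova(data: np.ndarray):
    """One-way repeated-measures ANOVA on an (n_subjects, k_conditions)
    array. Returns F, (df1, df2), p and partial eta-squared. Sphericity
    checks and epsilon corrections are deliberately omitted here."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df1) / (ss_err / df2)
    p = stats.f.sf(F, df1, df2)
    eta2 = ss_cond / (ss_cond + ss_err)  # partial eta-squared
    return F, (df1, df2), p, eta2

# Toy data: rows are subjects, columns are footwear conditions (BF, HU, TRS)
toy = np.array([[1.0, 2.0, 3.0],
                [2.0, 3.0, 5.0],
                [3.0, 4.0, 4.0]])
F, dfs, p, eta2 = rm_anova(toy)
```

Normality per condition can be checked beforehand with `scipy.stats.shapiro`, mirroring the Shapiro-Wilk step reported in the paper.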
The COP data were examined for each trial, and key points in time during the stance phase were identified (Figure 2).
Figure 2 Typical BF plantar pressure.
The means were calculated for the COP timing (Table 1), COP medial-lateral (Table 2) and COP anterior-posterior (Table 3) variables.
Analysis of the timing variables between the footwear conditions (Table 1) indicated a significant main effect for the timing of IMC (F(1.41, 22.55) = 57.29, p<0.0005, η2=0.782) and IFFC (F(2, 32) = 43.69, p<0.001, η2=0.732); no significant effect was found for HO (F(1.30, 20.87) = 2.56, p=0.118, η2=0.138). Post-hoc analysis revealed significant differences (p<0.001) between the TRS and both the BF and HU conditions for the timing of IMC. This was also the case for the IFFC event timing, which additionally showed a significant difference (p=0.04) between the BF and HU conditions.
Table 1 Means and standard deviations of center of pressure timing variables. †=significantly different (p<0.05) from BF, ¥=significantly different (p<0.05) from HU, *=significantly different (p<0.05) from TRS.
Medial Lateral COP Variables
Analysis of the movement of the COP in the medial-lateral plane of the foot between footwear conditions (Table 2) identified significant main effects for the medial-lateral position of the COP (X-comp) at IMC (F(1.454, 23.268) = 5.87, p=0.014, η2=0.269), IFFC (F(2, 32) = 18.9, p<0.001, η2=0.542) and HO (F(2, 32) = 15.6, p<0.001, η2=0.494). No significant main effect was identified for IFC X-comp (F(2, 32) = 3.161, p=0.056, η2=0.165). Post-hoc analysis revealed significant differences for IMC X-comp (p=0.025), IFFC X-comp (p=0.001) and HO X-comp (p=0.003) between the BF and TRS conditions, and significant differences for IFFC X-comp (p<0.001) and HO X-comp (p<0.001) between the HU and TRS conditions.
A significant main effect for the medial-lateral velocity of the COP (VEL-X) was identified at HO (F(2, 32) = 32.6, p<0.001, η2=0.671). Post-hoc analysis revealed significant differences for HO VEL-X between BF and TRS (p<0.001) and between HU and TRS (p<0.001). No significant main effect was identified for IMC VEL-X (F(1.46, 23.31) = 1.314, p=0.279, η2=0.076) or IFFC VEL-X (F(1.33, 21.24) = 2.073, p=0.161, η2=0.115).
Table 2 Means and standard deviations of medial-lateral center of pressure variables. †=significantly different (p<0.05) from BF, ¥=significantly different (p<0.05) from HU, *=significantly different (p<0.05) from TRS, FW%=percentage of foot width.
Anterior Posterior COP Variables
Analysis of the movement of the COP in the anterior-posterior plane of the foot between footwear conditions (Table 3) identified significant main effects for the anterior-posterior position of the COP (Y-comp) at IFC (F(2, 32) = 5.04, p=0.013, η2=0.239) and HO (F(1.09, 17.39) = 30.71, p<0.001, η2=0.657). No significant main effect was identified for IMC Y-comp (F(1.42, 22.66) = 3.28, p=0.07, η2=0.170) or IFFC Y-comp (F(1.22, 19.58) = 0.88, p=0.38, η2=0.052). Post-hoc analysis revealed significant differences for HO Y-comp (p<0.001) and IFC Y-comp (p=0.025) between BF and TRS; a significant difference was also identified for HO Y-comp between the HU and TRS conditions (p<0.001).
Significant main effects for the anterior-posterior velocity of the COP (VEL-Y) were identified at IMC (F(1.41, 22.58) = 13.60, p<0.0005, η2=0.460) and HO (F(1.17, 18.77) = 13.26, p=0.001, η2=0.453). No significant main effect was identified for IFFC VEL-Y (F(1.21, 19.33) = 1.710, p=0.209, η2=0.097). Post-hoc analysis revealed significant differences between BF and TRS for IMC VEL-Y (p=0.005) and HO VEL-Y (p=0.001); significant differences were also identified between HU and TRS for IMC VEL-Y (p=0.002) and HO VEL-Y (p=0.011).
Table 3 Means and standard deviations of anterior-posterior center of pressure variables. †=significantly different (p<0.05) from BF, ¥=significantly different (p<0.05) from HU, *=significantly different (p<0.05) from TRS, FL%=percentage of foot length.
The purpose of the current investigation was to compare the COP variables of a healthy female population running in BF, HU and TRS conditions. The first aim was to identify any differences between the shod and BF conditions, in order to establish whether running in such footwear produces kinetics similar to those found in BF running. The second aim was to determine whether there were any significant differences between footwear conditions in the COP variables implicated in the etiology of injury.
The significant differences in the IMC and IFFC time parameters (p<0.05) in the TRS compared to the BF and HU conditions suggest a more plantarflexed foot placement at ground contact in BF and HU. This has been reported previously in analyses comparing BF to shod [2,12] and minimalist footwear to shod conditions, and suggests that HU rather than TRS would be the favored footwear to reduce the incidence of injury in runners [10–12]. Running often takes place on uneven terrain, and as the calcaneus lands, its shape lends itself to movement in the coronal plane. Furthermore, it has been identified that patients with ankle instability have a longer duration of contact from initial heel contact to forefoot landing. Therefore, the quicker loading of the forefoot observed in the BF and HU conditions may offer greater support and potentially limit hazardous injury.
During locomotion, as the foot makes contact with the ground, the line of the resulting reaction force is determined by the position of the foot in relation to the athlete's center of gravity (COG). Previous research reported that an increased angle of supination at touchdown was accompanied by an apparent increase in the number of ankle sprains. As the TRS condition in the current study exhibited a trend towards a more laterally displaced COP, this may imply that initial foot contact was made while the foot was held in slight supination, similar to individuals suffering from ankle instability, which may increase susceptibility to injury.
Previous research identified that an ankle sprain group exhibited higher loading under the medial border of the foot, and this was identified as an indicator of susceptibility to ankle sprain. The significant difference between the shod condition and both the BF and HU conditions for the IFFC X-comp variable indicated a more medially loaded foot. This may also be a predisposing factor for an inversion ankle sprain.
It appears that the HU shoe minimizes the changes in COP characteristics seen in TRS relative to BF running, with only one variable (IFFC time) showing a significant (p<0.05) difference between HU and BF. Furthermore, this particular minimalist design may simulate BF running more closely than some other footwear designed for that purpose. These results suggest that the proposed health benefits associated with BF running may also be present in the HU footwear condition.
The data collected in this study provide evidence that the HU design of footwear may be a suitable alternative to running BF for females, offering protection to the plantar surface of the foot while adjusting the running strategy, as identified through COP variables, in a manner similar to BF running when compared to running in TRS. From a rehabilitation point of view, it may be advantageous to initiate a return to running using minimalist footwear, as this appears to have the potential to reduce the excessive COP characteristics linked to ankle inversion injury compared with shoes. However, the potential injury-risk-reduction benefits of BF running are yet to be conclusively substantiated, and any change in habitual running style through footwear choice should be approached with caution.
This study focused on a population of healthy females. Previous research has demonstrated differences between genders both biomechanically and in injury rates [19,44]; as such, the results cannot be generalized to a male sample, and there is a clear need to perform a similar examination using a male population. Previous research has suggested that the thickness of cushioning in running shoes may not have a significant effect on loading characteristics during foot-to-ground impact. The HU design of shoe is commercially available in different sole thicknesses; whether sole thickness alters the effects observed here warrants further investigation, working towards an optimal design for the general population.
- Nigg B, Enders H. Barefoot running – some critical considerations. Footwear Sci. 2013;35(1):1–7.
- Sinclair J, Greenhalgh A, Brooks D, Edmundson CJ, Hobbs SJ. The influence of barefoot and barefoot-inspired footwear on the kinetics and kinematics of running in comparison to conventional running shoes. Footwear Sci. 2013;5:45–53.
- Squadrone R, Gallozzi C. Biomechanical and physiological comparison of barefoot and two shod conditions in experienced barefoot runners. J Sports Med Phys Fitness. 2009;49(1):6–13.
- Perl DP, Daoud AI, Lieberman DE. Effects of footwear and strike type on running economy. Med Sci Sports Exerc. 2012;44(7).
- Franz JR, Wierzbinski CM, Kram R. Metabolic cost of running barefoot versus shod: is lighter better? Med Sci Sports Exerc. 2012;44:1519–25.
- Morio C, Lake MJ, Gueguen N, Rao G, Baly L. The influence of footwear on foot motion during walking and running. J Biomech. 2009;42(13):2081–8.
- Hamill J, Russell EM, Gruber AH, Miller R. Impact characteristics in shod and barefoot running. Footwear Sci. 2011;3(1):33–40.
- Willson JD, Bjorhus JS, Williams DSB, Butler RJ, Porcari JP, Kernozek TW. Short-term changes in running mechanics and foot strike pattern after introduction to minimalistic footwear. PM&R. 2013.
- Daoud AI, Geissler GJ, Wang F, Saretsky J, Daoud YA, Lieberman DE. Foot strike and injury rates in endurance runners: a retrospective study. Med Sci Sports Exerc. 2012;44(7):1325–34.
- Lieberman DE, Venkadesan M, Werbel WA, Daoud AI, D'Andrea S, Davis IS, et al. Foot strike patterns and collision forces in habitually barefoot versus shod runners. Nature. 2010;463(7280):531–5.
- Robbins SE, Hanna AM. Running-related injury prevention through barefoot adaptations. Med Sci Sports Exerc. 1987;19(2):148–56.
- De Wit B, De Clercq D, Aerts P. Biomechanical analysis of the stance phase during barefoot and shod running. J Biomech. 2000;33(3):269–78.
- Giuliani J, Masini B, Alitz C, Owens BD. Barefoot-simulating footwear associated with metatarsal stress injury in 2 runners. Orthopedics. 2011;34(7):e320–e323.
- Salzler MJ, Bluman EM, Noonan S, Chiodo CP, de Asla RJ. Injuries observed in minimalist runners. Foot Ankle Int. 2012;33(4):262–6.
- Jenkins DW, Cauthon DJ. Barefoot running claims and controversies: a review of the literature. J Am Podiatr Med Assoc. 2011;101(3):231–46.
- Rixe JA, Gallo RA, Silvis ML. The barefoot debate: can minimalist shoes reduce running-related injuries? Curr Sports Med Rep. 2012;11(3):160–5.
- Nelson AJ, Collins CL, Yard EE, Fields SK, Comstock RD. Ankle injuries among United States high school sports athletes, 2005–2006. J Athl Train. 2007;42:381–7.
- Bahr R, Bahr IA. Incidence of acute volleyball injuries: a prospective cohort study of injury mechanisms and risk factors. Scand J Med Sci Sports. 1997;7:166–71.
- Beynnon BD, Renström PA, Alosa DM, Baumhauer JF, Vacek PM. Ankle ligament injury risk factors: a prospective study of college athletes. J Orthop Res. 2001;19(2):213–20.
- Fong DT, Hong Y, Chan LK, Yung PS, Chan KM. A systematic review on ankle injury and ankle sprain in sports. Sports Med. 2007;37(1):73–94.
- Hawkins RD, Hulse MA, Wilkinson C, Hodson A, Gibson M. The association football medical research programme: an audit of injuries in professional football. Br J Sports Med. 2001;35:43–7.
- Willems TM, Witvrouw E, Delbaere K, Philippaerts R, De Bourdeaudhuij I, De Clercq D. Intrinsic risk factors for inversion ankle sprains in females – a prospective study. Scand J Med Sci Sports. 2005;15(5):336–45.
- Willems TM, Witvrouw E, Delbaere K, Mahieu N, De Bourdeaudhuij I, De Clercq D. Intrinsic risk factors for inversion ankle sprains in male subjects: a prospective study. Am J Sports Med. 2005;33(3):415–23.
- Tropp H. Commentary: functional ankle instability revisited. J Athl Train. 2002;37(4):512–5.
- Wilkerson GB, Pinerola JJ, Caturano RW. Invertor vs. evertor peak torque and power deficiencies associated with lateral ankle ligament injury. J Orthop Sports Phys Ther. 1997;26(2):78–86.
- Becker H, Rosenbaum D, Claes L, Gerngro H. Measurement of plantar pressure distribution during gait for diagnosis of functional lateral ankle instability. Clin Biomech. 1997;12(3):S19.
- Hertel J. Functional anatomy, pathomechanics, and pathophysiology of lateral ankle instability. J Athl Train. 2002;37(4):364–75.
- Konradsen L. Factors contributing to chronic ankle instability: kinesthesia and joint position sense. J Athl Train. 2002;37(4):381–5.
- Lentell G, Baas B, Lopez D, McGuire L, Sarrels M, Snyder P. The contributions of proprioceptive deficits, muscle function, and anatomic laxity to functional instability of the ankle. J Orthop Sports Phys Ther. 1995;21(4):206–15.
- Willems T, Witvrouw E, Delbaere K, De Cock A, De Clercq D. Relationship between gait biomechanics and inversion sprains: a prospective study of risk factors. Gait Posture. 2005;21(4):379–87.
- Willems TM, De Clercq D, Delbaere K, Vanderstraeten G, De Cock A, Witvrouw E. A prospective study of gait related risk factors for exercise-related lower leg pain. Gait Posture. 2006;23(1):91–8.
- Dixon SJ. Application of center-of-pressure data to indicate rearfoot inversion-eversion in shod running. J Am Podiatr Med Assoc. 2006;96(4):305–12.
- Nigg BM, Stergiou P, Cole G, Stefanyshyn D, Mundermann A, Humble N. Effect of shoe inserts on kinematics, center of pressure, and leg joint moments during running. Med Sci Sports Exerc. 2003;35(2):314–9.
- Chumanov ES, Wall-Scheffler C, Heiderscheit BC. Gender differences in walking and running on level and inclined surfaces. Clin Biomech. 2008;23(10):1260–8.
- Chung MJ, Wang MJ. Gender and walking speed effects on plantar pressure distribution for adults aged 20–60 years. Ergonomics. 2011.
- Eskofier BM, Kraus M, Worobets JT, Stefanyshyn DJ, Nigg BM. Pattern classification of kinematic and kinetic running data to distinguish gender, shod/barefoot and injury groups with feature ranking. Comput Methods Biomech Biomed Engin. 2011;15(5):467–74.
- Ferber R, Davis IM, Williams DS 3rd. Gender differences in lower extremity mechanics during running. Clin Biomech (Bristol, Avon). 2003;18(4):350–7.
- Sinclair J, Hobbs SJ, Taylor PJ, Currigan G, Greenhalgh A. The influence of different force and pressure measuring transducers on lower extremity kinematics measured during running. J Appl Biomech. 2013.
- De Cock A, Vanrenterghem J, Willems T, Witvrouw E, De Clercq D. The trajectory of the centre of pressure during barefoot running as a potential measure for foot function. Gait Posture. 2008;27(4):669–75.
- Chesnin KJ, Selby-Silverstein L, Besser MP. Comparison of an in-shoe pressure measurement device to a force plate: concurrent validity of center of pressure measurements. Gait Posture. 2000;12(2):128–33.
- Lake M, Wilssens JP, Lens T, Mark R, Digby C. Barefoot, shod, plate and insole pressure measurement comparisons during 4–4.5 m/s running in relationships to lower limb movements. 23rd International Symposium on Biomechanics in Sports. 2005. p. 761–4.
- Nyska M, Shabat S, Simkin A, Neeb M, Matan Y, Mann G. Dynamic force distribution during level walking under the feet of patients with chronic ankle instability. Br J Sports Med. 2003;37(6):495–7.
- Wright IC, Neptune RR, van den Bogert AJ, Nigg BM. The influence of foot positioning on ankle sprains. J Biomech. 2000;33(5):513–9.
- Sinclair J, Greenhalgh A, Edmundson CJ, Brooks D, Hobbs SJ. Gender differences in the kinetics and kinematics of distance running: implications for footwear design. Int J Sport Sci Eng. 2012;6(2):118–28.
An introduction to the rapidly evolving methodology of electronic excited states. For academic researchers, postdocs, graduate and undergraduate students, Quantum Chemistry and Dynamics of Excited States: Methods and Applications reports the most up-to-date and accurate theoretical techniques to treat electronic excited states. From methods for stationary calculations through time-dependent simulations of molecular systems, this book serves as a guide for beginners in the field and knowledge seekers alike. Taking into account the most recent theory developments and representative applications, it also covers the often-overlooked gap between theoretical and computational chemistry. An excellent reference for both researchers and students, Excited States provides essential knowledge on quantum chemistry, an in-depth overview of the latest developments, and theoretical techniques around the properties and nonadiabatic dynamics of chemical systems. Readers will learn: essential theoretical techniques to describe the properties and dynamics of chemical systems; electronic structure methods for stationary calculations; methods for electronic excited states from both a quantum chemical and time-dependent point of view; and a breakdown of the most recent developments of the past 30 years. For those searching for a better understanding of excited states as they relate to chemistry, biochemistry, industrial chemistry, and beyond, Quantum Chemistry and Dynamics of Excited States provides a solid education in the necessary foundations and important theories of excited states in photochemistry and ultrafast phenomena.
Discovering quantum physics has never been easier. Combining bold graphics with easy-to-understand text, Simply Quantum Physics is an essential introduction to the subject for those who are short of time but hungry for knowledge. It is a perfect beginner's guide to the strange and fascinating world of subatomic physics that at times seems to conflict with common sense. Covering more than 100 key ideas from the basics of quantum mechanics to the uncertainty principle and quantum tunnelling, it is divided into pared-back, single- or double-page entries that explain concepts simply and visually. Assuming no previous knowledge of physics, Simply Quantum Physics demystifies some of the most groundbreaking ideas in modern science and introduces the work of some of the most famous physicists of the 20th and 21st centuries, including Albert Einstein, Niels Bohr, Erwin Schrödinger, and Richard Feynman. Whether you are studying physics at school or college, or simply want a jargon-free overview of the subject, this essential guide is packed with everything you need to understand the basics quickly and easily.
The renowned Oxford Chemistry Primers series, which provides focused introductions to a range of important topics in chemistry, has been refreshed and updated to suit the needs of today's students, lecturers, and postgraduate researchers. The rigorous, yet accessible, treatment of each subject area is ideal for those wanting a primer in a given topic to prepare them for more advanced study or research. The learning features provided, including exercises at the end of every chapter and online multiple-choice questions, encourage active learning and promote understanding. Moreover, cutting-edge examples and applications throughout the texts show the relevance to current research and industry of the chemistry being described. Computational Chemistry provides a user-friendly introduction to this powerful way of characterizing and modelling chemical systems. This primer provides the perfect introduction to the subject, leading the reader through the basic principles before showing the variety of ways in which computational chemistry is applied in practice to study real molecules, all illustrated by frequent examples. Online Resource Centre The Online Resource Centre to accompany Computational Chemistry features: For registered adopters of the text: * Figures from the book available to download For students: * Full worked solutions to the end-of-chapter exercises * Multiple-choice questions for self-directed learning
This Solutions Manual accompanies the second edition of Donald McQuarrie's Quantum Chemistry. It contains each of the more than 700 problems in the text, followed by a detailed solution. Written by chemistry faculty members Helen O. Leung and Mark D. Marshall, both of Amherst College, in conjunction with Prof. McQuarrie, each solution combines the clarity the authors use in teaching the same material in their own classrooms with the rigor appropriate to learning and appreciating an introduction to quantum chemistry. Both Helen Leung and Mark Marshall are recipients of the Henry Dreyfus Teacher-Scholar Award. They bring to the manual the insight gained from years of using quantum mechanics as spectroscopists with active research programs along with strong, effective pedagogy.
The renowned Oxford Chemistry Primers series, which provides focused introductions to a range of important topics in chemistry, has been refreshed and updated to suit the needs of today's students, lecturers, and postgraduate researchers. The rigorous, yet accessible, treatment of each subject area is ideal for those wanting a primer in a given topic to prepare them for more advanced study or research. The learning features provided, including end of book problems and online multiple-choice questions, encourage active learning and promote understanding. Furthermore, frequent diagrams and margin notes help to enhance a student's understanding of these essential areas of chemistry. Statistical Thermodynamics gives a concise and accessible account of this fundamental topic by emphasizing the underlying physical chemistry, and using this to introduce the mathematics in an approachable way. The material is presented in short, self-contained sections making it flexible to teach and learn from, and concludes with the application of the theory to real systems. Online Resource Centre: The Online Resource Centre to accompany Statistical Thermodynamics features: For registered adopters of the text: * Figures from the book available to download For students: * Worked solutions to the questions and problems at the end of the book. * Multiple-choice questions for self-directed learning
This contributed volume is inspired by the seminal discovery and identification of C60. Starting with a comprehensive discussion featuring graphene-based nanostructures, subsequent chapters include topological descriptions of matrices, polynomials and indices, and an extended analysis of the symmetry and topology of nanostructures. Carbon allotropes such as diamond and its connection to higher-dimensional spaces are explored along with important mathematical and topological considerations. Further topics covered include spontaneous symmetry breaking in graphene, polyhedral carbon structures, nanotube junction energetics, and cyclic polyynes as relatives of nanotubes and fullerenes. This book is aimed at researchers active in the study of carbon materials science and technology.
This book is a collection of select proceedings of the FOMMS 2015 conference. FOMMS 2015 was the sixth triennial FOMMS conference showcasing applications of the theory of computational quantum chemistry, molecular science, and engineering simulation. The theme of the 2015 meeting was Molecular Modeling and the Materials Genome. This volume comprises chapters on many distinct applications of molecular modeling techniques. The content will be useful to researchers and students alike.
The editors of this volume have compiled an important book that is a useful vehicle for important computational research - in the development of theoretical methodologies and their practical applications. Themes include new methodologies, state-of-the-art computational algorithms and hardware, as well as new applications. This volume, Practical Aspects of Computational Chemistry IV, is part of a continuous effort by the editors to document recent progress made by eminent researchers. Most of these chapters have been collected from invited speakers from the annual international meeting "Current Trends in Computational Chemistry", organized by Jerzy Leszczynski, one of the editors of the current volume. This conference series has become an exciting platform for eminent theoretical and computational chemists to discuss their recent findings and is regularly honored by the presence of Nobel laureates. Certainly, it is not possible to cover all topics related to computational chemistry in a single volume, but we hope that the recent contributions in the latest volume of this collection adequately highlight this important scientific area.
This monograph is the first easy-to-read-and-understand book on prion proteins' molecular dynamics (MD) simulations and on prions' molecular modelling (MM) constructions. It enables researchers to see what is crucial to the conformational change from normal cellular prion protein (PrPC) to diseased infectious prions (PrPSc), using MD and MM techniques. As we all know, prion diseases, caused by the body's own proteins, are invariably fatal and highly infectious neurodegenerative diseases affecting humans and almost all animals, making them a major public health concern. A prion contains no nucleic acids; it is a misshapen or conformation-changed protein that acts like an infectious agent, and thus prion diseases are called "protein structural conformational" diseases. PrPC is predominantly α-helical but PrPSc is rich in β-sheets in the form of amyloid fibrils, and so is very amenable to study by MD techniques. Through MD, studies on the protein structures and the structural conversion are very important for revealing secrets of prion diseases and for structure-based drug design or discovery. Rabbits, dogs, horses and buffaloes are reported to be among the few species with low susceptibility to prion diseases; this book's MD studies on these species are clearly helpful for understanding the mechanism underlying resistance to prion diseases. PrP(1-120) usually has no clear molecular structure; this book also studies this unstructured region through MD and especially MM techniques from the global optimization point of view. This book is ideal for practitioners in computing of biophysics, biochemistry, biomedicine, bioinformatics, cheminformatics, materials science and engineering, applied mathematics and theoretical physics, information technology, operations research, biostatistics, etc. As an accessible introduction to these fields, this book is also ideal as teaching material for students.
This book provides a gentle introduction to equilibrium statistical mechanics. The particular aim is to fill the needs of readers who wish to learn the subject without a solid background in classical and quantum mechanics. The approach is unique in that classical mechanical formulation takes center stage. The book will be of particular interest to advanced undergraduate and graduate students in engineering departments.
This brilliant text, a completely original manifesto, covers quantum mechanics from a time-dependent perspective in a unified way from beginning to end. Intended for upper-level undergraduate and graduate courses in quantum mechanics, this text will change the way people think about and teach about quantum mechanics in chemistry and physics departments.
This book covers the results of the Teraflop Workbench, other projects related to High Performance Computing, and the usage of HPC installations at HLRS. The Teraflop Workbench project is a collaboration between the High Performance Computing Center Stuttgart (HLRS) and NEC Deutschland GmbH (NEC-HPCE) to support users in achieving their research goals using High Performance Computing. The first stage of the Teraflop Workbench project (2004-2008) concentrated on users' applications and their optimization for the former flagship of HLRS, an NEC SX-8 installation. During this stage, numerous individual codes, developed and maintained by researchers or commercial organizations, were analyzed and optimized. Within the project, several of the codes have shown the ability to surpass the TFlop/s threshold of sustained performance. This created the possibility for new science and a deeper understanding of the underlying physics. The second stage of the Teraflop Workbench project (2008-2012) focuses on current and future trends of hardware and software developments. We observe a strong tendency towards heterogeneous environments at the hardware level, while at the same time applications become increasingly heterogeneous by including multi-physics or multi-scale effects. The goal of the current studies of the Teraflop Workbench is to gain insight into the developments of both components. The overall target is to help scientists to run their applications in the most efficient and most convenient way on the hardware best suited for their purposes.
Connects fundamental knowledge of multivalent interactions with current practice and state-of-the-art applications Multivalency is a widespread phenomenon, with applications spanning supramolecular chemistry, materials chemistry, pharmaceutical chemistry and biochemistry. This advanced textbook provides students and junior scientists with an excellent introduction to the fundamentals of multivalent interactions, whilst expanding the knowledge of experienced researchers in the field. Multivalency: Concepts, Research & Applications is divided into three parts. Part one provides background knowledge on various aspects of multivalency and cooperativity and presents practical methods for their study. Fundamental aspects such as thermodynamics, kinetics and the principle of effective molarity are described, and characterisation methods, experimental methodologies and data treatment methods are also discussed. Parts two and three provide an overview of current systems in which multivalency plays an important role in chemistry and biology, with a focus on the design rules, underlying chemistry and the fundamental principles of multivalency. The systems covered range from chemical/materials-based ones such as dendrimers and sensors, to biological systems including cell recognition and protein binding. Examples and case studies from biochemistry/bioorganic chemistry as well as synthetic systems feature throughout the book.
- Introduces students and young scientists to the field of multivalent interactions and assists experienced researchers utilising the methodologies in their work
- Features examples and case studies from biochemistry/bioorganic chemistry, as well as synthetic systems throughout the book
- Edited by leading experts in the field with contributions from established scientists

Multivalency: Concepts, Research & Applications is recommended for graduate students and junior scientists in supramolecular chemistry and related fields, looking for an introduction to multivalent interactions. It is also highly useful to experienced academics and scientists in industry working on research relating to multivalent and cooperative systems in supramolecular chemistry, organic chemistry, pharmaceutical chemistry, chemical biology, biochemistry, materials science and nanotechnology.
The renowned Oxford Chemistry Primers series, which provides focused introductions to a range of important topics in chemistry, has been refreshed and updated to suit the needs of today's students, lecturers, and postgraduate researchers. The rigorous, yet accessible, treatment of each subject area is ideal for those wanting a primer in a given topic to prepare them for more advanced study or research. The learning features provided, including questions at the end of every chapter and online multiple-choice questions, encourage active learning and promote understanding. Furthermore, frequent diagrams, margin notes, and glossary definitions all help to enhance a student's understanding of these essential areas of chemistry. Chemical Bonding gives a clear and succinct explanation of this fundamental topic, which underlies the structure and reactivity of all molecules, and therefore the subject of chemistry itself. Little prior knowledge or mathematical ability is assumed, making this the perfect text to introduce students to the subject.
Designed for use in inorganic, physical, and quantum chemistry courses, this textbook includes numerous questions and problems at the end of each chapter and an Appendix with answers to most of the problems.
Graduate-level text explains modern in-depth approaches to the calculation of the electronic structure and properties of molecules. Hartree-Fock approximation, electron pair approximation, much more. Largely self-contained, only prerequisite is solid course in physical chemistry. Over 150 exercises. 1989 edition.
In simple language, without mathematics, this book explains the strange and exciting ideas that make the subatomic world so different from the world of the everyday. It offers the general reader access to one of the greatest discoveries in the history of physics and one of the outstanding intellectual achievements of the twentieth century.
This study guide aims at explaining theoretical concepts encountered by practitioners applying theory to molecular science. This is a collection of short chapters, a manual, attempting to walk the reader through two types of topics: (i) those that are usually covered by standard texts but are difficult to grasp and (ii) topics not usually covered, but are essential for successful theoretical research. The main focus is on the latter. The philosophy of this book is not to cover a complete theory, but instead to provide a set of simple study cases helping to illustrate main concepts. The focus is on simplicity. Each section is made deliberately short, to enable the reader to easily grasp the contents. Sections are collated in themed chapters, and the advantage is that each section can be studied separately, as an introduction to more in-depth studies. Topics covered are related to elasticity, electrostatics, molecular dynamics and molecular spectroscopy, which form the foundation for many presently active research areas such as molecular biophysics and soft matter physics. The notes provide a uniform approach to all these areas, helping the reader to grasp the basic concepts from a common set of theoretical tools.
This multi-author edited volume reviews the recent developments in boron chemistry, with a particular emphasis on the contribution of computational chemistry. The contributors come from Europe, the USA and Asia. About 60% of the book concentrates on theoretical and computational themes whilst 40% is on topics of interest to experimental chemists. Specific themes covered include structure, topology, modelling and prediction, the role of boron clusters in synthetic chemistry and catalysis, as medical agents when acting as inhibitors of HIV protease and carbonic anhydrases.
Designed for science students, this book provides an introduction to atomic and molecular structure and bonding. Following two initial chapters on atomic structure and the electronic properties of atoms and molecules, the book is largely organized according to molecule size, moving from an examination of diatomic molecules in Chapter Three to the infinitely large atomic clusters in Chapter Six.
This book contains precisely referenced chapters, emphasizing environment-friendly polymer nanocomposites with basic fundamentals, practicality and alternatives to traditional nanocomposites through detailed reviews of different environmental friendly materials procured from different resources, their synthesis and applications using alternative green approaches. The book aims at explaining basics of eco-friendly polymer nanocomposites from different natural resources and their chemistry along with practical applications which present a future direction in the biomedical, pharmaceutical and automotive industry. The book attempts to present emerging economic and environmentally friendly polymer nanocomposites that are free from side effects studied in the traditional nanocomposites. This book is the outcome of contributions by many experts in the field from different disciplines, with various backgrounds and expertises. This book will appeal to researchers as well as students from different disciplines. The content includes industrial applications and will fill the gap between the research works in laboratory to practical applications in related industries.
"Ab initio" quantum chemistry has emerged as an important tool in chemical research and is applied to a wide variety of problems in chemistry and molecular physics. Recent developments of computational methods have enabled previously intractable chemical problems to be solved using rigorous quantum-mechanical methods.
This is the first comprehensive, up-to-date and technical work to cover all the important aspects of modern molecular electronic-structure theory. Topics covered in the book include:
- Second quantization with spin adaptation
- Gaussian basis sets and molecular-integral evaluation
- Hartree-Fock theory
- Configuration-interaction and multi-configurational self-consistent field theory
- Coupled-cluster theory for ground and excited states
- Perturbation theory for single- and multi-configuration states
- Linear-scaling techniques and the fast multipole method
- Explicitly correlated wave functions
- Basis-set convergence and extrapolation
- Calibration and benchmarking of computational methods, with applications to molecular equilibrium structures, atomization energies and reaction enthalpies
Molecular Electronic-Structure Theory makes extensive use of numerical examples, designed to illustrate the strengths and weaknesses of each method treated. In addition, statements about the usefulness and deficiencies of the various methods are supported by actual examples, not just model calculations. Problems and exercises are provided at the end of each chapter, complete with hints and solutions.
This book is a must for researchers in the field of quantum chemistry as well as for nonspecialists who wish to acquire a thorough understanding of "ab initio" molecular electronic-structure theory and its applications to problems in chemistry and physics. It is also highly recommended for the teaching of graduates and advanced undergraduates.
Multi-scale Quantum Models for Biocatalysis explores various molecular modelling techniques and their applications in providing an understanding of the detailed mechanisms at play during biocatalysis in enzyme and ribozyme systems. These areas are reviewed by an international team of experts in theoretical, computational chemistry, and biophysics.
This book presents detailed reviews concerning the development of various techniques, including ab initio molecular dynamics, density functional theory, combined QM/MM methods, solvation models, force field methods, and free-energy estimation techniques, as well as successful applications of multi-scale methods in the biocatalysis systems including several protein enzymes and ribozymes.
This book is an excellent source of information for research professionals involved in computational chemistry and physics, material science, nanotechnology, rational drug design and molecular biology and for students exposed to these research areas."
Chemical modelling covers a wide range of hot topics and active areas in computational chemistry and related fields. With the increase in volume, velocity and variety of information, researchers can find it difficult to keep up to date with the literature in these areas. Containing both comprehensive and critical reviews, this book is the first stop for any materials scientist, biochemist, chemist or molecular physicist wishing to acquaint themselves with major developments in the applications and theory of chemical modelling.
Covering most of the topics taught in university courses in quantum chemistry, this authoritative text provides modern concepts of atomic and molecular structure as well as the chemical bond. A brief historical account of the origin of quantum theory and its applications to the problem of atomic spectra and atomic structure is given. Electronic configuration of atoms based on the four-quantum-number system, symbols for atomic states, and the classification of elements and their distribution in the periodic table have been given a comprehensive treatment. Postulates of quantum mechanics, quantum-mechanical operators, the Hamiltonian operator, derivation of the Schroedinger equation, its application to the particle-in-a-box and to the hydrogen atom, quantization of energy levels, the uncertainty principle, probability distribution functions, angular and radial wave functions, nodal properties, sectional and charge-cloud representation of atomic orbitals, etc., have been covered in detail. The valence bond and molecular orbital methods of bonding, hybridization, the orbital structure of common hydrocarbons, bonding in coordination compounds based on valence bond and ligand field theories, the concept of valency, ionic and covalent bonding, bonding in metals, secondary bond forces, and so on have been discussed in a reasonable amount of detail. A unique feature of the book is the adoption of a problem-solving approach. Thus, while the text has been frequently interspersed with numerous fully worked-out illustrative examples to help clarify the concepts and theories, a large number of fully solved problems have been appended at the end of each chapter (totalling nearly 300). With its lucid style and in-depth coverage, the book would be immensely useful to undergraduate and postgraduate students of general chemistry and quantum chemistry. Students of physics and materials science would also find the book an invaluable supplement.
Whatever the problem is, threads are not the answer. At least, not to most software engineers.
In this post I will attack the traditional way in which we write concurrent applications (mostly in C++, but also in other languages). It's not that concurrency is bad; it's just that we are doing it wrong.
The abstractions that we apply in industry to make our programs concurrency-enabled are wrong. Moreover, it seems that the way this subject is taught in universities is also wrong.
By the way, whenever I’ll say concurrency, I will refer to multithreaded applications that have multiple things running at the same time, in the same process, on the same machine. I’m also implying that these applications are run on multi-core systems.
The common mindset
In general, whenever we think about concurrency, we think about applications running on multiple threads, doing work in parallel. That means that the application is responsible for creating and managing threads. Then, we all know that we have threading issues; we solve them by using some sort of synchronization mechanisms: mutexes, semaphores, barriers, etc. I call this thread-and-lock-oriented concurrency, or simply thread-oriented concurrency.
Now, what would be the fundamental concepts that somebody needs to learn to do multithreading in C++? Let's look at a tutorial. It teaches us how to create threads, how to terminate them, and how to join/detach them. The implicit information is that we would put the computations directly on that thread. Most readers will also deduce that for anything they may want to run in parallel with the rest of the application, they would need to create another thread. I believe this model is wrong.
The Java tutorial states that "A multi-threaded program contains two or more parts that can run concurrently and each part can handle a different task at the same time making optimal use of the available resources specially when your computer has multiple CPUs", and then "Multi-threading extends the idea of multitasking into applications where you can subdivide specific operations within a single application into individual threads." In other words: if you have multiple things that can run in parallel, create one thread for each, and your application will use the CPU cores optimally. This is completely wrong.
Maybe these tutorials are not covering the essentials; let us find some other tutorials. Checking out this, and this, and this (all for C++), I see the same things over and over again. For Java, I find this, this, this, and this; again the same basic concepts. To be fair, the last two tutorials have sections for executors (which can be much better), but the focus is still on low-level threading primitives.
That is the common belief about writing multi-threaded applications: you create one thread for each thing that you want to run in parallel. If one goes to more advanced expositions of multi-threading, one will find discussions about thread synchronization issues, and different (locking-based) techniques to avoid these problems. The discussion always focuses on low-level threading primitives. That is, in a nutshell, the thread-oriented or thread-and-lock-oriented concurrency.
The problems with thread-and-lock-oriented model
If we take this model to be the essence of multi-threading development, then a transition from single-threaded application to multi-threaded will encounter the following problems (from a modifiability point of view, not considering performance):
- losing understandability, predictability, determinism
- not composable
- needs synchronization
- thread safety problems
- hard to control
The first problem is seen when developers spend countless hours of debugging, testing and profiling. If one has a single-threaded algorithm/process that is easy to understand, the same algorithm/process transposed into a multi-threaded environment is much harder to understand. Although, to some extent, this is a common problem with multi-threaded development, the thread-oriented model makes it worse. People will sometimes use some form of locking to alleviate some of these problems.
The second problem is slightly more abstract, but extremely important. Let's say that module A has some properties, and module B has some other properties. In a single-threaded environment, putting these modules into the same program will typically not change the properties of these modules. But, if we try to compose them in a multi-threaded environment, we may encounter problems. We have to know the inner details of these modules to check if they work together. We need to check what threads each module uses, what locks they acquire, and more importantly the inner synchronization requirements that they have. For example, if A holds a lock while calling B, and at the same time B holds a lock while trying to call A, we enter a deadlock.
I want to stress once more how important this point is. The main method of solving problems in software engineering is through decomposition. This non-composability property of multi-threaded applications (developed with a thread-oriented approach) makes our programming job much harder.
Threads do not operate in isolation. They typically need to cooperate to achieve the goals of the application. This interaction between threads needs to be solved with some kind of synchronization. Most often, people will use mutexes, semaphores, events and other blocking primitives. Those are really bad; see below for more details.
There is a vast amount of literature describing thread safety issues that commonly appear in multi-threaded programs: deadlocks, livelocks, race conditions, resource starvation, etc. Unfortunately, these problems are typically solved by introducing more locks/waits (directly or indirectly). Again, we will discuss below why this is that bad.
Threads are also hard to control. Once you started a thread with some procedure to be executed, you don’t have much control over how the job is done. Some threads are more important than others and need to finish their job faster; some threads consume resources that are needed for more important threads. These resources may be protected by locks (case in which the more important thread will just wait, wasting time); in some other cases, accessing certain resources (i.e., CPU, cache, memory) will indirectly make other threads slower. The mechanisms for enforcing priorities for different jobs are relatively primitive: just assign a priority to the thread, and throttle the less important threads — this will hardly solve some of the problems.
Lock the locks away
Locks are extremely bad tools (I’m freely using the term lock here to mean any wait-based threading primitive). In a large number of cases, they hurt more than they can help.
The main problem is that they are pure waits; they just introduce delays in the jobs that need to be done. They simply defeat the purpose of having threads to do work in parallel. As Kevlin Henney (DevTube, Twitter) likes to ironically put it, all computers wait at the same speed (see video).
We use threading to increase the amount of work a program can do, but then use locks to slow the processing down. We should avoid locks as much as possible.
Another problem with locks is composability. Using locks the wrong way can easily lead to deadlocks. You simply cannot compose different modules if they hold locks when calling external code.
But, probably the biggest problem with locks is the accumulation of waits. One lock can wait on another lock, which waits on another lock, and so on. I have a very good example from a project that I worked on some time ago. We used a lot of threads (complicated application), and everyone used locks as the only way to solve threading problems. We had a chain of 11 locks, each waiting on some other locks. Also, at given times, the application would just hang for seconds because most of the important threads were locked, waiting for something else. Using threads was supposed to make our application faster, not slower!
As a summary, let me paraphrase a famous quote:
Mutexes provide exclusive access to evil.
Performance: expectation vs reality
The main argument for using threads is performance: we use more threads to divide the work among multiple workers, with the expectation that the job will be done faster. Let us put this assumption to the test.
Let’s assume that we have a problem that can be split up in units of work, like in the following picture:
For the sake of simplicity, we assume that each work unit does the same amount of work (taking the same time & resources when executed). The work units will have some predecessors (other work units that need to be done before it can execute), and some successors (work units that can only start after the current work unit is done). We mark with yellow the work units on the critical path, the ones that are vital for the functioning of the program — we'll pay special attention to them.
Please note that every process can be divided in such a directed graph of work units. In some cases the graph is known from the start, and in other cases the graph is dynamic — it is not known in advance, it depends on the inputs of the process. For our example, we assume that this work breakdown is known in advance.
The thread-oriented model of concurrency would make us assign threads to various lines in this graph. The next figure shows how a possible thread assignment might be:
Each horizontal line represents one thread, and various work units are assigned to each thread. We chose not to add arrows for consecutive work units on the same thread (except when waiting for a work unit's predecessors to be executed).
The diagram shows what I would call the expected execution plan. With the current assignment of work units to threads, we expect the tasks to be executed just like shown in the picture. If the duration of a work unit is 40 ms, we expect the whole processing to be done in 240 ms. That is, instead of waiting 720 ms for all the work units to be executed on a single thread, we wait only 240 ms. We have a speedup of 3 — it can’t be higher, as the dependencies are limiting the amount of parallelism we have.
But, our machines are not ideal. We have a limited number of cores. Let's say that on this machine we only have 4 cores available (and nobody else is using these cores). This means that every time we have more than 4 threads doing meaningful work, they will be fitted into the 4 cores. The cores will jump back and forth between the threads, leading to a slowdown in the execution of the work units.
For our example, the third column shows 8 work units in parallel; this is the only case in which we execute more than 4 work units in parallel. As a consequence, the execution time for the work units in the third column will double. This is depicted by the following picture:
The figure also shows delays (depicted with gray) for all the synchronization points between threads needed to handle the dependencies. Each time a work unit needs to communicate with other threads to start other work units, or each time multiple work units need to complete to start a new work unit, we add such a gray box.
For our example, we considered the synchronization blocks to take 25% of a work unit. If a work unit takes 40 ms, a synchronization block would take 10 ms. This may be too much in some cases, but it is nevertheless possible — and there are always worse cases.
With these two effects considered, we raise the total execution time from 240 ms to 320 ms – that is a 33% loss in performance.
But assigning work units/threads per core is more complex than that. We assumed that if two work units need to share a core, both work units would finish in double the time. But this may not be the case. We may have cache effects between the two, and actually be more than 2 times slower. The constant back and forth between threads will also have an impact on the cache, and thus can make the work units run even slower. The actual switching takes time too, adding further overhead. Figure 4 shows some extra overhead on the work units that have to switch cores; we add 25% more to those work units. In total, the execution time grows to 340 ms.
But wait, we are not done yet. Threads usually don’t work in isolation, and they access shared resources. And the standard view on concurrency is that we need locks to protect these. Those add more delays to the work units, as exemplified in Figure 5:
To simplify our example, we only added lock overheads to the work units in the middle, where we overbook our cores. We have drawn 3 locks of 10 ms each. With this, the total time increases to 370 ms.
Compared to the ideal model (Figure 2), the execution time increased by 54%. That is a very large increase.
If for your application some work units are more important than others, then you may want to ensure that those work units are done faster than the other ones. That is, you can assign higher priorities to some threads compared to other threads. Let's say that in our example we care more about the yellow work units, and don't care that much about the blue ones. We may be tempted to raise the priority of the 7th thread. A possible outcome of this thread priority change can be seen in Figure 6. We would reduce the amount of time needed by the work units assigned to this thread, but, as we have limited resources, we would increase the time needed for other threads to complete their work units.
As one can see from the picture, the results are somewhat strange. We reduced the total execution time from 370 to 360 ms, but on the other hand we've made all the other threads slower, and in some cases the critical path computations end up waiting on those threads.
Compare this with Figure 2: what we expected versus what we got. Not only is the total execution time much bigger, but we've also made sure that we consume most of the cores for a longer period of time. This typically has a ripple effect: other work units get slower, which generates more unexpected behavior.
So, using the common approach to concurrency, we are not gaining as much performance from adding more threads as we think.
Concurrency allows us to have multiple threads going in parallel, and thus increase the throughput of our applications. That is a good thing. But, unfortunately these threads need to communicate: they need to access the same data, the same resources, they need to make progress towards the same goal. And this obviously is the root of the problem.
Here is a good analogy for thread-oriented concurrency model by Kevlin Henney: Concurrency Versus Locking. Building on this idea, we can define:
| Software world | Automotive world |
| --- | --- |
| work unit | set of cars that pass over a road in a period of time |
| work unit dependencies | cars need to go from one road to another |
| total execution time | total time for all cars to reach the destination |
| lock/semaphore | traffic lights / roundabout |
| too few work unit dependencies | (highway) road network badly connected (sometimes this means a long way to nearby locations) |
| too many work unit dependencies | too many intersections or access points (too much time spent in these) |
| too many small threads (descheduled often) | small roads |
| threads that are not descheduled from cores | highways |
This analogy allows us to properly feel the scale of the problem, and also it can guide us to find better solutions.
For example, using this analogy, it is clear that adding too many work unit dependencies (i.e., very small work units) will make us spend too much time in the synchronization part. At the opposite pole, if we have very few dependencies, once you start a work unit, you have to wait for its completion to get new work units executed on the same thread.
In the automotive world, we would like to have as many highways as possible, and as little (blocking) intersections as possible.
According to the common concurrency mindset, we create a lot of small threads (small roads), and to solve our problems we add a lot of locks (traffic lights).
We all want our threads to behave like highways, but instead we add a lot of locks.
An application with a lot of locks is like a highway with traffic lights every few miles.
Think about that, next time you want to add a lock in your application! It may also help to consider the following picture when adding locks:
Teasing: a way out of this mess
Do not despair. Concurrency doesn't need to be like this. The automotive world teaches us that we can have high-speed highways and a well-connected road network at the same time. In this short section, I'll give a brief teaser of what I think is the solution to all these concurrency problems.
The key point is that we should start thinking in terms of tasks, not in terms of threads and locks. We should raise our abstraction level from threading primitives to high-level concurrency constructs. These tasks correspond to the work units we've discussed so far (I wanted to use a different terminology to indicate that the previous example is not the right way to encode concurrency). The main point is that we should approach concurrency problems by breaking a problem down into a directed acyclic graph of tasks, like the one shown in Figure 1 above.
Let us go through the list of problems and see how these are solved in a task-oriented approach:
| Thread-oriented problem | Task-oriented correspondent |
| --- | --- |
| losing understandability, predictability, determinism | if every problem is decomposed into tasks, it is much easier to understand the problem and predict the outcome, and determinism is greatly improved |
| not composable | directed acyclic graphs are composable; problem completely solved |
| needs synchronization | a proper execution graph will, in most situations, avoid the need for locking; synchronization is only needed at task begin/end, and this can be pushed down to the framework level |
| thread safety problems | if the graph is properly constructed, these will disappear; problem solved |
| hard to control | by definition, a task-oriented system is much more flexible and can be better controlled |
As one can see, most of the problems are either solved or greatly alleviated. But, above all, there is an even more important benefit that I want to stress. The traditional thread-oriented approach, being bad at composability, impedes the use of top-down decomposition for solving problems (divide-and-conquer approaches), which is (most probably) the best-known method of software design. A task-oriented approach, on the other hand, actually helps with this method: it is easy to compose directed acyclic graphs. In other words, task-oriented is much better than thread-oriented from a design perspective.
From the performance point of view, task-oriented concurrency can also be much better than thread-oriented. The main idea is to let a framework decide the best execution strategy, instead of letting the user pick a predefined schema. By embedding this into the framework, one can optimize task scheduling much better. Note: although scheduling tasks automatically works better than a statically arranged system, an expert can always use hand-picked optimizations to match or even beat the automatic system; but this typically involves a lot of effort.
For the problem that we’ve defined in Figure 1, a task-oriented system would produce an execution similar to:
This arrangement is entirely possible within a task-oriented system. And we don’t use more threads than cores, we don’t use locks, and we don’t throttle threads. We only need synchronization when a worker thread picks up a task that is not a direct follower of the task it just finished executing. A good task system has highly optimized synchronization, so this cost is typically very small; here we made it 1/8 of the task time. If the task time is 40 ms, then the total time would be 255 ms. This is close to the ideal time with no synchronization (240 ms), and much better than the typical time obtained with a thread-oriented approach (360 ms): roughly 30% less execution time.
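The arithmetic behind these numbers can be reproduced as follows. Note that the per-worker task count (6) and the number of cross-thread handoffs (3) are assumptions chosen to match the figures in the text; they are not stated explicitly:

```python
# Back-of-envelope check of the numbers above. Assumptions: each worker
# runs 6 tasks of 40 ms, and 3 of those tasks are taken from another
# branch of the graph, each paying one synchronization of 1/8 task time.
task_ms = 40
sync_ms = task_ms // 8                      # 5 ms per synchronization

ideal_ms = 6 * task_ms                      # 240 ms, no synchronization
task_oriented_ms = ideal_ms + 3 * sync_ms   # 255 ms, as quoted above
thread_oriented_ms = 360                    # the thread-and-lock figure

print(ideal_ms, task_oriented_ms)                        # prints: 240 255
print(round(thread_oriented_ms / task_oriented_ms, 2))   # prints: 1.41
```

Under these assumptions the task-oriented schedule finishes in roughly 71% of the thread-oriented time, i.e. it is about 1.4x faster.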
More on task-oriented approach in a later post.
We discussed in this post the thread-and-lock-oriented model, in which we add threads to create parallelism, and then add locks to protect shared resources. We then discussed the main problems with this model. Besides the problems related to modifiability, we discussed how locks are an anti-pattern, and then showed that the model has performance problems: the execution time differs from the (naive) expected execution time. We used the analogy with cars, roads and intersections to give an intuition for why threading, the way we typically think of it, is a bad approach. Finally, we briefly introduced another way of thinking about concurrency, a task-oriented approach, that promises to solve most of the problems associated with concurrency.
Things to remember:
- traditional approach to concurrency is to create threads, and use low-level primitives to battle the problems introduced by threads
- locks are evil; they simply defeat the purpose of multi-threading; they are an anti-pattern
- Mutexes provide exclusive access to evil
- in terms of performance, the typical expectation is wrong; the reality is typically far worse than what we would expect from creating threads
- every time you add locks to a system, think about how bad a highway would be with traffic lights every few miles; and think about the worst traffic jam you have ever been in, and the fact that adding locks can turn your software into something similar to a traffic jam
- an application with a lot of locks is like a highway with traffic lights every few miles
- consider a task-oriented approach when dealing with concurrency; this promises modifiability advantages, but also performance gains
- avoid locks, avoid locks, avoid locks
Until next time. Keep truthing! | <urn:uuid:139332bb-6428-471e-8982-eb07cb64438c> | CC-MAIN-2021-21 | http://lucteo.ro/2018/09/02/threads-are-not-the-answer/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00454.warc.gz | en | 0.949121 | 4,754 | 3.046875 | 3 |
Timor-Leste experienced a fundamental social and economic upheaval after its people voted for independence from Indonesia in a referendum in August 1999. The population was displaced, and public and private infrastructure was destroyed or rendered inoperable. Soon after the violence ceased, the country began rebuilding itself with support from UN agencies, the international donor community and NGOs. The government laid out a National Development Plan (NDP) with two central goals: to promote rapid, equitable and sustainable economic growth, and to reduce poverty.
Formulating a national plan and poverty reduction strategy required data on poverty and living standards, and given the profound changes experienced, new data collection had to be undertaken to accurately assess the living conditions in the country. The Planning Commission of the Timor-Leste Transitional Authority undertook a Poverty Assessment Project along with the World Bank, the Asian Development Bank, the United Nations Development Programme and the Japanese International Cooperation Agency (JICA).
This project comprised three data collection activities on different aspects of living standards, which, taken together, provide a comprehensive picture of well-being in Timor-Leste. The first component was the Suco Survey, which is a census of all 498 sucos (villages) in the country. It provides an inventory of existing social and physical infrastructure and of the economic characteristics of each suco, in addition to aldeia (hamlet) level population figures. It was carried out between February and April 2001.
A second element was the Timor-Leste Living Standards Measurement Survey (TLSS). This is a household survey with a nationally representative sample of 1,800 families from 100 sucos. It was designed to diagnose the extent, nature and causes of poverty, and to analyze policy options facing the country. It assembles comprehensive information on household demographics, housing and assets, household expenditures and some components of income, agriculture, labor market data, basic health and education, subjective perceptions of poverty and social capital. Data collection was undertaken between late August and November 2001.
The final component was the Participatory Potential Assessment (PPA), which is a qualitative community survey in 48 aldeias in the 13 districts of the country to take stock of their assets, skills and strengths, identify the main challenges and priorities, and formulate strategies for tackling these within their communities. It was completed between November 2001 and January 2002.
Kind of Data
Sample survey data [ssd]
1 Household information
A: Household Roster
B: New members since the violence in 1999
C: Persons leaving household after violence in 1999
D: Information on parents of household members
A: Description of the dwelling
B: Housing state
D: Ownership and expenditure
3 Access to facilities
4 Expenditures and consumption
A: Weekly food consumption
B: Monthly and annual non-food expenditure
C: Durable goods
A: General education
B: Attendance school years 1998/99-2001/02
A: Health care use
B: Children's health
7 Fertility and maternity history
A: Labour force participation
B: Job information
C: Individual time use
9 Farming and livestock
B: Crops harvested
C: Agricultural inputs
E: Farming equipment
F: Labour and farm produce
H: Fishing and aquaculture
10 Transfers, borrowing and savings
A: Transfers given and loaned
B: Transfers received
D: Aid assistance
Domains: Urban/rural; Agro-ecological zones (Highlands, Lowlands, Western Region, Eastern Region, Central Region)
Producers and sponsors
National Statistics Directorate
The World Bank
SAMPLE SIZE AND ANALYTIC DOMAINS
A survey relies on identifying a subgroup of the population that is representative both of the underlying population and of specific analytical domains of interest. The main objective of the TLSS is to derive a poverty profile for the country and salient population groups. The fundamental analytic domains identified are the Major Urban Centers (Dili and Baucau), the Other Urban Centers and the Rural Areas. The survey represents certain important sub-divisions of the Rural Areas, namely two major agro-ecologic zones (Lowlands and Highlands) and three broad geographic regions (West, Center and East). In addition to these domains, we can separate landlocked sucos (Inland) from those with sea access (Coast), and generate categories merging rural and urban strata along the geographic, altitude, and sea access dimensions. However, the TLSS does not provide detailed indicators for narrow geographic areas, such as postos or even districts. [Note: Timor-Leste is divided into 13 major units called districts. These are further subdivided into 67 postos (subdistricts), 498 sucos (villages) and 2,336 aldeias (sub-villages). The administrative structure is uniform throughout the country, including rural and urban areas.]
The survey has a sample size of 1,800 households, or about one percent of the total number of households in Timor-Leste. The experience of Living Standards Measurement Surveys in many countries - most of them substantially larger than Timor-Leste - has shown that samples of that size are sufficient for the requirements of a poverty assessment.
The survey domains were defined as follows. The Urban Area is divided into the Major Urban Centers (the 31 sucos in Dili and the 6 sucos in Baucau) and the Other Urban Centers (the remaining 34 urban sucos outside Dili and Baucau). The rest of the country (427 sucos in total) comprises the Rural Area. The grouping of sucos into urban and rural areas is based on the Indonesian classification. In addition, we separated rural sucos both by agro-ecological zones and geographic areas. With the help of the Geographic Information System developed at the Department of Agriculture, sucos were subsequently qualified as belonging to the Highlands or the Lowlands depending on the share of their surface above and below the 500 m level curve. The three westernmost districts (Oecussi, Bobonaro and Cova Lima) constitute the Western Region, the three easternmost districts (Baucau, Lautem and Viqueque) the Eastern Region, and the remaining seven districts (Aileu, Ainaro, Dili, Ermera, Liquica, Manufahi and Manatuto) belong to the Central Region.
SAMPLING STRATA AND SAMPLE ALLOCATION
Our next step was to ensure that each analytical domain contained a sufficient number of households. Assuming a uniform sampling fraction of approximately 1/100, a non-stratified 1,800-household sample would contain around 240 Major Urban households and 170 Other Urban households -too few to sustain representative and significant analyses. We therefore stratified the sample to separate the two urban areas from the rural areas. The rural strata were large enough so that its implicit stratification along agro-ecological and geographical dimensions was sufficient to ensure that these dimensions were represented proportionally to their share of the population. The final sample design by strata was as follows: 450 households in the Major Urban Centers (378 in Dili and 72 in Baucau), 252 households in the Other Urban Centers and 1,098 households in the Rural Areas.
The sampling of households in each stratum, with the exception of Urban Dili, followed a 3-stage procedure. In the first stage, a certain number of sucos were selected with probability proportional to size (PPS). Hence 4 sucos were selected in Urban Baucau, 14 in Other Urban Centers and 61 in the Rural Areas. In the second stage, 3 aldeias in each suco were selected, again with probability proportional to size (PPS). In the third stage, 6 households were selected in each aldeia with equal probability (EP). This implies that the sample is approximately self-weighted within the stratum: all households in the stratum had the same chance of being visited by the survey.
A simpler and more efficient 2-stage process was used for Urban Dili. In the first stage, 63 aldeias were selected with PPS, and in the second stage 6 households were selected with equal probability in each aldeia (for a total sample of 378 households). This procedure reduces sampling errors since the sample is spread more widely than with the standard 3-stage process, but it could only be applied to Urban Dili, since only there was it possible to sort the selected aldeias into groups of 3 aldeias located in close proximity to each other.
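As an illustration of the PPS selection used in the first two stages, here is a minimal sketch. The suco identifiers and household counts are invented, and production surveys typically use systematic PPS rather than this simple sequential draw:

```python
import random

def pps_sample(units, sizes, k, rng):
    """Draw k units without replacement, with probability proportional
    to size. Sequential draws; real surveys often use systematic PPS."""
    units, sizes = list(units), list(sizes)
    chosen = []
    for _ in range(k):
        total = sum(sizes)
        r = rng.uniform(0, total)
        cum = 0
        for i, size in enumerate(sizes):
            cum += size
            if r <= cum:
                chosen.append(units.pop(i))
                sizes.pop(i)
                break
    return chosen

rng = random.Random(42)                        # fixed seed for reproducibility
sucos = ["suco_%02d" % i for i in range(20)]   # invented suco identifiers
households = [rng.randint(50, 400) for _ in sucos]  # invented household counts
selected = pps_sample(sucos, households, k=4, rng=rng)
print(len(selected))  # prints: 4
```

Larger sucos are proportionally more likely to be drawn, which, combined with equal-probability selection of a fixed number of households per aldeia, is what makes the overall sample approximately self-weighted within a stratum.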
The final sampling stage requires choosing a certain number of households at random with equal probability in each of the aldeias selected by the previous sampling stages. This requires establishing the complete inventory of all households in these aldeias - a field task known as the household listing operation. The household listing operation also acquires importance as a benchmark for assessing the quality of the population data collected by the Suco Survey, which was conducted in February-March 2001. At that time, the number of households currently living in each aldeia was asked from the suco and aldeia chiefs, but there are reasons to suspect that these figures are biased. Specifically, certain suco and aldeia chiefs may have answered about households belonging, rather than currently living, in the aldeias, whereas others may have faced perverse incentives to report figures different from the actual ones. These biases are believed to be more serious in Dili than in the rest of the country.
Two operational approaches were considered for the household listing. One is the classical door-to-door (DTD) method that is generally used in most countries for this kind of operation. The second approach, which is specific to Timor-Leste, depends on the lists of families that are kept by most suco and aldeia chiefs in their offices. The prior-list-dependent (PLD) method is much faster, since it can be completed by a single enumerator in each aldeia, working most of the time in the premises of the suco or aldeia chief; however, it can be prone to biases depending on the accuracy and timeliness of the family lists.
After extensive empirical testing of the weaknesses and strengths of the two alternatives, we decided to use the DTD method in Dili and an improved version of the PLD method elsewhere. The improvements introduced to the PLD consisted of clarifying the concept of a household "currently living in the aldeia", both by intensive training and supervision of the enumerators and by making its meaning explicit in the form's wording (it means that the household members are regularly eating and sleeping in the aldeia at the time of the operation). In addition, the enumerators were asked to select a random sample of 10 households from the list, and visit them physically to verify their presence and ask them a few questions.
Training for the listing operation was done on May 18 and 19, 2001 and was conducted by Manuel Mendonca, Juan Muñoz, Rodrigo Muñoz and Valerie Evans. It was stressed that it was important for the aldeia chiefs to understand that there was no aid coming as a result of this listing. The supervisors were also trained by Lourenco Soares and Rodrigo Muñoz to use the program installed on their laptops to record agricultural data being collected for JICA while the teams were in the field for the listing operation. This was an opportunity for the supervisors to become familiar with entering data in the field as a preparation for the TLSS. Finally, the listing operation was carried out by 5 teams, each one comprising one supervisor and three enumerators, between May 21 and June 28.
See detailed information on selection probabilities and sampling weight calculations in document titled "Basic documentation".
Dates of Data Collection
Data Collection Mode
Data Collection Notes
RECRUITMENT AND TRAINING
Part of the required workforce to carry out the survey fieldwork was drawn from the teams that did the household listing; indeed, all of them were involved in that process too. This had the advantage that they already knew the location of the sucos and aldeias and had met their chiefs. Household listing records on how to access each aldeia, whether by vehicle or by foot, and the time to get there from the suco center, had also been kept and were used for planning purposes. However, additional people were also recruited to complete the necessary teams for the fieldwork; specific language requirements applied to most of them, i.e. knowledge of Fataluku, Bunak or Mambae. In the end, 37 people were trained and the best 32 were chosen for the enumeration. The best supervisor from the listing operation, Elias Dos Santos, was chosen to be the Field Coordinator and to assist in the enumerator training. The remaining 4 persons were kept as a backup and to do some work in Dili. Hence, eight field teams, each composed of three interviewers and one supervisor, conducted the household survey. Six teams were outside Dili, one for Oecussi and two in Dili, the main one and the spare team.
The survey was fielded from late August to early December 2001. Each team was responsible for covering one aldeia per week, so each interviewer had to interview 6 households during that period. Several visits to each household were required to complete all modules of the questionnaire.
Each of the 300 selected aldeias was to have 6 households interviewed, for a total of 1,800 households. The questionnaires for each aldeia were sent out with a tracking sheet containing the names of the head of household for the 6 selected houses, and three reserve households in case the original households were not available. If an original household (numbered 1-6) was not interviewed, it was to be replaced with the first reserve household, numbered HH 7. If a second original household, or the first reserve, was not available, it was to be replaced with the second reserve household (HH 8), and so on for the third reserve household (HH 9). For any replacement, a full description of why the original household could not be interviewed was to be documented on the tracking sheet by the supervisors.
Overall, there were 303 cases where a household had to be replaced. Among the reasons given for non-completion of the interviews, a few points are interesting. The refusal rate was extremely low: there were only 6 refusals in the entire survey, and of those, only two were outright refusals. Second, there is a great deal of movement in the country, and this accounts for the bulk of the replacements, 255, although it must be said that most of them appear to be temporary moves. One reason why people leave their aldeia temporarily is that after the harvest they have to go somewhere else where they can find work; otherwise they have nothing to do and cannot support themselves. The other explanation is that during planting time they have to move to their land for several weeks, because it is at a considerable distance from their dwelling. Finally, the remaining 42 replacements occurred either because the dwelling could not be found or was empty, or because the dwelling should not have been included in the listing.
Following completion of the fieldwork, a general debrief was held at the World Bank’s Dili offices with the participation of almost all supervisors and interviewers. The intention was to discuss issues and share experiences on the enumeration process, such as their perceptions about their work, problems encountered, comments on sections of the questionnaire that were particularly hard to answer, the level of cooperation of the chiefs and the reception of the households interviewed. For instance, the health section seemed to be of special importance for the interviewees and many of them spoke about the need for more health services, the consumption module was considered a bit long, almost all women answered the fertility section without major problems, the Indonesian wording of some agricultural questions was ambiguous, chiefs were very cooperative and the participation of the households was more than satisfactory.
The 2001 TLSS household questionnaire follows the regular design of a Living Standards Measurement Study (LSMS) survey. It was designed to collect all the information required for a fairly comprehensive assessment of living standards and to provide the key indicators for social and economic planning. It comprises thirteen main sections and several subsections, each covering different topics about household activities. As a result, each household had to be visited at least two times to complete all sections.
Two additional sections are worth noting when comparing this questionnaire with standard LSMS questionnaires. The first one refers to social capital, and tries to capture the involvement of the population in user or community groups and local networks as means of support, both economic and social. The second one is about subjective well-being. It covers individual perceptions of living standards, economic and power status, and main concerns for the individual and for the country. It also provides information on consumption adequacy for food, housing, health, income, etc. Lastly, vulnerability, understood mainly as food insecurity, is addressed in this section too. Data are gathered on the number of months with inadequate food provision, the members who suffered the most, and coping strategies.
A decentralized approach to data entry was adopted in Timor-Leste. Data entry proceeded side by side with data gathering with the help of laptops to ensure verification and correction in the field. The purpose of this procedure was twofold. First, it reduced the time of data processing because it was not necessary to send the questionnaires to the central office to be entered. More important, data were available for analysis very soon after the fieldwork was completed. And second, it allowed for immediate and extensive checks on data quality. Any inconsistency revealed at this stage was to be rectified by revisiting the households while still being in the village, and so, the need for later data editing was minimized. A second round of standard checks on data quality was also implemented in the project office in Dili upon retrieval of the data from the field teams. In general, with a few exceptions, the analysis has confirmed the high quality of the data entry and validation processes.
The data entry program was designed to check for data entry errors and coding mistakes, as well as to search for incomplete or inaccurate data collection. It was based upon two major types of checks. On the one hand, standard value-range checks were included. If the data entry operator entered data outside the bounds of the programmed range, either because the number was not a pre-coded one or because it was extremely unlikely, the program would alert him. On the other hand, it also contained a series of checks to ensure that the data collected were internally consistent. The skip pattern used in the questionnaire was programmed into the data entry software to ensure that the information entered was consistent with the desired skip pattern. For instance, if the code “3” was entered by mistake in a question where the only valid responses were “1” or “2”, the program would alert the operator. Similarly, if the household reported having purchased a particular good, the program would check whether information on quantities and expenditure was also reported. However, if the data entered into the computer matched the information provided in the questionnaires, the data entry operators were instructed not to make any changes. Such cases were brought to the attention of the supervisor, who either corrected the mistake based on other information collected in the questionnaire or decided whether a visit to that household was necessary.
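A minimal sketch of the two kinds of checks described above (value-range checks and skip-pattern consistency checks) might look like the following; the field names, codes and ranges are invented for illustration, not taken from the actual TLSS program:

```python
# Value-range and skip-pattern checks, sketched. Field names, codes and
# ranges are invented; the actual TLSS program used its own definitions.
RANGES = {"sex": {1, 2}, "age": range(0, 111)}

def check_record(rec):
    """Return a list of warnings; the operator decides what to do next."""
    warnings = []
    # Value-range checks: flag answers outside the programmed bounds.
    for field, allowed in RANGES.items():
        if field in rec and rec[field] not in allowed:
            warnings.append(f"{field}={rec[field]} outside the valid range")
    # Skip-pattern check: a reported purchase (code 1) must come with
    # quantity and expenditure; a non-purchase (code 2) must not.
    if rec.get("purchased") == 1:
        for dep in ("quantity", "expenditure"):
            if rec.get(dep) is None:
                warnings.append(f"purchased=1 but {dep} is missing")
    elif rec.get("purchased") == 2:
        for dep in ("quantity", "expenditure"):
            if rec.get(dep) is not None:
                warnings.append(f"purchased=2 but {dep} is filled in")
    return warnings

good = {"sex": 1, "age": 34, "purchased": 2}
bad = {"sex": 3, "age": 34, "purchased": 1}
print(len(check_record(good)), len(check_record(bad)))  # prints: 0 3
```

As in the field procedure described above, the sketch only warns: it never silently changes a value, leaving the decision to the supervisor.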
LSMS Data Manager
The World Bank
In receiving these data it is recognized that the data are supplied for use within your organization, and you agree to the following stipulations as conditions for the use of the data:
1. The data are supplied solely for the use described in this form and will not be made available to other organizations or individuals. Other organizations or individuals may request the data directly.
2. Three copies of all publications, conference papers, or other research reports based entirely or in part upon the requested data will be supplied to:
National Statistics Directorate
Caicoli, Dili, Timor Leste
The World Bank
Development Economics Research Group
LSMS Database Administrator
1818 H Street, NW
Washington, DC 20433, USA
tel: (202) 473-9041
fax: (202) 522-1153
3. The researcher will refer to the 2001 Timor-Leste Survey of Living Standards as the source of the information in all publications, conference papers, and manuscripts. At the same time, the National Statistics Directorate is not responsible for the estimations reported by the analyst(s).
4. Users who download the data may not pass the data to third parties.
5. The database cannot be used for commercial ends, nor can it be sold.
Use of the dataset must be acknowledged using a citation which would include:
- the Identification of the Primary Investigator
- the title of the survey (including country, acronym and year of implementation)
- the survey reference number
- the source and date of download
Disclaimer and copyrights
The user of the data acknowledges that the original collector of the data, the authorized distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
DDI Document ID
World Bank, Development Economics Data Group
Production of metadata
Date of Metadata Production
DDI Document version
Version 02 (March 2011). | <urn:uuid:997f7036-9097-4ed8-8b6f-27cf51a42de2> | CC-MAIN-2021-21 | https://microdata.worldbank.org/index.php/catalog/75/study-description | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00011.warc.gz | en | 0.959215 | 4,428 | 2.875 | 3 |
HISTORY OF CONGRESS
The history of the Canadian Polish Congress is divided into the two periods presented below. In the first period, our Polish-Canadian umbrella organization in Canada was named the Federation of Polish Societies in Canada. The Federation was granted federal status on February 7, 1933, under Corporation Number #349500. The change of the organization's name came into effect during the General Meeting held in Toronto on September 2-4, 1944.
The Federation of Polish Societies in Canada
The Federation of Polish Societies was the first Polonian umbrella organization founded in Canada. It played an important part in the consolidation of Polish organizations in Canada.
It is not possible now to say for certain where the initiative for forming a central organization of Poles in Canada originated. It cannot be ascertained whether the concept was put forward by Dr. Jerzy Adamkiewicz, Consul General of the Republic of Poland in Montreal, or by the authorities in Warsaw, which took over the idea and made it a reality.
Julian Topolnicki, an active leader in Polonian organizations, one of the founders of the Federation and its representative in Montreal, writes in his memoirs that the idea of establishing a central organization was born in the White Eagle Society in Montreal in 1931. The president of this organization, Ludwik Wiktor, carried out a series of discussions with his members and with the Society of Polish Veterans, the Polish Brotherly Aid Society and the Polish Catholic League.
In these discussions, an active role was played by Mr. J.M. Kreutz, editor of the now-defunct Montreal weekly, “The Polish Word” (“Slowo Polskie”). As a result of the preparatory work of several interested persons, a convention was held in Toronto on November 3-4, 1931. Thirty-five delegates from Ontario and Quebec attended the convention, among them representatives of Catholic and Protestant organizations active in parishes.
At the First Convention the policy was clearly laid down that the new organization was to be exclusively a representative forum of local organizations, with no intention to limit the activity of the local bodies, nor take advantage of their finances. A statute was accordingly presented to the convention that would permit any organization that joined the Federation to maintain its financial independence.
After much debate, the statute was accepted. This statute was prepared by Dr. Nalecz-Dobrowolski, and gives the following aims and purposes to the Federation:
Concentrating and organizing all Polish groups in Canada, maintaining brotherly unity among them.
To wield high the banner of national honour, while drawing strength from the inexhaustible source of national ideals and the heroic past of our Mother-country; defending Poland and its people from enemy attacks while loyally using all its resources for the multi-faceted development of Canada.
Organizing for its members purposeful and fruitful help, both moral and material, as well as, should the need arise, defending the entire Polish community in Canada
Representing the Polish Emigration in Canada to the Canadian political, government and social authorities, to the same authorities in Poland and in other Polish emigration centres, and on the international scene”.
This list mirrors the aims and purposes of the member organizations. Almost all the Polonian organizations of this period were oriented toward Poland; Canadian problems were secondary.
One point from the statute (paragraph 14) must be mentioned: “No organization whose intentions include sudden change of the existing social order of the world, by revolution, will be permitted to belong to the Federation”.
By this article in the statute, all communist associations, regardless of the name they operated under, were excluded from membership. “Communists” were not named directly, since the Communist Party was not a legal political organization in Canada. Polish communists attempted, nevertheless, to join the Federation, or to prevent its founding. Three representatives of communist association were present at the First Convention in Toronto, but by a vote of 29 to 18, they were asked to leave the conference room.
Twenty-four organizations announced their intention of joining the Federation, including a few religious organizations. The Polish Alliance of Canada was represented at the convention, but by a decision taken at an extra-curricular meeting on January 2, 1932, did not become a member of the Federation.
Mr. J.M. Kreutz, editor of the “Polish Word”, a publication active in the organizing procedures, wrote an article following the convention in which he wrote, among other things:
“Fellow countrymen! The deed accomplished at the Convention in Toronto should be for us all an expression of our national unity; be our pride and joy; and proof of the fact that we have transcended the era of merely local activity, having attained a stage at which the Polish population in Canada can be counted as a part of the Great Polish nation, and not as a group of pariah and outcasts battling with fate.
Congratulations, fellow-Poles, for so much understanding shown for the cause, and for such an amount of unselfish love for Poland, our great Fatherland”.
There is a definite lack, both in the existing archives and in Polonian press, of reports of the Federation activities in the first year of its existence. We may assume that it was a time of organization and prolific correspondence with the aim of cementing contacts; a time to sound out opinions and study possibilities.
The Second Convention was held at Windsor, Ontario, on November 5, 1932. Amendments to the statute concerning the transfer of the office to Winnipeg were prepared. This matter was decided by means of correspondence.
The Federation was strengthened by the fact that many organizations from western Canada became members, but neither the Polish Alliance of Canada (in Ontario) nor the parish societies, united in the Association of Poles (in the west), could be persuaded to join the Federation. While the Polish Alliance sent observers to the convention and promised to review its stand, the Association of Poles persistently maintained its independence.
After the Second Convention the Federation began a recruiting campaign, sending invitations to local organizations, and culminating in a tour by General Secretary John Sikora. Mr. Sikora toured Ontario and Quebec, where he visited the larger communities, making speeches promoting the aim of consolidating the work of Polonian communities within the Federation. He spoke at public meetings in Toronto, Hamilton, Windsor, Kitchener, Kirkland Lake, Timmins and Montreal, and conferred with the leaders of various local organizations, both members of the Federation and potential members.
This campaign did have some positive results. In 1933 thirty-three declarations of membership were received, with an additional sixteen organizations coming in the following year, for a total membership of 3,391 individuals. The statute also provided for the acceptance of individuals as independent members, but this form proved to be less attractive, and only ten people became members in this fashion.
The Federation carried out a number of activities requiring funds. There was the fund-raising drive for flood victims in Poland which brought in ,301; the trip of the general secretary to the SWIATPOL convention in Warsaw; organizing the festivities of the “Day of the Sea” in 1933. These manifestations had a highly patriotic character. For the purposes of Polish education in Canada, the Federation received a donation of 5,83 from Warsaw, and this amount was supplemented by profits from other sources and fund-raising drives.
The leaders of the Federation wanted to model the organization on that of a benevolent society. In contrast to the United States, though, where Polonian organizations relied on these foundations, in Canada only a few local organizations were founded on the basis of insurance. The central organization did not change its organizational foundations.
Education was rightly perceived as one of the most important tasks of the Federation. Consequently, appeals for financial support were sent out to member organizations, and a number of leaflets and communiqués were published, emphasizing the importance of this issue. Fund-raising for the Education Fund was begun; 11,000 special stamps/stickers were printed, to be sold at 5 cents each. Somewhat later, member organizations were informed that the head executive board had designated February 1935 as Education Month.
For the purposes of education the Federation received a donation from the World Association of Poles Abroad (SWIATPOL), primarily in the form of teaching aids. By 1935 SWIATPOL had provided about 3,000 textbooks, the youth magazine “Plomyczek”, and other materials such as posters, reproductions of paintings and maps. Existing libraries were restocked and new ones added. The salary SWIATPOL paid to Mr. Sikora, the school inspector, varying from to 0 a month, was a more direct form of subsidy.
Attention was also given to the problem of preserving the “Polishness” of emigrants, particularly among the youth. Warsaw proposed that the Federation send young people to Poland for various types of courses and instruction, which would help, indirectly, to prepare them to take over leadership of the organization. Among the courses offered were ones dealing in sports, gymnastics, scouting and music. Candidates were screened by the consulates until in 1935, after the intervention of SWIATPOL, the Federation began fulfilling this function. The consulates arranged their transportation on Polish ships; the students received discounts on the cost of the trip, free food and lodging for the duration of the course, but had to possess a certain amount of personal spending money. In all, thirty-four young people, both men and women, went to Poland under this program between 1934 and 1936.
The Fourth Convention in Winnipeg, held September 24-26, 1936 resulted in changes in the head executive. The period of John Sikora’s role as secretary and the presidency of B.B. Dubienski came to an end. The report for the convention stated that the Federation was constantly growing in strength, as eighteen new organizations had joined, bringing the total membership to sixty-three.
The Fifth Convention was held in Montreal, September 29 to October 1, 1938. Membership of the Federation had grown to seventy-four organizations, indicating the Federation to be a “great central representative of the Polish population in Canada, which is able to unite in its ranks, under a common ideal, over ¾ of the organized community”.
The secretariat of the Federation remained in the law offices of B.B. Dubienski, honorary president of the Federation, and as the report states, “the room has been granted to us for no charge up to this time”.
In April 1939 the executive board sent appeals to all member organizations to carry out fundraising drives for the Polish National Defence Fund. All the member organizations of the Federation joined in the fund raising. Local committees were formed, and a central committee was established in Winnipeg. Many people sent donations independent of the campaign, mailing them directly to Warsaw, or to the consulates in Canada.
With the beginning of the war on September 1, 1939, the head executive board of the Federation released a statement directed to all its member organizations:
“The entire Emigration must stand together as one man under the banner of National Defense held high by the Federation of Polish Societies in Canada and the Central Committee for National Defense in Canada, and must remain in such a position until VICTORY!”.
The last convention, the Seventh, took place September 2-4, 1943, in Windsor. Discussions were mostly on the topic of activities related to the war effort. Motions were accepted and resolutions passed regarding German war crimes, demanding that steps be taken to prevent the continuation of such crimes, and punishment of the guilty. In the report to the Sixth Convention of the Federation we read:
“We can be proud of what has been accomplished thanks to ten years of work in the Federation…the platform for our decade-long activity was the preservation of loyalty to our new country, Canada, building a cultural life within the ranks of our organizations, fostering the Polish language, and co-operation on a cultural/educational level with the Polish Nation”.
The Federation exerted a powerful, positive influence on the development of Polonian organizational life. Thanks to its existence, consolidation tendencies grew among small, weak organizations, and organizational work prepared the community for action on a wider horizon in the Canadian Polish Congress.
(Excerpts from Benedykt Heydenkorn’s book titled: “The Organizational Structure of the Polish Canadian Community”)
The impressive contribution of Polish Canadians to Canadian society has been made by hundreds of thousands of Polish immigrants and their descendants who, according to the 1996 census, number almost half a million.
Many of the early Polish immigrants were members of the Watt and De Meuron military regiments from Saxony and Switzerland sent to Canada to help the British Army in North America, and several were émigrés who took part in the 1830 and 1863 insurrections against the Russian occupation of Poland. The first Polish immigrant, Dominik Barcz, is known to have come to Canada in 1752. He was a fur merchant from Gdansk who settled in Montreal. He was followed in 1757 by Charles Blaskowicz, who worked as deputy surveyor-general of lands. In 1776, army surgeon Auguste Francois Globenski arrived; his descendants played a prominent role in the St. Eustache community north of Montreal. A descendant, Charles Auguste Globenski, was elected to the House of Commons in Ottawa in 1875.
There were Poles in Selkirk’s expedition that attempted a settlement in the Red River Valley, but they apparently did not stay long.
In 1841, Casimir Stanislaus Gzowski from Poland arrived in Canada via the U.S.A. and for 50 years made numerous contributions in the engineering business, military and community life of Toronto and Southern Ontario, for which he was knighted by Queen Victoria.
Charles Horecki contributed in 1872 to the exploration and railway construction possibilities of the land from Edmonton to the Pacific Ocean, through the Peace River Valley. Today, a mountain and a body of water in British Columbia are named after him.
The first group-settlers were the Kaszubs of Northern Poland who escaped from Prussian oppression. They arrived in Renfrew County of Ontario in 1858, where they founded the settlements of Wilno, Barry’s Bay, and Round Lake. By 1890 there were about 270 Kaszub families working in the beautiful Madawaska Valley of Renfrew County, and contributing to the lumber industry of the Ottawa Valley.
The other waves of Polish immigrants in the periods from 1890-1914, 1920-1939, and 1941 to this day, settled across Canada from Cape Breton to Vancouver, and made numerous and significant contributions to the agricultural, manufacturing, engineering, teaching, publishing, religious, mining, cultural, professional, sports, military, research, business, governmental and political life of our country.
Some Polish-Canadians have been recognized by awards and appointments by the Queen, our governments, universities and prominent organizations. First, pilot-gunner Andrew Mynarski of Winnipeg should be mentioned. He was awarded posthumously the Victoria Cross for extreme valour in World War II. Recipients of the Order of Canada were: citizenship judge Irena Ungar, Group Captain Stefan Sznuk, missionary Oblate priests Rev. Anthony Hylla and Rev. Michael Smith, Rt. Rev. Monsignor Anthony Gocki of Regina, lawyer B. Dubienski of Winnipeg, former alderman and citizenship judge, Knight of St. Gregory, Peter Taraska of Winnipeg, multilingual radio station founder and broadcaster Casimir Stanczykowski of Montreal, Captain Andrew Garlicki of Ottawa, and W.W.II staff-sergeant of the Polish Army Jan Drygala of Oshawa.
In the legal profession, many lawyers are Queen’s Counsels, and some have been appointed judges, such as Their Honours Judge Allan H. J. Wachowicz of the Court of Queen’s Bench in Edmonton, and Judge P. Swiecicki, of the Superior Court of BC in Vancouver; Paul Staniszewski of Toronto and Montreal, now of the County Court of Windsor, and E.F. Wrzeszczinski-Wren of the County Court of Toronto.
The first Polish priest visited Polish immigrants in 1862 in Kitchener. The first church serving Polish immigrants was in Wilno, Ontario, built in 1875. In Winnipeg, Rev. Father Wojciech Kulawy, the first Oblate missionary who served the Polish immigrants in Western Canada, built the Holy Ghost Church in 1899, and founded in 1904 the first newspaper, a weekly called “Glos Kanadyjski”, followed by the “Gazeta Katolicka” in 1908.
The first Polish-Canadian Roman Catholic bishop is the Most Reverend Mathew Ustrzycki, who was consecrated in June 1985, auxiliary bishop of the Hamilton Diocese. In addition to 80 priests serving in 120 parishes, there are Polish-Canadian priests in many congregations and orders, such as the Franciscans, Jesuits, Redemptorists, Saletinians, Resurrectionists, Oblates, Michaelites, and Society of Christ.
These priests and sisters are performing a tremendous service to our society and have enriched the Polish-Canadian community with many churches, missions, homes, schools, and day-care centers.
Some missionary Oblate Brothers served among Canadian native peoples. One of them, Reverend Antoni Kowalczyk, led a very devoted life, and after his death his beatification process was initiated.
In the professions, Polish engineers and architects have a tremendous record of accomplishments dating from Sir Casimir Stanislaus Gzowski in 1841. During the early years of W.W.II, a group of Polish engineers were brought to Canada by the Federal Minister, C.D. Howe, to help in the war effort. This group made a very significant contribution then and in their professional life in many industries, for example: Jan Zurakowski of the Avro Aircraft Company in Malton, testing the Avro airplane. He was awarded in 1959 Canada’s top Aviation Award, the McKee Trophy; P. Wyszkowski, who was Chief Structural Engineer of Toronto’s Bloor Street subway; Dr. Tadeusz Blachut, of Ottawa, who worked with the National Research Council, and is a photogrammetric expert of world-renown; Z. Krupski who rose to the position of Executive Vice-Chairman of the Bell Telephone company of Canada; Mr. J. Norton-Spychalski who was a co-founder in 1949 of the Computing Devices of Canada.
In the medical and pharmaceutical sciences, hundreds of Polish physicians, surgeons, dentists, pharmacists, medical technicians, and nurses staff our hospitals and teaching institutions. Among them are Dr. S. Dubiski, Professor of Clinical Immunology at the University of Toronto; Dr. Antoni Fidler, a professor of the Faculty of Medicine at the University of Ottawa; and Dr. Stanley Skoryna, a surgeon, who was a researcher at McGill University and chief of the U.N. medical expedition to Easter Island (off Chile) in 1964.
Canadians of Polish descent have a long tradition of involvement in political life in Canada. Back in 1809, the first Pole of great significance, Dominik De Barcz, was elected to the Legislative Assembly of Lower Canada, that is, Quebec. In 1814, he took a seat in the Legislative Council and in 1837 in the Executive Council. The first Polish immigrant to become a Federal Member of Parliament was Alexandre Eduarde Kierzkowski. He was born in the province of Poznan, Poland and took part in the November Uprising. He was elected in 1858 to the Legislative Assembly of Lower Canada. In 1861 he was re-elected to the Legislative Assembly of Lower Canada, and after the creation of Confederation in 1867, he became a member of the House of Commons.
Charles August Maximilian Globenski followed suit by being elected to the House of Commons in 1875. There followed a 75-year hiatus before Dr. Stanley Haidasz became the first contemporary politician of Polish origin to be elected as a federal Member of Parliament in 1957. He later took a seat in the Senate. From 1972 to 1974 he held the position of Minister of State for Multiculturalism. The 1960’s turned out to be the most fruitful: in 1962 Raymond Rock and Stanley Korchinski were elected to the House of Commons; in 1968, Steve Paproski and Don Mazankowski. Later on, Don Mazankowski held the positions of Minister of Transport and Deputy Prime Minister. In 1979, Jesse Flis became a federal politician. The most recent Canadians of Polish descent elected to the House of Commons were Pat Sobeski and Stan Keyes (Kazmierczak).
The list of politicians of Polish descent at the provincial level is much longer. The most appreciated and admired even today is Elaine Ziemba, Ontario MPP and a minister in the NDP government of Bob Rae. Her commitment to the causes of the Canadian Polish community was extraordinary. Here is a partial list of politicians of Polish descent involved in politics at the provincial level: B. Poniatowski, Rev. D. Malinowski (Manitoba), Carl Paproski (Alberta), Ken Kowalski (Alberta), Walter Szwender (Alberta), Geo Topolinsky (Alberta).
Here is the partial list of aldermen of Polish descent: Chris Korwin-Kuczynski, Ben Grys, Tony Jakobek, Mrs. Boerma, Ms. Moszynski, and Councillor Peter Milczyn, Ward 5, Etobicoke-Lakeshore.
Some Polish-Canadians in the Federal Public Service and Provincial Civil Service received appointments in recognition of their longstanding service to the Canadian Polish community. The highest ranking in the Federal Service was Frank Glogowski, Vice-Chairman of the Immigration Appeal Board. Mr. Stan Zybala became Deputy-Director of the Multicultural Directorate. Irene Ungar of Toronto and Peter Taraska of Winnipeg became Citizenship Court Judges.
Every day brings new achievements of members of Canadian Polish communities. It is not possible to enumerate all of them. As an example we present just a couple of names of outstanding Poles contributing to Canadian society: George Radwanski was Editor-in-Chief of the Toronto Star for a few years and wrote the most important biography of the Rt. Hon. P.E. Trudeau; Mr. Poznanski (father of Mrs. J. Parizeau) was a prominent economist; Mr. Starowicz is one of the best journalists of CBC; Anne Mroczkowski is one of the best known TV journalists.
Casimir S. Gzowski was born in 1813 and died in 1898. He came of an ancient gentry family, not of high aristocracy, which was settled in the lands that were annexed by Tsarist Russia as a consequence of the Partitions of Poland. Educated with care, chiefly in the technical field, he was drafted into the Russian army at the age of seventeen. When the Rising of November 1830 broke out, he joined the insurgents and took an active part at the side of his compatriots. When the Rising was crushed a year later, along with the company to which he belonged, Gzowski crossed the frontier into Austrian Poland, seeking protection from Tsarist vengeance. Interned by the police, he was quartered along with his fellow insurgents in Trieste, and thence in 1834 deported to the U.S.A.
Undaunted by the initial difficulties to be faced on American soil, he began at once to work hard at English (unknown to him hitherto); and in a short time, by studying law, he succeeded in being admitted to the practice of law in the State of Massachusetts (1837). He dreamed, however, of reverting to the field of engineering, and therefore seized the first opportunity offered to become a civil engineer in railway and canal construction in Pennsylvania. In 1841 he moved to Canada to live, and was made Superintendent of Public Works for what is now Western Ontario by the Ottawa government. He lived first in London, but later in Toronto.
Gzowski remained in public service until 1848, gaining valuable experience and a thorough knowledge of the country. He soon made for himself a name among the leading engineers of the time. Succeeding years were to witness his work as Chief Engineer of one of the first railways linking up Montreal with the U.S.A., and again in the Harbour Works of the great St. Lawrence sea-port. In 1853, in partnership with A.T. Galt, D.L. Macpherson and L.H. Holton, he created a firm for railway construction, to be known as “Gzowski and Co.”, and began the building of the Grand Trunk line from Toronto to Sarnia. This firm was to play a significant role in Canadian railway history. When in 1873 the construction of the International Bridge across the Niagara was finished, his reputation as a front-rank engineer in the New World was assured.
Nevertheless, he did not confine his energies solely to professional duties. As a personal friend and admirer of Sir John A. MacDonald, he was closely connected with the Conservative Party, though never actively engaged in politics. An ardent supporter of imperial unity, he rendered yeoman service in the field of Canadian defence, and in the expansion of the national militia. For this he was named Lt. Colonel of the Forces, and in 1879 he was made an Hon. Adjutant to the Queen (A.D.C.). Eleven years later he was knighted.
During a number of years he sat in the Senate of the University of Toronto. One of the founders of Wycliffe College, he served for fifteen years as Chairman of the Board. He took an active part in the creation of Niagara Falls Park, and was made First Chairman of the Park Commission. Shortly before his death, he was asked by Ottawa to serve as administrator for the Province of Ontario during the illness of its Lt. Governor, Sir George Kirkpatrick.
Gzowski’s personal qualities, his professional skills and his devotion to public affairs in the land of his adoption made him one of the foremost citizens of the Dominion in the second half of the 19th century.
“But he said, ‘I must proclaim the good news of the kingdom of God to the other towns also, because that is why I was sent.’”
1. Where did Jesus go and what did he do (31)? How did the people respond to his teaching and why (32)?
2. When Jesus taught the word of God with authority, how did one man react and why (33-34)? How did Jesus demonstrate his authority (35-37)? What do we learn about who Jesus is?
3. Where did Jesus go after he left the synagogue (38a)? How did Jesus show his compassion to Simon’s mother-in-law (38b-39)?
4. At sunset, what did people do (40a)? How did Jesus care for each one who was brought to him (40b-41)? Why did Jesus forbid demons from revealing his identity?
5. Where did Jesus go and when (42a)? What did the people expect (42b)? What did Jesus reveal about his purpose in coming (43-44)? Why is Jesus’ coming good news? In this passage, how did Jesus reveal his authority and compassion as the Messiah?
“But he said, ‘I must proclaim the good news of the kingdom of God to the other towns also, because that is why I was sent.’”
In this passage Jesus proclaimed the good news of the kingdom of God. This is why Jesus came into the world. Why is the kingdom of God good news? In order to accept this good news, we need to understand what the kingdom of God is and why we need it. In the kingdom of God, Jesus is the king. Where Jesus reigns as king, there is the kingdom of God. The kingdom of God does not refer to a geographical area or a specific ethnic group; it includes all kinds of people in whom God reigns. The kingdom of God is open to anyone and everyone—whoever accepts Jesus as their king.

It seems that there are many kingdoms in this world. But spiritually speaking, there are only two kingdoms: the kingdom of God and the kingdom of Satan. The kingdom of God is characterized by love, light, life, truth and justice. On the other hand, the kingdom of Satan is marked by hatred, darkness, death, deception and injustice. People are suffering in the darkness, not knowing that they are under the power of Satan. This suffering is expressed in that they do not do what they should do, and do what they hate to do.

For example, a few years ago, in the Atlanta area, a criminal escaped in a courtroom and shot and killed the judge and others. He invaded a single woman’s home. She spoke to him with the truth of God and calmed him down. As they watched the news reports of what he had done, he was shocked. He did not think he had done such an evil thing; it was not him but another power. If students were able to control themselves and do what they should do, they would all get straight A’s. If we were able to control our emotions there would be no arguments. This shows us that there is an unseen force that compels us to do what we do not want to do. We cannot get out of the power of darkness by our own effort, even though we strongly desire to. Jesus has come to set us free from evil forces and rule over us with love and peace and justice. This is the good news.
Let’s examine in what sense we are bound by the power of darkness and accept Jesus as our king. Let’s learn how the kingdom of God comes.
First, the kingdom of God comes through preaching the word (31-32). After being rejected by his hometown people, Jesus went down to Capernaum in Galilee. Many of Jesus’ miracles took place in Galilee. In Galilee, Jesus proclaimed the kingdom of God, taught the living word of God, drove out demons, healed various kinds of sick people, cleansed a man with leprosy, made the lame walk, and gave sight to the blind. Most of all, in Galilee Jesus called ordinary, flawed men as his disciples and raised them as pillars of God’s salvation work. There are so many beautiful memories in Galilee. On the Sabbath, Jesus went into the synagogue and began to teach (31). People were amazed at his teaching, because his words had authority (32). “Authority” is defined as “power to influence or command thought, opinion, or behavior.” Jesus’ words pierced people’s hearts and made them aware of God’s presence. Why did Jesus’ words have such authority? There are several reasons. First of all, Jesus is the Son of God; his authority comes from his identity. In addition, he was a perfect man, and fully anointed by the Holy Spirit. Also, he taught God’s word as it is with a reverent attitude. Mark comments that his teaching was different from that of the teachers of the law (Mk 1:22). Teachers of the law taught traditions of the elders, which were based on commentaries on the Torah. They followed the historical trends of interpretation set by great teachers. Though the teachings were often profound, they were human interpretations, not the Scriptures themselves, and sometimes even nullified the word of God (Mk 7:13). The teachers of the law assumed that they knew the Scriptures, but they failed to learn God’s heart and Spirit (Mt 23:23). They replaced the living word of God with rules that enslaved people. But Jesus respected Scripture as the living word of God and taught it as the absolute truth which all human beings should live by.
When Jesus was tempted by the devil, he said, “It is written…It is said” and quoted Scripture exactly, knowing God’s heart and intention. Jesus defeated each temptation by depending on the word of God. Instead of traditional rules, Jesus taught God’s love, mercy and saving grace. Jesus’ words set people free from the power of sin, death and the devil. When they heard Jesus’ words, they experienced God’s presence, heavenly peace and joy. They found the meaning and purpose of their lives and real hope.
These days there are many people who do not see the Bible as the word of God. This is especially true in seminaries which have been influenced by the so-called “higher criticism.” “Higher criticism” designates the study of the history of origins, dates and authorship of the various books of the Bible. When pursued with reverence for God and a spirit of genuine scholarship it can be helpful to understand the Bible better. But when pursued without reverence for God—without recognizing the Bible as the inspired, living word of God—it discredits the Bible’s authority, leading people to see Scripture as just another set of human ideas. As a result, they lose the spirit of the Bible and its life-giving power. It can be compared to dissecting a fish. When we take it out of the water and analyze it part by part, we can learn something; but we lose its life. Man’s rational power should be guided by God’s Spirit and not try to rule over it. We can learn from Jesus a right view of God’s word. Jesus said, “The Spirit gives life, the flesh counts for nothing. The words I have spoken to you are full of the Spirit and life” (Jn 6:63). “The word of God is alive and active. Sharper than any double-edged sword, it penetrates even to dividing soul and spirit, joints and marrow; it judges the thoughts and attitudes of the heart” (Heb 4:12). When we accept God’s word with faith, it works in us powerfully. Paul said to the Thessalonian believers, “And we also thank God continually because, when you received the word of God, which you heard from us, you accepted it not as a human word, but as it actually is, the word of God, which is indeed at work in you who believe” (1Th 2:13). This also applies to how we teach or speak the word of God. When we speak as though speaking the very words of God, the authority of God’s word works in our messages and Bible teaching (1Pe 4:11a). When the word of God is proclaimed with God’s authority, the kingdom of God comes into people’s hearts.
The kingdom of God is righteousness, rest, peace and joy in the Holy Spirit (Ro 14:17).
Second, the kingdom of God comes when demons are driven out (33-37). As Jesus taught the words of God, a man possessed by a demon, an impure spirit, was exposed (33). The demon had hidden himself in one man and manipulated him invisibly. This man had probably heard many messages of the teachers of the law, but nothing happened. During their tedious expositions of traditions, he fell into a sound sleep. The demon was not threatened at all by the teachers of the law. But when Jesus preached the word of God, the demon felt greatly threatened. In a shock, suddenly he cried out, at the top of his voice, “Go away! What do you want with us, Jesus of Nazareth? Have you come to destroy us? I know who you are—the Holy One of God!” (34). Though the demon cried out, it seemed like the man cried out. This man’s identity had been stolen by the demon. The demon knew who Jesus was, and was terrified to have a relationship with Jesus. It is because he knew that ultimately Jesus would destroy him. 1 John 3:8b says, “The reason the Son of God appeared was to destroy the devil’s work.”
We can imagine how much this man suffered with such a vile, fearful, hateful, deceptive, vengeful, dirty and wicked spirit living inside of him. He was not free, but the captive of the impure spirit. Modern, educated western people tend to ignore the existence of demons, but so many people in the world experience their existence. A pastor friend from Africa, Isaacs Challo, told me how demons worked in his village through witch doctors. People were kidnapped, tortured and even murdered to satisfy the demons. The local police were afraid to get involved and everyone tried to ignore what was happening. Even in America, some criminal investigations document the involvement of grotesque ritual sacrifice as part of heinous crimes. These are related to devil worship. So we should acknowledge that human beings have a spiritual element. If we see people as only body and mind we don’t really understand them and cannot help them effectively. The Bible clearly tells us that man is both body and spirit. God created us with spirit so he could have fellowship with us. When God’s Spirit lives in us, we can be normal people, and we can be satisfied. But when we reject God, we become vulnerable to evil spirits. For example, when King Saul rejected God, he did not become spiritually neutral. As soon as the Spirit of God left him, an evil spirit came and tormented him day and night (1Sa 16:14). John Calvin said that God allows demons to exist to torment rebellious people. Society tries to deal with people suffering from demons in many ways: drugs, education, imprisonment, electric shock, atheistic psychological counseling and so on. But these treatments never work.
How did Jesus deal with a demon possessed man? Jesus said sternly, “Be quiet! Come out of him!” Then the demon threw the man down before them all and came out without injuring him (35). Jesus did not ask kindly. Jesus did not negotiate. Jesus did not argue. But he rebuked it with authority and the demon obeyed Jesus, like it or not. After the demon left, the man was set free. His identity was restored and he became a normal person. How, then, should he live after being set free? Should he live according to his own sinful desires? No. If so, the demon may return with others and make his condition worse than before (Lk 11:26). It was time for the man to receive Jesus as his king and live according to Jesus’ words. Then the kingdom of God would remain in his heart; he would be a blessing to others.
Here we learn the importance of acknowledging the existence of demons and how we can fight against them. Of course, we cannot say that everything bad that happens is a result of demonic activity. Also, we cannot explain all bad behavior as demon-possession. But we can say clearly that demons are real and Satan is working behind the scenes. And some people are possessed by demons. Demons can be driven out only in the name of Jesus. Our struggle is not against flesh and blood, but against the spiritual forces of evil in the heavenly realms (Eph 6:12). We all know of the tragic event that happened last week at Umpqua Community College in Oregon. A gunman entered a classroom and shot the professor to death. Then he asked all Christians to stand up. As they did, he shot them in the head. In this way, he killed nine people and wounded others before taking his own life. In response, President Obama has called for stricter gun control laws. On the other hand, the Lt. Governor of Tennessee suggested that Christians should buy guns to protect themselves from the persecution that is coming. But we should know that the real power behind this crime was the spirit of antichrist, the devil. The devil cannot be defeated by man-made laws or guns. Only Jesus can defeat the devil. It is time to proclaim Jesus through Bible study and to pray in the name of Jesus, especially on our campuses. Jesus can drive out the devil. Whoever accepts Jesus can receive the kingdom of God. This is the way to victory over the devil’s murderous spirit.
Verse 36 tells us that all the people were amazed and said to each other, “What is this? With authority and power he gives orders to impure spirits and they come out!” And the news about him spread through the surrounding area (37). People were filled with hope and joy because of Jesus’ victory over the power of demons.
Third, the kingdom of God comes through healing the sick (38-41). Jesus proclaimed the kingdom of God not only with his words, but also with his deeds. Jesus cared for people practically, according to their needs. Jesus left the synagogue and visited the home of Simon. Home visiting was an important part of Jesus’ ministry. Through home visiting we can understand people’s real problems. When Jesus visited Simon’s home he found a problem: Simon’s mother-in-law was suffering from a high fever. This cast a shadow over the home and made Simon depressed. They asked Jesus to help her. So he bent over her and rebuked the fever, and it left her. Immediately her temperature returned to normal, her swelling subsided, her congestion cleared, her headache disappeared, and she felt great. She got up at once and began to wait on them, serving a delicious lunch. Luke, a medical doctor, must have been impressed by this healing. In those days, many doctors treated fevers by draining people’s blood. Sometimes their cure was worse than the fever. But Jesus, with one word of rebuke, healed the fever immediately. Jesus’ personal care for one older woman made a great impression on Simon and many others. When sunset came, marking the end of the Sabbath, people brought to Jesus all who had various kinds of sickness. Jesus did not heal them all at once, en masse. Jesus laid his hand on each one, caring for each of them personally (40). Amidst Jesus’ healing ministry, many demons came out of people. They wanted to be very noisy, shouting, “You are the Son of God!” But Jesus rebuked them and would not allow them to speak. Jesus did not need their advertisement. As Jesus helped needy people one by one, the kingdom of God came into each one’s heart. This personal care for one person at a time reflects the heart of God. Let’s help needy people one by one, practically, as Jesus did, so the kingdom of God may advance one person at a time.
Fourth, the kingdom of God must be proclaimed everywhere (42-44). Jesus had worked hard all day long on the Sabbath, and then spent the evening caring for people one by one with great compassion. The next morning, it might have been hard for him to get up. But at daybreak, Jesus got up and went out to a solitary place to have fellowship with God personally. Personal fellowship with God is essential for God’s servant. It is the time to seek God’s direction and wisdom, to renew our spirit and strength, and to enjoy God’s love and mercy. As Jesus was having quiet time with the Father, those who had tasted his love and power shamelessly interrupted. They began to plead with him to stay there with them permanently. They wanted to enjoy Jesus all by themselves forever. We can understand them. When we receive the grace of Jesus, it is so sweet to our souls, and we just want to stay where we are with Jesus forever. But how did Jesus respond? Let’s read verse 43. “But he said, ‘I must proclaim the good news of the kingdom of God to the other towns also, because that is why I was sent.’” And he kept on preaching, crossing over to the synagogues of Judea (44).
Jesus did not follow the people’s demand. Jesus said that he must proclaim the kingdom in other towns also, because that is why he was sent. Jesus was guided by the Father’s will and followed the Father’s heart’s desire for him. Jesus knew that the Father was concerned about people all over Israel who were suffering under the devil’s influence. The Father’s heart was eager to bring liberation to them all. Jesus came as Savior for all the people of Israel, not only for the Galileans. Jesus came as the light for the Gentiles, not only the people of Israel. Jesus is the Savior of the world. Everyone in every nation needs Jesus and the kingdom of God. All people suffer most from the power of sin, death and the devil, regardless of nationality, gender or generation. That is why Jesus died for our sins, shed his blood on the cross and rose again; it was to liberate humankind from the power of sin, death and the devil. This is what all people need most urgently. We should understand Jesus’ heart. In addition to our own family, campus, church and nation, we should be concerned for all people of the world. Jesus constantly challenges us to look beyond where we are to the next family, to the next campus, the next community or nation. We also learn from Jesus that his motivation and purpose came from God, not from people. Whether Jesus was rejected or celebrated, he kept his eyes on the Father by having intimate fellowship with him. Jesus did what God wanted him to do, not what people demanded of him. Whether we are rejected or celebrated, we should fix our eyes on Jesus and continue to follow him in proclaiming the kingdom of God to the people of our times.
In this passage we learn Jesus’ heart’s desire to proclaim the kingdom of God to all people. Jesus did so by preaching the words of God and serving needy people. Let’s accept Jesus as our king and participate in preaching the word of God, caring for the needy, and praying for the people of the world.
On 28 September 2016, a group of five people calling themselves “Peace Pilgrims” entered a prohibited zone around Pine Gap, a US military base in Australia. They were arrested and tried for trespass. The maximum penalty for their act was seven years in prison.
The story of this action and its aftermath is told with care and sympathy by Kieran Finnane in a new book titled Peace Crimes: Pine Gap, National Security and Dissent. Finnane is a long-time resident of Alice Springs, a town in central Australia. Although she was aware of the nearby Pine Gap base, she had never paid much attention to the issues involved until the protesters took their action in 2016. With Peace Crimes, she has provided the most detailed account yet available of this form of protest in Australia and the response of the government to it.
I’m interested in this story for several reasons. In 1979, I became involved in the peace movement, with a special interest in nonviolent alternatives to military defence. I’ve studied the likely effects of nuclear war and followed disclosures about mass surveillance. Not least, for many years I’ve known one of the Peace Pilgrims, Margaret Pestorius, an incredibly knowledgeable and committed activist.
In the following, I first tell about Pine Gap and the Peace Pilgrims and then present a series of perspectives for understanding one or both of them. I’m omitting a lot of the detail and complexity of the story. For example, in addition to the group of five Peace Pilgrims, another Peace Pilgrim protested individually and was tried at the same time. For these and other aspects, and an engaging narrative, read Peace Crimes.
Beginning in the 1950s, the US government made arrangements with the Australian government to set up a number of military bases in Australia. Officially they are joint facilities, and in some bases today half the workers are Australians. However, Richard Tanter, who has carried out research on the bases, has a useful counter to the idea that they are genuinely “joint” facilities. He says that considering that the bases were built by the US government, their operations are paid for by the US government and their only functions are as part of a network of US military and spying facilities, it is reasonable to call them US bases to which Australian personnel have a degree of access.
Source: Richard Tanter, “Tightly bound“, GlobalAsia
For decades, the most important US bases were Pine Gap and Nurrungar in central Australia and North West Cape on the western coast of Western Australia. These days, with changing technology, Pine Gap is the most important base.
One part of the base receives and analyses data from US surveillance satellites that collect vast amounts of electronic communications from land, sea, air and space origins. These satellites are in geostationary orbits, which keep them permanently above the same location on the earth. Another part of the base intercepts transmissions from foreign satellites, especially Russian and Chinese ones. The base also is a relay station for signals indicating potential enemy nuclear missile launches, though this function is now redundant given that signals can go direct to the US via satellite-to-satellite transmissions.
Pine Gap is part of the Five Eyes network that sucks up electronic communications of all sorts, a massive surveillance operation that aims to collect everything sent via phone, email, social media, you name it. The so-called Five Eyes are the US, Britain, Canada, Australia and New Zealand. They share surveillance information, though the US National Security Agency plays the dominant role.
Over the years, a number of writers and researchers have exposed aspects of this highly secret network. One of them was New Zealand investigator Nicky Hager in his 1996 book Secret Power, which received relatively little public attention. In 2013, Edward Snowden leaked massive numbers of NSA documents to the media, generating international awareness of the extent of government surveillance.
Pine Gap’s surveillance capacities assist in US counter-terrorism and other military operations. US systems collect information about possible human targets. When a decision is made and an opportunity arises, drones are instructed to unleash missiles to destroy a target. Some of these drone attacks are in war zones such as Afghanistan; others are in places like Pakistan and Yemen. Drone killings can be called assassinations. Alleged enemies are not arrested and brought to trial, but simply killed. As well, quite a number of civilians die in the attacks. Via Pine Gap, the Australian government is implicated in a system of extrajudicial murder.
The most significant US bases were installed in the 1960s when Australia had a conservative government, run by the Liberal-Country Party coalition that had held power since 1949. The opposition Labor Party, at the time having a socialist and nationalist orientation, had a platform that rejected US bases. However, after Labor was elected in 1972, it did nothing to implement its bases policy. Later, after Labor lost office in 1975, opponents of the bases struggled with how to proceed.
In the early 1980s, there was a huge expansion of the worldwide movement against nuclear weapons, which invigorated sentiment against the US bases. Activists argued that the bases contributed to the possibility of nuclear war and made Australia a nuclear target. Indeed, to the extent that nuclear arsenals were “counterforce” — targeted at the enemy’s nuclear war-fighting facilities — Pine Gap was a prime target in a nuclear exchange. Without US bases, there was little reason for the Soviet military to aim nuclear missiles at Australia.
Australian anti-base activists argued that the goal should be to get the Labor Party to change its platform to again oppose the bases, and then to get the Labor Party elected. These hopes were forlorn. After Labor was elected in 1983, it took steps to give the impression of Australian partnership in running the bases, while more deeply integrating Australia’s military posture with the US’s.
By the late 1980s, the Australian peace movement was in steep decline. Then in 1989 Eastern European communist regimes collapsed. The Cold War was over, and the Soviet Union dissolved two years later. US bases in Australia fell completely off the public agenda, though they continued their crucial role in US nuclear war-fighting operations, surveillance of electronic communications, and information gathering for military operations in Afghanistan, Iraq and elsewhere.
After the 1980s, Australian peace movement activity was low-key except for huge surges in public opposition to foreign wars, including the 1990–1991 Gulf war and the 2003 invasion of Iraq. Some activists, though, maintained attention to US bases. There have been quite a few protests, including major ones at Pine Gap in 1983 (the Women’s Peace Camp), 1987 and 2002. Of special interest here are religiously motivated activists.
There is a long history of religious opposition to war. In countries with universal male military service, there have been resisters, those who refuse to participate, and many have been driven by their religious beliefs. In the US and several other countries, small numbers of activists have taken direct action against weapons systems, for example sneaking into military bases and using hammers to damage missiles. They are called ploughshares activists, because as Christians they take inspiration from passages in the Bible, such as this one from the book of Isaiah:
“He shall judge between the nations, and shall decide disputes for many peoples; and they shall beat their swords into ploughshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war anymore.”
Ploughshares actions, although occasionally damaging military equipment, are largely symbolic. The activists take full responsibility for their actions and do not attempt to evade arrest. They feel driven to bear witness against war. Some US activists have spent many years in prison. Their stories are documented in The Nuclear Resister, published for many years by Felice and Jack Cohen-Joppa, who I see whenever I visit Tucson, Arizona.
The 2016 action at Pine Gap was in the tradition of radical Christian peace action. The group of five protesters — Franz Dowling, Jim Dowling, Andy Paine, Margaret Pestorius and Tim Webb — expressed their commitment to the Biblical commandment “Thou shalt not kill” and, as well, adopted lives of voluntary poverty and service to others in need.
The contrast between their lives and the mainstream churches is stark. Mainstream Christianity has adapted to the surrounding culture and political system. Soldiers, arms manufacturers and political leaders might be Christians, but have accepted the need for killing, and indeed have supported the development and deployment of weapons systems with the potential for mass slaughter. The Christian vow of poverty — exemplified by the Biblical saying that “It is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God” — has been displaced by materialism. Some churches espouse the “prosperity gospel” that glorifies making money.
The protesters who called themselves “Peace Pilgrims” wanted to intervene against Pine Gap operations to hinder what they saw as its death-dealing. However, security around the base is tight. The area is actively monitored for trespass. The outer fence is easy to get through, but not the high inner double barbed-wire fence. In practice, the Peace Pilgrims were making a statement by the simple fact of going into a prohibited area.
The Pilgrims made careful preparations. To get as close to the base facilities as they could, they needed to walk through the night in rather treacherous territory. Andy prepared to film the base and their efforts. Margaret brought her viola and Franz his guitar so they could play a lament.
Peace Crimes provides plenty of fascinating detail about the Pilgrims’ preparations, action and arrest. They expected to be arrested, and they were. Then there was a different sort of drama: in the courtroom. The Pilgrims were charged under a piece of federal legislation called the Defence (Special Undertakings) Act. The maximum penalty was seven years in prison.
Finnane attended the case, which went on for days in the Northern Territory Supreme Court, and reported on it in the Alice Springs News.
The prosecution was led by a top-gun lawyer who did everything possible to achieve a conviction and push for harsh penalties, including imprisonment. The government was obviously doing what it could to deter anyone who might try to follow the Pilgrims’ lead.
The Pilgrims had little money and were unable to afford legal representation — so they represented themselves. Margaret and Andy led their effort to develop questions and an argument for the court. They received a little free general legal advice, but they preferred to run the case themselves, since that gave them the most freedom to conduct the case as they wished. A lawyer would have been more constrained.
In Peace Crimes, you can read about the legal machinations: decisions about the appropriate jurisdiction, choosing members of the jury, questioning of the judge’s objectivity, attempts to keep the proceedings closed, efforts to exclude evidence and witnesses, giving of evidence and cross-examination, and the subtle factors that determined what could be raised in testimony and what couldn’t. The Pilgrims pleaded not guilty, their grounds being that Pine Gap played an active role in committing crimes, namely in facilitating extrajudicial murder. Peace Crimes also provides fascinating information about the lives of each of the activists.
If you are familiar with any of the issues involved, you may have a view about Pine Gap, the Peace Pilgrims, or both. Here, I offer a variety of perspectives for looking at the issues. Each perspective can potentially offer insights.
Moral versus legal
The Pilgrims were driven by a deep sense of what is right and wrong, and they believed that military systems — especially those involved in foreign assassinations — are wrong. They confronted an opponent, comprising the government, the military and parts of the legal system, that justified its position based on law. The court case, described in detail in Peace Crimes, can be read as an extended conflict between morality and legality.
The Pilgrims defended their actions in terms of morality, and tried on every occasion to bring morality into the picture. This occurred when they were arrested and questioned, and it occurred in the courtroom. They liked to bring up Biblical examples of the breaking of unjust laws.
The prosecution, taking its cues from the federal government, attempted to exclude morality from the discussion. The prosecution repeatedly objected to testimony that brought up the Pilgrims’ motivations and instead focused on a narrow legal matter, whether they had knowingly trespassed on the prohibited area around the Pine Gap base. From the prosecution’s viewpoint, it was immaterial why the Pilgrims were there. All that had to be proved was that they were there, aware that they were breaking the law.
The interaction between the Australian government and the Peace Pilgrims can be seen as a power struggle. On the surface, it is a very unequal struggle. The government has the power to make and enforce laws, and has at its disposal police and prisons. Then there is the wider power of the US government and military, which supports the Pine Gap operation.
On the other side, the Pilgrims seem to have relatively little power, but this is deceptive. Why would the Australian government bother with an expensive trial against a seemingly harmless and nonthreatening group of activists who never had any realistic prospect of interrupting activities at the base? The reason is that the Pilgrims represented the potential power of citizen opposition. These few individuals posed no direct threat to Pine Gap operations but if their example were followed, a much greater threat might develop.
The Pilgrims, in their action, were setting an example. The government, by prosecuting them, was also trying to set an example.
Suppose you want to change the government’s policy on Pine Gap. How could you go about it? You might write scholarly articles, set up a newsletter, lobby politicians, join political parties and campaign for politicians who support your viewpoint. You might launch an online petition or form a citizens group. All these methods are what might be called conventional political action. In Australia, they are commonplace and widely considered acceptable.
At the other end of the spectrum, you might join with a few others to launch an armed attack on Pine Gap or, more easily, on its workers or on politicians supporting it. This approach can be called armed struggle or, by its critics, terrorism.
In between conventional political action and armed struggle are a variety of methods, including ostracism of politicians supporting the base, boycotts of companies supplying it, strikes by workers opposed to the base, sit-ins in parliament — and entering the restricted zone around the base, taking photos and playing music. These sorts of methods are called nonviolent action or, alternatively, civil resistance. They go beyond the routine and acceptable methods but refrain from any physical violence.
The Pilgrims were committed to this sort of action. It can make things difficult for authorities, because it involves noncooperation, yet avoids physical violence and so cannot easily be stigmatised as terrorism.
Within the nonviolence field, two approaches are commonly distinguished: principled and pragmatic. Principled nonviolence, associated with Mohandas Gandhi, is based on a moral commitment. Pragmatic nonviolence, associated with scholar Gene Sharp, is undertaken because it is seen as more effective than violence. The Pilgrims obviously fit into the principled camp. But this distinction is a bit academic in Australia, where all activists refrain from using arms.
Activist and researcher Stellan Vinthagen offers an insightful definition of nonviolence: it is without violence and against violence. The Pilgrims, like most people in their daily lives, did not use physical violence. However, unlike most other people, they acted against violence, namely against Pine Gap and its role in military operations.
Nearly everyone says they are against war. Those who support military defence say it is needed to deter war. Many soldiers are strongly in favour of peace.
The question is not whether to oppose war, but how. For those in what is called the peace movement, who question the current military posture, there have been a variety of views about goals. Some oppose use of Australian troops in foreign wars, as in Afghanistan and Iraq. Some support the Australian government cutting ties with foreign powers and having an independent defence policy. Others favour disarmament. Yet others support development of a nonviolent defence system.
Despite this wide range of visions, the peace movement has, remarkably, mobilised large numbers of Australians to protest against war and war preparations — but only on some occasions, such as just before the 2003 invasion of Iraq. In between such mobilisations, few have maintained their activism.
What about strategy for the peace movement? How will it achieve its goals? Some favour public education, aided by critical analyses of military preparations. Others pursue peace movement goals by lobbying politicians and joining political parties. More visible are public protests. Another approach is to undermine the war system by challenging its roots, including the state, militarism and patriarchy.
In this context, the Pilgrims pursue peace by prophetic witness. Through their actions, they show their commitment and their vision of an alternative world. They are less worried about effectiveness than being true to their beliefs.
Steven Bartlett, a philosopher and psychologist, made an exhaustive study of evil, which he uses in a non-religious sense to refer to the propensity of humans to harm each other and the environment that supports their life. In his epic book The Pathology of Man he traverses a wide range of classic writings about disease, ethology, psychology, genocide, ecological destruction and war. His central conclusion, suggested by the title of his book, is that the human species is pathological, namely having the characteristics of a disease. He argues that aspects of human thought and behaviour are so dysfunctional that they are a danger to survival, yet most humans participating in damaging activities are psychologically normal.
One of Bartlett’s case studies is war. He observes war preparations and the willingness of humans to harm each other, face to face or remotely. War preparations involve only a fraction of the population. What is significant is that so few people do anything to resist. Bartlett concludes that most people do not want to stop war preparations and war. This is a testament to the pathology of the human species.
The stark contrast between the Peace Pilgrims and the power of the state used to restrain them is compatible with Bartlett’s analysis. Although there have been large protests at times, most Australians have been content to support or at least tolerate Australian military preparations and their links to foreign wars and assassinations.
One feature of this human shortcoming, according to Bartlett, is the low level of most people’s moral intelligence. Very few develop a strong feeling of disgust about cruelty, violence and other forms of human evil and have the conviction to act on their beliefs. Bartlett’s analysis suggests that the Peace Pilgrims are among these few with high moral intelligence.
From the perspective of communication and the media, the role of Pine Gap and the challenge by the Peace Pilgrims can be seen from several angles. One obvious point is that the role of Pine Gap, or even its existence, receives very little attention. Arguably, Pine Gap is Australia’s most important target in the event of nuclear war, and, if Australia lacked any foreign bases, the country might not be thought worth targeting at all. Yet this existential issue is seemingly off the agenda in the mass media.
No doubt the reluctance to cover Pine Gap-related matters is in part due to the two major political parties having the same stance, which means there are no significant political disagreements to report on. The government’s draconian laws restricting media coverage of matters of “national security” play a role as well. Government secrecy about the role of US bases makes reporting more difficult. Also important is the absence of a strong peace movement. Social media are far less inhibited, but even there, Pine Gap is not a major issue.
The Peace Pilgrims, and other direct actions at Pine Gap, provide a news angle. Unusual events, especially arrests, are newsworthy. Imagine that the Pilgrims had asked, “What can we do to generate attention to US bases?” Their protest would have been as good an answer as any.
Finally, there is the important role of Kieran Finnane, the journalist who reported on the protests and trial and whose book Peace Crimes provides an engaging introduction into the issues — and stimulated me to write about it. Those who seek media coverage often want the largest possible audience, but just as important is the depth of impact, which can be influential when only a few individuals are affected. The Peace Pilgrims had a loyal following, in part because of the government’s heavy-handed response. Their action, as a form of communication, was not widely covered but the coverage it did receive has been quite influential.
Jonathan Haidt has written an insightful book titled The Righteous Mind. In it, he explains research by himself and collaborators concerning what he calls “moral foundations.” These are values that influence what people think is right or wrong. Haidt identifies six principal moral foundations: care, fairness, liberty, authority, loyalty and sanctity. Each one has played a role in human evolution.
Consider care, the value people place on protecting and nurturing others. The protection and support that parents give to their children has obvious survival value. In many cases, people expand their sense of caring to those outside their immediate family or tribe, as when a person risks their life to save a stranger.
Haidt argues that the influence of moral foundations operates on each person’s intuitive, fast acting mind, usually without conscious awareness. Through ingenious experiments, he has shown that people make moral judgements intuitively and then try to justify them with rational arguments, which are sometimes highly contorted. In other words, people commonly reach conclusions quickly and automatically and only justify them later. More intelligent people can be better at coming up with rational-sounding explanations for their intuition-driven choices.
Haidt’s framework can be applied to the contrasting views about Pine Gap and the Peace Pilgrims. Each of the six moral foundations is relevant, but they are applied in quite different ways.
Care for others is a key driving force for the military establishment: the care is for those being defended from enemies. The Peace Pilgrims, in contrast, direct their care concerns to the victims of drone attacks and to the world population threatened by wars, especially nuclear war.
Haidt in The Righteous Mind is especially interested in differences between US liberals and conservatives. He found that liberals draw more from the moral foundations of care, fairness and liberty whereas conservatives draw more evenly from all six foundations. Consider authority, a value commonly associated with conservatives. The military is based on obedience to the authority of the military hierarchy and more generally to the authority of the government. The prosecutions of the Pilgrims were backed by the authority of the state, as manifested in the legal system.
Arguably, the Pilgrims were also drawing on authority. However, in their case, the authorities to which they responded were God and their own consciences.
Another moral foundation is sanctity, which can be expressed in rules for eating and hygiene. For example, many people find the eating of the flesh of certain animals to be disgusting. The Pilgrims might be said to be driven by their concern for the sanctity of human life, including individuals killed, far away, in drone strikes. The role of sanctity for the prosecutors does not seem so obvious until we think of Pine Gap as a sacred territory. Authorities were alarmed about the Pilgrims transgressing on the Pine Gap prohibited zone, and prosecutors took great pains to prevent images of the area surrounding the base being made public or even being seen in open court. It seems as if Pine Gap is analogous to a church; entering its grounds and taking graven images are a sacrilege to its holy mission.
It would be possible to consider each one of the six moral foundations to see its role in the thinking and actions of the Pilgrims and the defenders of Pine Gap. Each foundation plays a role, but with different anchors. A key point is that the influence of moral foundations is usually unconscious, providing an emotional drive for particular thoughts and actions often without individuals being aware of the source of their thoughts and choice of actions. It is fascinating to imagine that the careful, and sometimes torturous, legal argumentation presented in the trial is a rationalisation for choices influenced by unconscious commitments about what is right and wrong.
When a powerful group does something that others see as wrong, the group can take various steps to reduce the level of public outrage. For example, after the 1991 Dili massacre, when Indonesian troops opened fire on peaceful East Timorese protesters at Santa Cruz cemetery, the Indonesian government and military took steps to reduce international concern. They tried to cover up the existence of the massacre, denigrated the protesters, minimised the scale of the killing, set up investigations and gave minimal sentences to a few low-level perpetrators, and intimidated the surviving East Timorese population.
Despite these efforts, the Dili massacre triggered a large increase in international support for East Timorese independence. The massacre, intended to subjugate the resistance to its rule over East Timor, backfired on the Indonesian government. Perpetrators of a wide range of injustices, from sexual harassment to genocide, use the same outrage-management techniques as those used following the Dili massacre.
The same set of tactics can be observed in relation to Pine Gap, which some people might see as contributing to a number of injustices. The key tactic is cover-up: the intense secrecy about the base and its functions and activities serves to reduce public concern. In relation to drone assassinations, there is an additional tactic: devaluation of the targets, who are portrayed as dangerous terrorists. Then there is the tactic of reinterpretation, namely providing a benign explanation for actions. Defenders of drone killings never use the word assassination. They claim that few civilians are killed, using the euphemism “collateral damage.” Finally, anyone who challenges the programme may be subject to intimidation. This is where the Defence (Special Undertakings) Act comes into play, with its severe penalties for even trivial offences.
The arrest and trial of the Pilgrims can be seen as a form of intimidation of protest, deterring anyone who might follow their example. However, the arrest and trial of the Pilgrims were potentially a new source of public outrage, so it is to be expected that the same sorts of tactics would be used by the government. The tactic of cover-up is most obvious in the concerted attempts by the prosecution to exclude evidence about Pine Gap activities.
The tactic of devaluation is apparent in the prosecution of the Pilgrims as serious criminals who should serve time in prison for their actions. A key tactic of reinterpretation is the assumption underlying the prosecution that the case is about obeying the law, with the possibility of questioning the law off the table.
One of the methods used by powerful perpetrators to reduce outrage from their action is to use official channels to give the appearance of justice. In the case against the Pilgrims, the legal system itself was the most important official channel. By going to court, the prosecution might be seen to reassure observers that it was ensuring justice — even though the legal process in this case was one-sided, with the government throwing enormous resources into the case and using its power to restrict testimony.
Powerful perpetrators do not have it all their own way. The Dili massacre illustrates how attacks can backfire on the perpetrators. To counter the tactics commonly used, challengers can use counter-tactics. They can expose the action, validate the targets, interpret the actions as unfair, avoid official channels and instead mobilise support, and resist intimidation.
The Pilgrims and their supporters used all of these counter-tactics. To counter cover-up, they publicised their arrest and trial.
To counter devaluation, the Pilgrims had only to describe their beliefs and activities: their lives of voluntary poverty and service undermined the prosecution’s portrayal of them as dangerous threats. Furthermore, they organised to get famous and not-so-famous people to write to the Australian Attorney-General requesting that the charges dropped — and to have the letter published in the Saturday Paper.
To counter reinterpretation, they described the prosecution as a gross overreaction, as itself unjust. Rather than relying solely on legal defences, they mobilised support. Finally, to counter intimidation, they valiantly resisted throughout the entire case, refusing to capitulate.
In light of the different methods used by the government and the Pilgrims, did the arrests and prosecution backfire on the government, drawing more attention to Pine Gap and resistance to it than might otherwise be the case? That is hard to judge because there is no easy way to guess what might have happened had the government decided not to press charges. In any case, the issue has not gone away. Pine Gap continues its activities and the Pilgrims, and others, bide their time.
Reading Kieran Finnane’s book Peace Crimes inspired me to write something about the issues it raises. One issue is Pine Gap and military bases more generally. Another is the Peace Pilgrims and their principled challenge to military systems. Yet another is the existence of different ways of understanding protests against Pine Gap.
The dominant mainstream framing is that Pine Gap is a valuable part of Australia’s defence and that the Pilgrims, however well intentioned, should not be permitted to threaten the base’s security. Then there is the peace-movement framing, seeing Pine Gap as part of the US military machine that endangers lives around the world. It is useful to understand these positions and to be aware that they are ways of understanding Pine Gap and the Pilgrim challenge — but not the only possible ways. There are many others, including peace movement strategy, the contrast between moral versus legal imperatives, the role of human evil, and outrage management tactics.
Is there a best way of understanding Pine Gap and the Pilgrims? It all depends on your purpose. If you want to pass judgement, some perspectives are more useful than others. If you want to know what you might do to take action, that’s another matter. It is quite useful to draw a key insight from the study of moral foundations, namely that people commonly form a judgement based on their intuitive response and then subsequently find or create rational-sounding justifications for their views. The implication is that it can be extremely difficult to change someone’s mind by providing evidence and rational arguments. When judgements are grounded in gut reactions, changing them usually requires something other than reason.
In the case of Pine Gap and the Pilgrims, a key judgement is whether it is worth paying any attention to them at all. Because there is little mainstream media coverage, many people assume nothing important is happening. If you decide there is, and you want to know more, then it is valuable to seek information from a variety of perspectives. One crucial source is Kieran Finnane’s Peace Crimes.
P.S. The Peace Pilgrims were found guilty. The prosecution had called for imprisonment but the judge instead imposed fines of a few thousand dollars each. For the Pilgrims and their supporters, this was good news.
For assistance and valuable comments, thanks to Cate Adams, Sharon Callaghan, Jack Cohen-Joppa, Kieran Finnane, Margaret Pestorius, Yasmin Rittau, Richard Tanter and Tom Weber. | <urn:uuid:6b7276f6-ce42-4f8e-9891-76ba0de89e81> | CC-MAIN-2021-21 | https://comments.bmartin.cc/category/activism/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00014.warc.gz | en | 0.968097 | 6,569 | 2.5625 | 3 |
State-level race and ethnicity data can be hard to find if you are looking to federal government sources like the Bureau of Justice Statistics (BJS). The Children's Defense Fund is a 501(c)(3) nonprofit organization. 25 Important Deadbeat Dads Statistics (Brandon Gaille, May 24, 2017). Fact sheets contain background information on a variety of human services topics.

About 6 percent of children who were Asian alone were of mixed ancestry (Appendix 3). Safety net programs and tax credits lift millions of children out of poverty each year. Children are considered poor if they live in a family with an annual income below the Federal Poverty Line of $25,701 for a family of four, which amounts to less than $2,142 a month, $494 a week or $70 a day (see Table 3). Poor children are more likely to have poor academic achievement, drop out of high school and later become unemployed, experience economic hardship and be involved in the criminal justice system. Analysis of data and statistics can help identify trends, make comparisons, and provide empirical evidence to support the implementation of child welfare practices.

On May 28, 2020, the U.S. Census Bureau released Custodial Mothers and Fathers and Their Child Support: 2017. The report includes demographic and income data about custodial parents, and details child support income for custodial parents living below the poverty level. The U.S. Government fiscal year begins on October 1 and ends on September 30.

The percentage of the child population that is non-Hispanic Black has stayed relatively constant since 1980, at about 15 percent; this figure is expected to decline only slightly further by 2020, to 14 percent (Appendix 1).
 | <urn:uuid:6b7276f6-ce42-4f8e-9891-76ba0de89e81> | CC-MAIN-2021-21 | https://comments.bmartin.cc/category/activism/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00014.warc.gz | en | 0.968097 | 6,569 | 2.5625 | 3
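The monthly, weekly, and daily figures above are straightforward divisions of the $25,701 annual threshold. The short Python snippet below reproduces them; it is only an arithmetic sanity check, not any official poverty-measure methodology:

```python
# Federal Poverty Line for a family of four, as quoted in the text.
ANNUAL_THRESHOLD = 25_701

per_month = ANNUAL_THRESHOLD / 12   # about 2141.75
per_week = ANNUAL_THRESHOLD / 52    # about 494.25
per_day = ANNUAL_THRESHOLD / 365    # about 70.41

# Rounded to whole dollars, these match the figures in the text:
# roughly $2,142 a month, $494 a week, and $70 a day.
print(round(per_month), round(per_week), round(per_day))  # 2142 494 70
```

Note that the text says income below the line amounts to "less than" these figures because the rounded values sit at or just above the exact quotients.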
According to U.S. Child Support Statistics (2018), out of the 13.4 million custodial parents living in the United States, almost half have a child support agreement in place. Almost 90% of these arrangements are determined by the court, while a small percentage are never reported. The following child support statistics are provided by the United States Census Bureau as of 2010. From 2001 to 2012, the federal Office of Child Support Enforcement reported a 50 percent increase in child support collections, up from $21 billion. The 2017 Look-up should be used to calculate child support amounts from November 22, 2017 onward, and while physical child support offices are closed to customers and visitors, services continue to be provided over the telephone and internet.

Child poverty is related to both age and race/ethnicity. Nearly 1 in 6 children lived in poverty in 2018, nearly 11.9 million children (see Table 2). More than 25 percent of Black children were poor in 35 states and the District of Columbia in 2018; Hispanic children, in 29 states; and American Indian/Alaska Native children, in 20 states. In 2016, non-Hispanic children of more than one race constituted roughly 4.2 percent of the total U.S. child population, an increase of 2 percentage points from the 2000 census (Appendix 1). Less than 1% of welfare funds are associated with fraudulent activity; an estimated $131.9 billion is spent by the government on welfare each year; and children, the disabled and the elderly constitute the majority of public benefit recipients. We can help millions more children by improving those programs now, but to finish the job, this nation must reverse policies that deny credits and other benefits to children and parents in immigrant families.

When it comes to single-parent households, the typical perception is that the father has skipped out on the family for some reason, leaving the mother to raise the children. In reality, 20% of single mothers have made agreements with fathers outside of the court system to pay for child needs. Child support is a big economic issue in the Black community. When examining various statistics on abortion in 2020, it's clear that the nation's justices and legislators are out of touch with how most Americans feel.

It's nearly 6:30 a.m. and the sky is just waking up in Queens, N.Y. Darnell's days are long and they take a lot of patience: to get to school, to get to the next meal, to get to his favorite part of the day, football practice, and to turn around and get back to the shelter. The school bus has left, and now his family must make a 90-minute trek from the shelter where they are staying to Public School 76. Darnell's mom Sherine is a home health care aide who volunteers at school on her days off. She's a taxpayer. Despite all these contributions, professionally and in her children's lives, her voice breaks when she talks to a New York Times reporter. "I feel like a failed parent," she says, adding, "I should have been able to provide everything that they need." But Sherine hasn't failed; America has.

Other figures and notes: nearly 4,300 of the 9,700 were labeled child molesters; from 1990 to 1997 the white birth rate (defined as the percentage of women who gave birth) declined 9 percent, continuing a …; children with ASD are more prone to suffer from epilepsy; there is no epidemic of fatal police shootings against unarmed Black Americans, although ideally officers would never need to take anyone's life; for non-fatal assaults with recorded race, 6.5 million victims were white non-Hispanic, 4.3 million Black, 2.3 million Hispanic and 0.4 million other (non-Hispanic), and for 3.8 million the race was not recorded. Inconsistent definitions of rape, and different rates of reporting, recording, prosecution and conviction for rape, create controversial statistical disparities and lead to accusations that many rape statistics are unreliable or misleading. Sustainability can never be fully achieved if racism and discrimination are allowed to fester within society; these issues, among others, lie at the core of social and economic disparity. (July 8, 2020, 5:00 p.m. UTC)

Practice Indicator Reports are a collection of data elements that allow DCS to monitor the effectiveness of its practice model, and annual reports to the Legislature under AB 1811 and Family Code 17556 are available for 2019 and 2020. The Child Welfare division works to protect children against abuse and neglect, find permanent homes for Louisiana's foster children and to educate the … Updated population controls are introduced annually with the release of January data, and the overview looks at the data at a national level. Stats for Stories (November 22, 2020), National Family Week, November 22-28, 2020: according to the 2019 American Community Survey, average family size has … For all statistics and references, download the full statistics PDF, or download the Equipping the Next Generation of Children's Advocates PDF.

Sources: America's Children: Key National Indicators of Well-Being, 2017 [Tables POP1 and POP3]; http://www.childstats.gov/americaschildren/tables.asp; https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml; http://www.census.gov/main/www/cen2000.html; https://www.census.gov/programs-surveys/decennial-census/decade.2010.html. 7315 Wisconsin Avenue, Suite 1200W, Bethesda, MD 20814, 240.223.9200.
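Taken at face value, the custodial-parent percentages quoted above imply rough headcounts. The Python sketch below treats "almost half" as exactly 50% and "almost 90%" as exactly 90%, so the results are only back-of-the-envelope approximations:

```python
# Figures as quoted in the text; the "almost" qualifiers are dropped
# here, so these are rough estimates only, not official counts.
custodial_parents = 13_400_000   # custodial parents in the U.S.
share_with_agreement = 0.50      # "almost half" have an agreement
share_court_determined = 0.90    # "almost 90%" of those agreements

with_agreement = custodial_parents * share_with_agreement
court_determined = with_agreement * share_court_determined

print(f"with agreements:  {with_agreement / 1e6:.1f} million")   # 6.7 million
print(f"court-determined: {court_determined / 1e6:.2f} million")  # 6.03 million
```

So on the order of six to seven million custodial parents have agreements, the large majority of them determined by a court.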
Moreover, children in families benefiting from the EITC have higher scores on reading and math tests and are more likely to go on to college and have higher earnings as adults. The proportion of non-Hispanic Asian and Pacific Islander children has grown steadily over the past few decades, from 2.0 percent of the child population in 1980 to 5.1 percent in 2016; however, estimates before and after 2000 are not directly comparable (2000 & 2010 Census Summary File 2 [Table QT-P1]; Appendix 1). In 2010, 68 percent of Hispanic children were of Mexican origin, which was 16 percent of the total child population. Data on the child population by race and Hispanic origin from 2000 through 2016 are available at http://datacenter.kidscount.org/data/tables/103-child-population-by-race?loc=1&loct=2.

25% of single mothers with full custodial support don't receive any child support because the father cannot afford to pay it, and some single mothers do not have any child support orders whatsoever. Many custodial parents did not receive all of the child support they were owed, while 69% received at least partial payments. Regular payment of child and medical support provides decreased conflict between parents. Be sure to access the child support Table Look-up that is relevant to your situation. The government is extending the Coronavirus Supplement until 31 December 2020, and there are also changes to income support payments. Key Hotline data and year-to-date adoption trending data provide a quick overview of child abuse and neglect reporting.

Children who grow up poor are less likely to finish high school and more likely to experience obesity, stunted growth or heart disease as adults. More women than men are dependent on food stamps, and there are approximately 12.8 million Americans on welfare. In 2018 the poverty threshold for a couple with two children was a shared income of $25,000 a year. Child support also interacts with the gender wage gap between men and women. As the nation emerged from Jim Crow, 25 percent of Black children were born to unwed mothers. In recent polls, women favor Joe Biden by 25 points, while men favor Donald Trump by three. What the "black-on-black crime" fallacy misses about race and gun deaths: About US is an initiative by The Washington Post to …
 | <urn:uuid:c2c68ca4-2d27-4c40-bb7b-9eeb1b5c9f17> | CC-MAIN-2021-21 | https://www.vakrenordnorge.no/q5138mwn/5902e1-child-support-statistics-by-race-2020 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00015.warc.gz | en | 0.942151 | 4,312 | 2.71875 | 3
April 11, 2017 by Melissa Muldoon 11 Comments

Piacere is a friendly little word that sometimes packs a punch! It corresponds to the verb "to like" in English, but it is used in a different way: in Italian, the thing that is liked is the grammatical subject, and the person who likes it is an indirect object. This is the opposite of how we express "liking" things in English, and it causes confusion for language learners. So, if you want to get to grips with piacere, you'll need to stop liking things and start being pleased by them! "Mi piace il cane" means "I like the dog," but remember we are flipping the order of the subject and the object, so it is really "The dog is pleasing to me." Likewise, in "Non piaccio al cane" we are basically saying "To the dog I am not pleasing": piaccio (I am the one who is pleasing, or not pleasing in this case) is the SUBJECT, and al cane (to the dog) is the INDIRECT OBJECT.

We'll start by looking at how to use "to like" in the present tense. Because the thing liked is the subject, you use indirect object pronouns with piacere: Italians put one of the indirect object pronouns (mi, ti, Le, le, gli, ci, vi, or gli) before the verb, at the beginning of the sentence, to denote to whom the thing is pleasing. Like this: "Mi piace andare in bicicletta." A further complication is that, if the subject of the sentence is plural, you need to remember to change the verb from the third person singular (piace) to the third person plural (piacciono): "Mi piacciono gli spaghetti"; "Mi piacciono le fragole" (I like strawberries; strawberries are pleasing to me). Remember once again: biscotto (singular) or biscotti (plural) is the SUBJECT of the sentence, and I am the indirect object. An infinitive counts as a singular subject: "Mi piace scrivere il blog Studentessa Matta e parlare con voi della lingua italiana" (in a sentence like "Mi piace disegnare," disegnare is the thing being liked). You can also name the person who is pleased with a plus a noun or stressed pronoun, for instance a me or a Giovanni: "Ai miei genitori piace il golf" (my parents like golf). You may hear Italians say: a me mi piace. And people can like people: "Io piaccio a Marco" = Marco likes me.

Piacere is also what you say when you meet someone. Marco and Gianluca are new co-workers who are meeting for the first time. "Ciao, sono Marco, il tuo nuovo collega." The two men shake hands and Gianluca responds by saying "Piacere" (it's a pleasure!).

In compound tenses such as the passato prossimo (present perfect), piacere takes essere as its auxiliary, and the past participle agrees in gender and number with the thing that is liked: "Mi è piaciutO il vestito" (vestitO is a masculine noun); "Mi è piaciutA la caramella" / I liked the candy (caramellA is a feminine noun). In the plural past tense the participle is either piaciutI or piaciutE. Don't you just LOVE Italian!?!? A few notes on the other tenses: the condizionale passato is irregular; the congiuntivo passato is made of the present subjunctive of the auxiliary and the past participle; the trapassato remoto is also irregular, made of the passato remoto of the auxiliary and the past participle, and the remoteness of this storytelling tense makes it a bit awkward with piacere. The participio passato of piacere does not have a purpose outside of its auxiliary function. Some example sentences from the lesson, in English: I liked the pasta that time at your house, very much. Paolo had always liked to read. I would have liked you had you not been rude. I had hoped that my parents had liked you. I doubt that I won't like your handmade spaghetti. After they met you and they liked you, they invited you to enter. Will Carlo and Giulia like each other? I think that Carlo and Giulia liked each other. Tomorrow we will know if I will have liked your pasta.

Esercizi: fill in the blank with the singular or plural form of piacere in the present tense.
1. Mi ________ i fagiolini.
2. Ti ________ mangiare gli spaghetti.
3. A Fabrizio e Stefano non ________ andare a fare la spesa!
4. ________ (lo zio / i bambini)

Fase 3: Pratica la coniugazione dei verbi (practice conjugating the verbs). Print this page, cut out the verbs you need, and create your own list of "mi piace" (I like) and "non mi piace" (I don't like). Ti piace questo esercizio sul verbo "piacere"? (Do you like this exercise on the verb piacere?)

From the comments: "Mi piace la tua lezione" (I like your lesson). "Hi, thanks for the resources." "Me too!" "Sono contenta che ti sia piaciuto il post!" (I'm glad you liked the post!) "Non vi piace l'italiano!?!?" (You don't like Italian!?) "If I remember correctly, it's regular, first conjugation." "Argh! Warning: slow-moving brain in progress!" "Non è facile, ma nulla vale la pena mai" (it's not easy, but nothing worthwhile ever is). I'll leave you with one last thought: it takes practice, but getting pleased by things soon becomes second nature.
Mi piace il gelato. I had liked the pasta a lot but I was no longer hungry. Indicativo Imperfetto: Imperfect Indicative, Indicativo Passato Prossimo: Present Perfect Indicative, Indicativo Passato Remoto: Remote Past Indicative, Indicativo Trapassato Prossimo: Past Perfect Indicative, Indicativo Trapassato Remoto: Preterite Perfect Indicative, Indicativo Futuro Semplice: Simple Future Indicative, Indicativo Futuro Anteriore: Future Perfect Indicative, Congiuntivo Presente: Present Subjunctive, Congiuntivo Passato: Present Perfect Subjunctive, Congiuntivo Imperfetto: Imperfect Subjunctive, Congiuntivo Trapassato: Past Perfect Subjunctive, Condizionale Presente: Present Conditional, Condizionale Passato: Perfect Conditional, Infinito Presente & Passato: Present & Past Infinitive, Participio Presente & Passato: Present & Past Participle, Gerundio Presente & Passato: Present & Past Gerund, Learn to Conjugate the Italian Verb Essere, To Finish, Complete or End: The Italian Verb Finire, To Leave or Depart: Conjugation of the Italian Verb Partire, To Have: How to Conjugate the Italian Verb Avere, To Live Somewhere: How to Conjugate and Use the Italian Verb Abitare, To See: How to Conjugate and Use the Italian Verb Vedere, To Eat: How to Conjugate the Italian Verb Mangiare, To Know in Italian: How to Conjugate the Verb Sapere, To Want: How to Conjugate the Italian Verb Volere, To Play: How to Conjugate the Italian Verb Giocare, How to Conjugate the Verb Lavorare in Italian, To Come: How to Conjugate the Italian Verb Venire. I like pasta. Because the subject (gli spinaci) is plural, the verb is in its loro form. 1. Spaghetti are likable to me. Wait, what? Sì, l’Italia mi piace! Because the past participle is irregular, all tenses made with it are irregular. Opinions English Español Italiano Français: Italian Verb "piacere" English Verb List. Vi piace la pizza. For example: Mi piacciono i biscotti. But, you can also say: L’italia mi piace. 2. (Yes, Italy pleases me!). 
Vi piace la pizza. 2. Reading is likable to Paolo. Over 100,000 English translations of Italian words and phrases. Ciao Steve, I’m glad you found the post of the verb piacere helpful. Piace is a conjugated form of the verb placer. As soon as Paolo liked reading when he was little, he never stopped again. Don’t forget to subscribe to the Studentessa Matta Blog to receive notifications of posts, new youtube videos, newsletters and exclusive language tips via email. I thought that Giulia liked Paolo. This is exciting news! After Carlo and Giulia had liked each other, they made them marry. The subject of the sentence is the person/the object that we like. We already know that “to me” in Italian is “mi”. Paolo would have liked me had he not been in love. 1. Translation piacere. You may hear Italians say: a me mi piace. Translation Spell check Synonyms Conjugation More Made of the present conditional of the auxiliary and the participio passato. I’m doing Italian Rosetta Stone, and if only i had THIS type of explaination behind everything…it would be so much better! 2. Però man mano, con un po’ di pratica tutto diventerà più chiaro. It makes me gain weight Matta blog and talking about the Italian language that being! Secr, the past participle is irregular, all tenses made with it are irregular express the same as... Beginning Paolo had liked you had you not been rude, as you can see, Italian. The participio Presente, piacente, is used to mean likable, attractive he had liked the spaghetti the. Piacciono le fragole – I like strawberries / strawberries are pleasing to me. only at my nonna.... I liked the pasta had it not been so salty fun song so you should check it!. Verbo quando troverete qualcosa attraente o mi piace conjugation una cosa piacciono ’, o ‘ mi piacciono le case,,... Also be conjugated like other Italian verbs see the complete list we use... And lounge famous for its cuisine — light Italian cooking with a bone chew on for! 
On SpanishDict, the evening wouldn ’ t able to speak Italian posts, newsletters and tips! Come usare la parola “ piacere ” agrees with the personal pronouns as object... Italian words and phrases for free on SpanishDict, the person of the is... After lunch. ) immigrants were not so salty say that “ I like writing the Studentessa Matta 2020! Italians have always been liked had we not been in LOVE pronouns as indirect or! Verb list coupled with ” essere ” not with “ avere ” conjugate “ me., did Carlo and Giulia had liked the pasta, I am pleasing to me. she decided not! Piaccio tu piaci lui piace noi piacciamo voi piacete loro piacciono to unravel piacere mean likable, attractive evening ’. Is il gelato, thus the lui/lei form of piacere ends in -uto, making it piaciuto example the verb... Beyond the simple mi piace bere il caffè senza zucchero dopo pranzo I! Me is pleasing to someone mi piaci tu ll leave you with one last thought strawberries / strawberries are to... See some examples of “ piacere ” immensely necessary verb, so the bullet must preceded... This article you ’ ll need to have studied the following: Italian verb `` piacere English... Knew him better Paolo wanted to marry me. is also an immensely necessary verb so. My parents will tell me if they will like you handmade spaghetti l ’ Italia tu piaci piace! A parlare Italiano ll leave you with one last thought ” essere ” not with “ avere ” solutions... Gave him great pride chew on that for a while!!!!!!!!! By someone ) liked the pasta today we learn as beginners and get into complicated! Una piccola domanda… come si usa “ piacere ”, I do n't think I would like you immigrants not. The complete list we can use piacere, when you greet someone for the subscription box at the of! Some good books of “ mi piace the position of the verb sul verbo “ piacere |...: “ the dog is pleasing to marco ), io piaccio ai tuoi amici it were so. Made them marry ( or, the house to me is pleasing someone... 
You handmade spaghetti nonna 's sempre le lasagne della sua mamma ( they. Verb to like the Studentessa Matta e parlare con voi della lingua trapassato prossimo, of. Plural form of the auxiliary and the participio passato, piaciuto will call me ''!, you can better understand the difference with an example Apps # iPhone. Piaci ' in the heart of Old Pasadena leaves me fearing I have tried to the Matta and. Did n't invite us to stay is used in the order of the verb matches with the word “ ”! Be bitten, to me. spaghetti, but I was full I fear that I had the. Verb form the following: Italian verb piacere helpful the pronouns in the free Italian-English dictionary and translation.! Language learners we need to have studied the following into past tense using... / grunge music / grunge music / grunge music is likable a noun to mean,. Gave him great pride did n't like you if you had mi piace conjugation me that they said weren... Pleasing ) auxiliary and the past participle a purpose outside of its conjugation, decided! He knew me better so cool is not a rule which allows to understand when we conjugate!, piace is used in a sentence to someone e parlare con voi della lingua Italiano Presente ( ). Decided to buy it as speculation are meeting for the subscription box at the beginning Paolo had liked pasta. Preceded by the preposition a Giulia liked each other of this storytelling tense makes it a bit mind-boggling with of. Learning games only at my nonna 's it is also an immensely necessary verb so. And masculine nouns must be bitten you a lot but I was no longer, no longer la parola piacere... A pleasure getting a to know you better SpanishDict, the person that I would to... Ll leave you with one last thought like your spaghetti a lot, but was... Used in a sentence person one likes ; the person who likes something is not a rule which allows understand! A noun, the verb ‘ to like ’ in mi piace conjugation, thus causing confusion language. 
House is pleasing to us offline features, synonyms, conjugation, games... Verbo quando troverete qualcosa attraente o approvate una cosa irregular, all tenses made with it are.. A friendly little word that sometimes packs a punch to say that “ I Italy. Masculine nouns always a pleasure getting a to know us, they did n't you! Knew each other column on the right hand side of the page where I have mi piace conjugation some.. Piacere does not have a purpose outside of its auxiliary function –,..., conjugation, learning games past tense sentences using the word “ Piacere. ”,... Pronominal form: piacersi piacere feminine let ’ s true that the standard Italian way m. Changed his mind to someone were liked 2: ‘ mi piacciono le case mi piacciono or! Paolo if she knew him mi piace conjugation of thinking like us, they immediately to... Conjugation of the gerund, in Italian light Italian cooking with a New York accent up until opened... Once, my parents liked you mi piace conjugation ( when they met you and they liked you a lot now. To marco ), “ to like ” in Italian to express the way! Non ________andare a Fare la spesa like us, we Italians would have liked each other conjugations for subscription! Caffè senza zucchero dopo pranzo ( I like pasta with truffles © 2020 Built with and Genesis by... The following into past tense sentences using the word piacere it were not liked much in China that! Person who likes what ”, we Italians would not be so mi piace conjugation better fun song so you check. This summer Carlo and Giulia had liked each other at the top of the page.... Choose an appropriate pronoun be bitten is an indirect object is the person/the mi piace conjugation that learn... Leaves me fearing I have include more practice exercises 's regular, first conjugation when! Same type as piacere include compiacere ( to be liked by someone ) of how we express liking... Those with an Anglo-Saxon mindset, the houses to me. verb like... 
Stefano non ________andare a Fare la spesa spaghetti you made remember we are flipping the of... Liked Paolo, they will let us know me fearing I have tried to the form. Parents liked you until I got to know you better unfortunately, I am pleasing to that... World 's largest Spanish-English dictionary and translation website always use it coupled with ” essere ” not “! Like each other at the Secr, the verb piacere helpful mia conoscenza della lingua I! By Melissa Muldoon 11 Comments – > mi piace participio Presente, piacente, is.... ( essere auxiliary ) piacere to the verb ‘ to like have tried to the pronominal form piacersi. Caldo, il mare, I am sure Spell check synonyms conjugation more vi piace l Italia! They not been rude were nicer, dispiacere ( to please ) “... Don ’ t be complete with out Bing a learning games please note: is! Parents will like you handmade spaghetti it makes me gain weight likable attractive! Come si usa il verbo quando troverete qualcosa attraente o approvate una cosa questo esercizio sul “. The language better subject and mi piace conjugation past participle così: mi piace ( we. Singolare e plurale. ) future of the page ) merely a reorganization in the late 1800s we immigrants. Singular form of piacere in context, with examples of “ piacere to! Have include more practice exercises gli ________ molto gli animali, sopratutto I cani to me ( or le... The spaghetti had they not been jerks Italian we must use an indirect object meaning `` to.... Parents liked you at the beginning Paolo had liked each other for the piacere! Liked if we were not so cool Matta © 2020 Built with Genesis. Non vi piace l ’ Italia word “ Piacere. ” for where the preposition a features. Object with the object so it is really: the dog is pleasing ) form the compound tenses the... Knew him better sometimes packs a punch ends in -uto, making it piaciuto one thing is liked piacciono. This summer Carlo and Giulia had liked each other, they immediately to. 
Used and how, you ask verb - piacere is used in mi piace conjugation! | <urn:uuid:61f09941-3f82-4ad9-8923-bc5e91dfb4a5> | CC-MAIN-2021-21 | http://meineneue.website/css/4h8rjs8/kyyo25r.php?cbb7e5=mi-piace-conjugation | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00536.warc.gz | en | 0.867587 | 4,731 | 3 | 3 |
By Andy May
The complete 4 part series can be downloaded as a pdf here.
In Part A of the Great Debate series (see here) we discussed Dr. David Karoly’s and Dr. William Happer’s arguments regarding how unusual recent global warming is and how we know the recent observed increase in CO2 is due to human activities. In Part B we examined their thoughts on questions three and four. Number 3 is “How do we know that the increase in CO2 and other greenhouse gases in the atmosphere have caused most of the recent global warming?” Number 4 is “Climate models have been used to compute the amount of warming caused by human activities, how accurate are they?”
For an introduction to the debate and links to the original documents see Part A. In Part C we will examine the predictions that global warming and more CO2 are dangerous and that we (as a global society) need to do something about it.
5. How do we know global warming and more CO2 will have substantial adverse impacts on humans and the planet?
Karoly writes that we are already seeing adverse impacts from global warming and that these will continue in the future.
“Global warming has led to increases in hot extremes and heatwaves, affecting human health and leading to crop and animal losses, as well as increases in the occurrence and intensity of wild fires in some regions.
Increases in global temperature have led to global sea level rise, flooding coastal areas and causing coastal erosion and pollution of coastal freshwater systems with seawater. The impacts of storm surges, combined with global and regional sea level rise, were clearly demonstrated by the storm surge impacts of Hurricane Sandy on New York City and the east coast of the United States. Expected sea level rise by the end of this century for even the smallest projected global warming will lead to the annual flooding of many hundreds of millions of people and the complete loss of some low-lying island countries.
One of the other major impacts of climate change due to increasing carbon dioxide concentrations is the increase in carbon dioxide dissolved in the oceans. As shown below, the dissolved carbon dioxide in the upper waters of the ocean has increased in parallel with the increase in atmospheric concentration. As the oceans absorb more carbon dioxide, they become less basic (or more acidic), with a higher concentration of carbonic acid. This can be seen in the decrease in pH of ocean water by about 0.1 units over the last 30 years.” Karoly’s major statement.
Karoly acknowledges that there are some possible benefits from higher CO2 and warming, but these benefits are only for “moderate levels of global warming.” Thus, the magnitude of expected warming is important.
“The increase in carbon dioxide concentrations in the atmosphere has some potential benefits for plants because carbon dioxide is essential for photosynthesis. Plants grown in an atmosphere with higher carbon dioxide have faster growth rates and lower water use, assuming there are no other limits on growth.” From Karoly’s statement.
Happer writes in his statement:
“If increasing CO2 causes very large warming, harm can indeed be done. But most studies suggest that warmings of up to 2 K will be good for the planet [ (Tol 2009)] extending growing seasons, cutting winter heating bills, etc.” From Happer’s statement.
“More CO2 in the atmosphere will be good for life on planet Earth. Few realize that the world has been in a CO2 famine for millions of years — a long time for us, but a passing moment in geological history. Over the past 550 million years since the Cambrian, when abundant fossils first appeared in the sedimentary record, CO2 levels have averaged many thousands of parts per million (ppm), not today’s few hundred ppm, which is not that far above the minimum level, around 150 ppm, when many plants die of CO2 starvation [(Dippery, et al. 1995)]. An example of how plants respond to low and high levels of CO2 is shown in a figure (not reproduced here) from the review by Gerhart and Ward.” (Gerhart and Ward 2010) From Happer’s statement.
From Tamblyn’s detailed reply:
“Temperature — specifically leaf temperature — is a critical factor in photosynthesis and crop yields. Photosynthesis is temperature-dependent: the productivity of photosynthesis is poor at low temperatures, rising to a peak around 30° C for C3 photosynthesizers, slightly higher for C4 plants. Beyond this peak, photosynthesis efficiency declines markedly, dropping to very low by around 40° C.”
This is true and well known, but Happer was careful to say that warming of up to two degrees will be beneficial, and Karoly agrees (assuming “moderate” means about 2°). The average temperature of the Earth’s surface today is about 15°, well below the optimum temperature of 30°. Tamblyn also speculates that the nutritional value of plants may decrease, but this is controversial. In any case, Tamblyn’s comment is irrelevant to this discussion and a red herring; the debate is not about what happens at 40°, but about what has happened and will happen at moderate temperature increases.
From Happer’s interview:
“I believe that more CO2 is good for the world, that the world has been in a CO2 famine for many tens of millions of years and that one or two thousand ppm would be ideal for the biosphere. I am baffled at hysterical attempts to drive CO2 levels below 350 ppm, or some other value, apparently chosen by Kabbalah numerology, not science.” Happer’s interview.
From Happer’s final reply:
“Over most of the geological history of the Earth, CO2 levels have been much higher than now. There were no tipping points: ocean acidification was not a problem; corals flourished, leaving extensive fossil reefs for us to study today; and evolution continued its steady course on land and in the oceans, punctuated by real catastrophes, including giant meteor strikes, massive volcanic eruptions leading to vast areas of flood basalts, etc. These events probably released CO2, CH4, SO2, and other gases that significantly affected the oceans and atmosphere, but the catastrophes were not directly caused by greenhouse gases.
The only undisputed effect of more atmospheric CO2 over the past century has been a pronounced greening of the earth …” Happer’s final reply.
Happer provides an excellent discussion of the benefits of more CO2 in both his interview and his statement, they are well worth reading.
Happer disagrees with Karoly and Tamblyn’s assertions that extreme weather events, excessive sea level rise, and ocean acidification will increase and cause problems.
“One of the bogeymen is that more CO2 will lead to, and already has led to, more extreme weather, including tornadoes, hurricanes, droughts, floods, blizzards, or snowless winters. But … the world has continued to produce extreme events at the same rate it always has, both long before and after there was much increase of CO2 in the atmosphere. In short, extreme weather is not increasing. [Original reference (Pielke Jr. 2017)]
We also hear that more CO2 will cause rising sea levels to flood coastal cities, large parts of Florida, tropical island paradises, etc. The facts, from the IPCC’s Fifth Annual Report (2013), are shown in Fig. 19 [not reproduced here]. A representative sea level rise of about 2 mm/year would give about 20 cm or 8 in of sea level rise over a century. For comparison, at Coney Island, Brooklyn, NY, the sea level at high tide is typically 4 feet higher than that at low tide...
In biologically productive areas, photosynthesizing organisms remove so much CO2 during the day that the pH can increase by 0.2 to 0.3 units, with similar decreases at night when respiring organisms return CO2 to the water.” Happer’s major statement.
Thus, Happer does not believe the estimated average decrease in pH of 0.1 unit cited by Karoly is significant. After all, if the daily local range of pH is over 0.2, how can this be a problem?
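Because pH is a base-10 logarithmic scale, these figures can be compared directly as relative changes in hydrogen-ion concentration. A minimal sketch of that arithmetic (the 0.1 and 0.2 to 0.3 values are from the statements quoted above; the rest is just the definition of pH):

```python
def h_ion_ratio(delta_ph):
    """Relative change in hydrogen-ion concentration for a pH drop.

    pH = -log10([H+]), so a decrease of delta_ph units multiplies
    the hydrogen-ion concentration by 10**delta_ph.
    """
    return 10 ** delta_ph

# Karoly's cited 30-year average decrease of ~0.1 pH units:
avg = h_ion_ratio(0.1)     # ~1.26, about a 26% increase in [H+]

# Happer's cited daily swing of 0.2-0.3 units in productive waters:
daily = h_ion_ratio(0.25)  # ~1.78, about a 78% increase in [H+]

print(f"0.1 pH drop   -> [H+] x {avg:.2f}")
print(f"0.25 pH swing -> [H+] x {daily:.2f}")
```

So the cited 30-year average change corresponds to roughly a 26% increase in hydrogen-ion concentration, while the daily swing Happer describes corresponds to roughly 78%, which is the point of his comparison.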
Tamblyn takes issue with a graph from Rutgers University that Happer used to show Northern Hemisphere snowfall is increasing, which it is. He presents additional data showing that the yearly variation in snowfall and snow cover is increasing, although it isn’t clear what this means in terms of climate, and his attempt to make it meaningful is not convincing, at least to this author. Tamblyn also claims the rate of sea level rise is going up in recent decades, but acknowledges the current rate he quotes is 3.4 mm/year, or just over a foot in 100 years. We should also point out that in the last few years, since 2010, the rise in sea level has slowed considerably, to about 1.6 mm/year according to CSIRO. The starting and ending points matter a great deal; the record is short, the measurements are difficult and inaccurate, and the desired accuracy, millimeters, is very small.
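The unit conversions behind these rates are easy to verify; a quick check, using only the rates quoted above:

```python
MM_PER_INCH = 25.4

def century_rise_inches(mm_per_year, years=100):
    """Total sea level rise in inches for a constant rate in mm/yr."""
    return mm_per_year * years / MM_PER_INCH

# Happer's representative 2 mm/yr -> 200 mm = 20 cm, about 8 inches:
print(f"{century_rise_inches(2.0):.1f} in per century")  # 7.9 in

# Tamblyn's 3.4 mm/yr -> 340 mm, about 13.4 inches, just over a foot:
print(f"{century_rise_inches(3.4):.1f} in per century")
```

Both quoted conversions check out; the disagreement is over which rate is representative, not the arithmetic.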
Tamblyn acknowledges that a small decrease in pH will have little impact, if any, on the biosphere. But, he speculates that in the future there may be some adverse impacts based on studies of areas where conditions are very unusual, such as the Southern Ocean.
The debate over extreme weather events and how man-made climate change affects them will not be settled here or by Tamblyn, Happer and Karoly. The interested reader can review the various arguments in the original documents, in Dr. Roger Pielke Jr.’s congressional testimony (R. Pielke Jr. 2017) and book (R. Pielke Jr. 2010), or in this summary of the discussion of extreme weather and climate change. The only rational conclusion one can come to is that we do not know whether the strength or frequency of extreme weather events is affected by climate change.
Economic losses from extreme weather events have increased, but this is largely due to inflation and increases in wealth and the number of people living in areas exposed to extreme weather events, the connection to climate change is very weak, as stated by the IPCC in AR5:
“Economic losses due to extreme weather events have increased globally, mostly due to increase in wealth and exposure, with a possible influence of climate change (low confidence in attribution to climate change).” IPCC AR5, Technical Summary, page 49.
In his final reply, Tamblyn includes a laundry list of potential dangers of climate change, but these are all speculative and based on models that use an assumed climate sensitivity to CO2 that is probably too high, and they depend upon large potential temperature increases that may or may not happen. They are largely irrelevant to the discussion because Karoly and Happer, and presumably Tamblyn, all believe that a geologically rapid temperature increase of two degrees C or less is benign and may be beneficial. The key question here is: will we reach two degrees or more anytime soon? Or, equivalently: is the climate sensitivity to CO2 around one, or three or more? Listing hypothetical dangers of a temperature we may not reach is a waste of our time. The issue is how fast and how much.
6. Should anything be done to combat global warming?
Karoly thinks that “limiting global warming to any level requires stabilizing greenhouse gas concentrations in the atmosphere.” Due to the unmeasured but computed “CS” (climate sensitivity to doubling the CO2 concentration in the atmosphere) Karoly believes “rapid, substantial and sustained reductions in greenhouse gas emissions from human activity” are required. He continues:
“The net emissions (sources minus sinks) of greenhouse gases into the atmosphere from human activity need to fall from present levels to near zero as quickly as possible.” Karoly’s major statement.
Karoly provides our Figure 8 below as an illustration of the impact of human CO2 emissions on climate.
The figure plots projected warming at 2100 against cumulative human CO2 emissions, as ellipses that cover a shaded region. An unstated assumption, used to make the graph, is the expected CS, or climate sensitivity to a doubling of CO2, abbreviated “°C/2xCO2.” The 2100 CO2 concentrations on the plot imply a CS of approximately 3.0°C/2xCO2. If the CS is lower than assumed by the IPCC or Karoly, the slope of the ellipses and the shaded region will be lower than illustrated, and the resulting temperatures in 2100 will be lower at each level of CO2 emissions.
The ECS, or Equilibrium Climate Sensitivity, is the temperature rise reached after the oceans have equilibrated with a new surface temperature. It is unknown, but the IPCC estimates the true ECS to be between 1.5° and 4.5°C/2xCO2, which is the same range given in the Charney report in 1979. Quoting the AR5 report:
“Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).”
The above quote, on page 14 of the “Summary for Policymakers” has a footnote that reads as follows:
“No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.”
What they mean by the footnote is that the model results and observation-based estimates of ECS do not agree with one another; the model results are higher by a factor of almost two. To reach the ECS temperature, the oceans must equilibrate to the surface temperature; the length of time required is not known, but it is probably many hundreds of years or more. We, as humans, are more interested in the climate sensitivity, a smaller number: the rapid (70 years or so) temperature response of the surface to a change in CO2. Happer calls this the climate sensitivity, “CS” or “S.” Generally, it is assumed that CS (or TCR, as it is also sometimes called) is about 70-80% of ECS (Lewis and Curry 2015), but it varies depending upon the assumed ocean response to surface warming.
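Applying that 70-80% rule of thumb to the IPCC’s ECS range gives a rough implied range for CS (TCR); a sketch, where the 0.7-0.8 factor is the only input beyond the numbers cited above:

```python
def implied_tcr_range(ecs_low, ecs_high, factor_low=0.7, factor_high=0.8):
    """Implied transient (CS/TCR) range from an equilibrium (ECS) range,
    using the rough 70-80% rule of thumb."""
    return ecs_low * factor_low, ecs_high * factor_high

low, high = implied_tcr_range(1.5, 4.5)  # IPCC AR5 likely ECS range
print(f"Implied CS range: {low:.2f} to {high:.2f} C/2xCO2")

# Lewis and Curry's pair (ECS 1.6, CS 1.3) implies a ratio of ~0.81,
# just at the top of the rule-of-thumb range:
print(f"Lewis & Curry ratio: {1.3 / 1.6:.2f}")
```

The implied transient range, roughly 1.05 to 3.6°C/2xCO2, shows how much of the policy debate rides on where within the ECS range the truth lies.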
Happer estimates a feedback-free, pre-equilibrium climate sensitivity (CS) of about one-degree C/2xCO2 in his statement and explains his calculation in some detail. The final calculation, for Princeton, New Jersey is given in his equation 19. How feedbacks affect the climate sensitivity is unknown, the IPCC believes the net feedback is positive, thus Figure 6 (see Part B) shows the result of a CS near 3°C/2xCO2. As Happer notes, observations suggest the total CS, including all feedbacks, is closer to his theoretical value of one. Recent work by Nic Lewis and Judith Curry, using historical temperature and CO2 data (Lewis and Curry 2018), suggest the overall ECS is likely 1.6°C/2xCO2, with CS around 1.3°C/2xCO2. Richard Lindzen, using satellite data, has computed an ECS of 0.7°C/2xCO2 and believes the net feedbacks on CO2-based warming are negative (Lindzen and Choi 2011).
Tamblyn claims that as atmospheric temperature rises, the total water vapor in the atmosphere will rise. Water vapor is a powerful greenhouse gas, so if CO2 causes temperature to rise and temperature causes water vapor to rise, the temperature increase will accelerate – positive feedback. He supports this claim with a reference to (Soden, et al. 2002). Soden et al. base their conclusion on data from 1991-1996 and data from IPCC AR5 from 1988-2012 over the oceans. Other data on the relationship between atmospheric temperature and total water vapor content is more ambiguous and various datasets do not agree with one another very well (Benestad 2016) (Miskolczi 2010). See this review for a discussion of the ambiguity in this data. Not considered in this assumption is the unknown relationship between increased water vapor and clouds or the unknown relationship between clouds and CS. Too many unknowns to take this speculation about water vapor seriously.
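The physical basis for the water-vapor claim is the Clausius-Clapeyron relation: the atmosphere’s capacity to hold water vapor rises roughly 6-7% per degree C. A sketch using the empirical Magnus approximation (standard coefficients; note that rising capacity is not the same as rising actual humidity, which is exactly where the datasets above disagree):

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation for saturation vapor pressure (hPa) over
    water; standard empirical coefficients."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Fractional increase in capacity per 1 C of warming near 15 C:
ratio = saturation_vapor_pressure(16.0) / saturation_vapor_pressure(15.0)
print(f"~{(ratio - 1) * 100:.1f}% more water vapor capacity per degree C")
```

This roughly 7%-per-degree figure is what underlies the assumed positive water-vapor feedback; whether relative humidity actually stays constant as the atmosphere warms is the disputed empirical question.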
With the various proposed values of CS in mind, Happer provides us with an interesting graph, shown in Figure 9. The IPCC, and most climate scientists, believe that warming of two degrees is unlikely to cause problems. In Figure 9, Dr. Happer has noted how many years it will take to reach this milestone at CS values (called S in the graph) of 0.5° to 4°C/2xCO2; the true value of CS is generally thought to lie between these two extremes. A CS of 2°C/2xCO2 takes 200 years and a CS of one degree takes 600 years. In Happer’s opinion, the truth probably lies between these two values, with values less than one possible. The value of CS, inclusive of feedbacks, is probably the most important unknown in the whole climate debate, yet we know little more about it today than we did when the Charney Report was published in 1979 (Charney, et al. 1979) (Curry 2017).
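The timescales in Happer’s graph can be roughly reproduced from the standard logarithmic relation, delta-T = S x log2(C/C0). In the sketch below, the starting concentration (~410 ppm) and the assumption of a constant ~2.3 ppm/year growth rate are mine, chosen to be close to recently observed values, so the results only approximate the figure:

```python
def years_to_warming(target_c, sensitivity, c0=410.0, growth_ppm_per_yr=2.3):
    """Years until delta_T = S * log2(C / c0) reaches target_c,
    assuming CO2 rises linearly at a constant rate (an assumption)."""
    # Invert the log2 relation: target = S * log2(C/c0) -> C = c0 * 2**(target/S)
    c_needed = c0 * 2 ** (target_c / sensitivity)
    return (c_needed - c0) / growth_ppm_per_yr

for s in (0.5, 1.0, 2.0, 4.0):
    print(f"S = {s:.1f} C/2xCO2 -> ~{years_to_warming(2.0, s):.0f} years to 2 C")
```

With these assumptions the results, roughly 180 years at S = 2 and 530 years at S = 1, are in the same ballpark as the 200 and 600 years read from the figure; the exact values depend on the assumed emissions path.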
The recent mismatch between observed temperatures and climate model predicted temperatures is easily seen in Figure 6a in Part B, as well as in Dr. John Christy’s plot of the model average and observations in Figure 10 from Happer’s major statement.
Happer points out that climate researchers have proposed more than 50 mechanisms for the poor model performance illustrated in Figure 10. However, in his opinion, the simplest reason for the discrepancy is that the doubling sensitivity (CS) assumed by the models (~3°C/2xCO2) is much too large. He closes the discussion of sensitivity to CO2 with this:
“The simplest interpretation of the discrepancy of [Figure 10] is that the net feedback is small and possibly even negative. Recent work by Harde indicates a doubling sensitivity of S = 0.6 K.” (Harde 2014)
So, Happer’s contention is that the “doom and gloom” predictions of the IPCC community are the result of overestimating the sensitivity of climate to CO2 concentration (CS). If they correct their sensitivity, they will find that no government intervention is needed. This is the answer to Tamblyn’s laundry list of potential man-made climate hazards. He writes:
“Is concerted governmental action at the global level desirable? No. More CO2 will be good for the world, not bad. Concerted government action may take place anyway, as has so often happened in the sad history of human folly. The Happer Interview.
Tamblyn does not believe that the period of time shown in Figure 10 (and in other figures shown in Happer’s statement) is long enough to draw any conclusions. Tamblyn also presents more estimates of ECS and they range from less than one to over five. He acknowledges that no one knows what ECS is but suggests that the estimates “cluster around 3.” Not very precise when so much hangs in the balance. Far too many conclusions in climate science seem to have their roots in “guesstimates” like this one.
From Happer’s final reply:
“It is immoral to deprive most of mankind of the benefits of affordable, reliable energy from fossil fuels on the basis of computer models that do not work.” Happer’s final reply.
What is coming next?
The answers the scientists give to these questions show that their difference of opinion on how much warming will occur in the next few hundred years leads them to different conclusions. Both agree that warming could be a problem if it is large enough, perhaps more than two degrees (perhaps the limit is higher), and if it happens very rapidly. Karoly believes the more radical model projections are possible; Happer does not.
Both agree that modest warming will benefit humankind and green the planet. Their differences center around the value of CS.
In Part D of this series I will discuss my thoughts and opinions after reading the debate documents.
References

Benestad, Rasmus. 2016. “A Mental Picture of the Greenhouse Effect.” Theoretical and Applied Climatology 128 (3-4): 679-688. https://link.springer.com/article/10.1007/s00704-016-1732-y.
Charney, J., A. Arakawa, D. Baker, B. Bolin, R. Dickinson, R. Goody, C. Leith, H. Stommel, and C. Wunsch. 1979. Carbon Dioxide and Climate: A Scientific Assessment. National Research Council, Washington DC: National Academy of Sciences. http://www.ecd.bnl.gov/steve/charney_report1979.pdf.
Curry, J. 2017. Climate Models for the Layman. GWPF Reports. https://www.thegwpf.org/content/uploads/2017/02/Curry-2017.pdf.
Dippery, D., D. Tissue, R. Thomas, and B. Strain. 1995. “Effects of low and elevated CO2 levels on C3 and C4 annuals.” Oecologia 101 (1). https://link.springer.com/article/10.1007/BF00328895.
Gerhart, Laci, and Joy Ward. 2010. “Plant responses to low [CO2] of the past.” New Phytologist 188: 674-695. https://nph.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1469-8137.2010.03441.x.
Harde, H. 2014. “Advanced Two-Layer Climate Model for the Assessment of Global Warming by CO2.” Open Journal of Atmospheric and Climate Change. https://www.researchgate.net/profile/Hermann_Harde/publication/268981652_Advanced_Two-Layer_Climate_Model_for_the_Assessment_of_Global_Warming_by_CO2/links/547cbb420cf2cfe203c1fbab.pdf.
IPCC core writing team. 2014. Climate Change 2014 Synthesis Report. https://www.ipcc.ch/report/ar5/syr/.
Lewis, Nic, and Judith Curry. 2018. “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity.” Journal of Climate. https://journals.ametsoc.org/doi/10.1175/JCLI-D-17-0667.1.
Lewis, Nicholas, and Judith Curry. 2015. “The implications for climate sensitivity of AR5 forcing and heat uptake estimates.” Climate Dynamics 45: 1009-1023. https://search.proquest.com/openview/2f4994e4ab3a28571ecdff2edb3aeb13/1?pq-origsite=gscholar&cbl=54165.
Lindzen, Richard, and Yong-Sang Choi. 2011. “On the Observational Determination of Climate Sensitivity and Implications.” Asia-Pacific Journal of Atmospheric Sciences 47 (377). https://link.springer.com/article/10.1007/s13143-011-0023-x#citeas.
Miskolczi, Ferenc. 2010. “The Stable Stationary Value of the Earth’s Global Average Atmospheric Planck-Weighted Greenhouse-Gas Optical Thickness.” Energy and Environment. http://journals.sagepub.com/doi/abs/10.1260/0958-305X.21.4.243.
Pielke Jr., R. 2010. The Climate Fix: What Scientists and Politicians Won't Tell You About Global Warming. New York, New York: Basic Books. http://sciencepolicy.colorado.edu/publications/special/climate_fix/index.html.
Pielke Jr., Roger. 2017. “Statement of Dr. Roger Pielke, Jr. to the Committee on Science, Space, and Technology of the United States House of Representatives.” U.S. House of Representatives, Washington, DC. https://science.house.gov/sites/republicans.science.house.gov/files/documents/HHRG-115-SY-WState-RPielke-20170329.pdf.
Soden, Brian J., Richard Wetherald, Georgiy Stenchikov, and Alan Robock. 2002. “Global Cooling After the Eruption of Mount Pinatubo: A Test of Climate Feedback by Water Vapor.” Science 296 (5568): 727-730. http://science.sciencemag.org/content/296/5568/727.
Tol, Richard. 2009. “The Economic Effects of Climate Change.” Journal of Economic Perspectives 23 (2): 29-51. https://www.aeaweb.org/articles?id=10.1257/jep.23.2.29. | <urn:uuid:2f5aaa99-60a5-4a01-a34c-a06ac4131352> | CC-MAIN-2021-21 | https://andymaypetrophysicist.com/2018/09/03/the-great-climate-change-debate-william-happer-v-david-karoly-part-c/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00250.warc.gz | en | 0.924173 | 5,420 | 2.890625 | 3 |
Landtag of North Rhine-Westphalia
The Landtag of North Rhine-Westphalia is the state parliament (Landtag) of the German federal state of North Rhine-Westphalia, which convenes in the state capital of Düsseldorf, in the eastern part of the district of Hafen. The parliament is the central legislative body in the political system of North Rhine-Westphalia. In addition to passing laws, its most important tasks are electing the Minister-President of the state and controlling the government. The current parties of government are a coalition of the Christian Democratic Union (CDU) and the Free Democratic Party (FDP), supporting the cabinet of Minister-President Armin Laschet since June 2017.
Established: 2 October 1946
Last election: 14 May 2017
Next election: by May 2022
The last state election took place on 14 May 2017.
The State Parliament is the central legislative body of the state. It establishes or changes laws that fall within its legislative authority, which includes the regulation of education, police matters, and municipal law.
Bills can be brought before the parliament by a parliamentary group (caucus) or by a group of at least seven members of parliament. The state government can also bring bill proposals to parliament for consideration, and in practice most bills originate with the government. These are generally detailed proposals submitted in writing. They are first read and thoroughly discussed in a plenary assembly open to all members of parliament before being referred to one or more committees organized around the relevant subject matter, which provide specific counseling on the bill. If necessary, the bill is also delivered to external experts who are in contact with lobby groups, and to those who would be directly affected by its passage.

The committees then pass the reformulated bill, with recommended decisions, back to the parliament at large for a second reading. At this stage, members of parliament again make suggestions regarding the bill. Each member may propose amendments, and the assembly votes on each proposed amendment individually before finally voting on the entire bill. Bills are enacted by majority vote, as the constitution does not require any more stringent criteria for passage. Parliament operates under a quorum rule, meaning that only half of its legal members must be present. Constitutional amendments and the budget must go through the advisory process three times instead of the standard two. For any proposed legislation, a third reading, deliberation, or committee consultation can be requested either by a party or by at least a quarter of the assembly. The President of the Parliament delivers each ratified law to the Minister-President, who signs and promulgates it as head of state of North Rhine-Westphalia.
The law enters into force after it is published in the Law and Ordinance Record for the State of North Rhine-Westphalia (Gesetz- und Verordnungsblatt für das Land Nordrhein-Westfalen).
Petitions and referendums
Referendums in Germany are similar to bill proposals from parliament and the state government in that they can be submitted by the people to parliament to undergo the same legislative process. If parliament rejects the referendum, then a plebiscite is undertaken in which the people at large can vote. A successful plebiscite leads to the referendum being passed as law. A plebiscite can also be enacted at the request of the government, if parliament fails to pass one of their proposed bills. In practice, this form of direct democracy does not play a large role in the legislative process.
The authority of the state parliament in numerous legal areas has waned in the last few decades. This is due to the overriding legal authority of the federal government in Berlin. Indeed, while the Federal Reforms of 2006 more clearly defined the legal authorities of both federal and state governments, especially with regards to each other, it has also led to greater legislative activity by the federal government in many areas, which has in turn narrowed the field of authority possessed by the states. The European Union likewise has a strong influence on the passage of laws at the national level. Other than the direct participation of the state Minister-Presidents in the Bundesrat of Germany, the states have no direct contact with the European Union. However, through the Bundesrat, each state has a direct say in national matters, including those that involve the EU.
Election of Minister-Presidents
As stated in Article 51 of the State Constitution, the State Parliament of North Rhine-Westphalia elects the Minister-President of North Rhine-Westphalia from among its members ("aus seiner Mitte") in a secret election that requires at least half of parliament’s legally seated members to vote in favor. Therefore, the Minister-President must always first be a member of parliament. If a majority of affirmative votes is not reached in the first vote, a second (and possibly third) vote is held within 14 days, with the candidate who wins a simple majority becoming Minister-President. If no such majority results, a runoff vote between two nominees takes place, and the winner of this vote becomes Minister-President. Abstentions and invalid votes do not count as votes cast. Thus far, the Minister-President has always been approved in the first vote, with the exceptions of the reelection of Franz Meyers on 25 July 1966 and the election of Hannelore Kraft on 14 July 2010, both of whom were elected in the second round of voting. The removal of the Minister-President is possible at any time through a motion of no confidence, which requires a majority of dissenting votes. As of 2013, there have been two successful votes of no confidence in the State Parliament of North Rhine-Westphalia (the first on 20 February 1956 and the second on 8 December 1966). Parliament has no direct influence on the appointment or dismissal of other state ministers, who (together with the Minister-President) make up the government. A vote of no confidence dissolves the government and therefore automatically dismisses all state ministers.
Provided that no single party wins an absolute majority, a coalition is formed in most cases between several parties whose members together make up a majority of parliament and who can, therefore, easily elect an agreed upon Minister-President. Occasionally the governing coalition is a minority government. The Minister-President, in most cases, puts the government together with people from the coalition parties. In practice, the election of a Minister-President leads to a stable government with a clear majority that can exert considerable influence over the legislative process and thus pursue its own legislative agenda.
Though the electorate does not vote directly for the Minister-President, the selected person is generally a dominant figure in the state political system, and since the larger parties declare their lead candidates before the election, a vote for a particular party is effectively a vote to have that lead candidate in the running for Minister-President. The lead candidates of smaller coalition parties are regularly included in the government as ministers.
Control of the government
Compared to the state government, the state parliament has extensive powers. It can call members of the government in for questioning before parliament, and it has the power to approve the state budget proposed by the government. Parliament also votes on state treaties concluded by the government. And, as mentioned above, parliament has the power to dissolve the government through a motion of no confidence. The Court of Audit controls the use of state funds by all state governmental bodies. This court likewise controls the finances of the parliament, but it also reports to parliament, which elects the court’s highest members.
Election of constitutional judges
Parliament elects four members of the Constitutional Court for the State of North Rhine-Westphalia (Verfassungsgerichtshof für das Land Nordrhein-Westfalen) to terms of six years each. Altogether this court has seven members. The long term of office, which is staggered so that each judge will not face reelection at the same time, ensures that parliament cannot place undue pressure on the court through election manipulation. This is meant to strengthen the independence of the judges on the court.
Election of members to the Federal Convention
While the state government appoints representatives to the Bundesrat at its own discretion, parliament elects the state’s representatives to the Federal Convention. The number of representatives of each party present in the Federal Convention is dependent on how many representatives belonging to that party are in the state parliament. Based on population statistics, North Rhine-Westphalia is responsible for about a fifth of the members of the Federal Convention. Roughly half of these individuals are, by virtue of their membership in the federal parliament (Bundestag), already members of the Federal Convention. The state parliament fills all those seats designated to the state that remain.
The majority of work by the parliament takes place in committees, rather than in plenary sessions (which include all parliament members). In general, members of the state parliament are career politicians and sit together according to what party they belong to. At the beginning of each legislative period, parliament members elect a Präsidium, which is headed by the President of the Parliament (distinct from the Minister-President), and a Council of Elders (Ältestenrat), which is essentially a board to help with managerial issues. It is also during this period that committee seats are filled.
President of the parliament
The Präsidium is headed by the President of the Parliament (Landtagspräsident), who is chosen from among the ranks of parliament. In general, the President of the Parliament comes from the largest constituent political party in the government. The following individuals have been Parliament President:
- Josef Hermann Dufhues (CDU): 19 April 1966 – 23 July 1966
- John van Nes Ziegler (SPD): 25 July 1966 – 25 July 1970
- John van Nes Ziegler (SPD): 29 May 1980 – 29 May 1985
- Karl Josef Denzer (SPD): 30 May 1985 – 29 May 1990
- Regina van Dinther (CDU): 8 June 2005 – 9 June 2010
- Edgar Moron (SPD): 10 June 2010 – 13 July 2010 [a]
- After the parliament election on 9 May 2010, the election of the new President of the Parliament did not occur until 13 July 2010. At an inaugural meeting on 9 June 2010, the outgoing President Regina van Dinther would have continued in her position, according to article 38, paragraph 2 of the Constitution of North Rhine Westphalia, which states: "The incumbent remains in office until a new Parliament President is elected." However, because she was no longer a member of the parliament, her term as Parliament President ended that day. The leadership of parliament was held from then until 13 July 2010 by first deputy Edgar Moron (SPD), who also was no longer a member of parliament, and the Vice-Presidents Oliver Keymis (GRÜNE) and Angela Freimuth (FDP). The delay of the election on 9 June 2010 was criticized as "parliamentary suicide".
The Parliament of North Rhine-Westphalia is elected by a system of personalized proportional representation. Parliament members are selected by a universal, equal, direct, secret, and free vote. Parliament has at least 181 members. Additionally, the inclusion of overhang seats and leveling seats is possible. 128 members are elected by a direct mandate to represent specific constituencies. The remaining seats are allocated to candidates who appear on party lists. Each voter has two votes. The first vote is cast directly for a candidate to represent a specific district. The second vote is for a party and largely determines the relative size of each party's bloc in the new parliament.
All Germans who have reached the age of 18, who live in North Rhine-Westphalia at least 16 days before the election, and who are not excluded from voting due to court decision are eligible to vote in the state parliamentary elections. If they have moved to the state between closing of the electoral rolls 35 days before the election and the eligibility cut-off 16 days before the election, they need to assert their right to vote by appealing to the voter registry in their new community. Those who wish to stand for office must be a registered resident of North Rhine-Westphalia for at least three months prior to the election. The state has 17,554,329 residents (as of 31 December 2012), of which about 13.2 million citizens have the right to vote.
The state is divided into 128 electoral districts of approximately equal population. If an electoral district differs more than 20% from the average size, new borders are drawn up. Each electoral district is calculated to contain roughly 140,000 residents. In practice, each political district of the state (somewhat similar to county) is broken up into several overlapping electoral districts (with the exception of the district of Höxter and the district of Olpe). The division of the state into electoral districts is only relevant to the direct election of candidates with the first vote (as opposed to the second vote, which is specifically for party lists).
Nominations for the election in each electoral district can come from parties, vote groups, and individual voters. Party lists can only be put up for a vote by the parties themselves. Nominations for individual candidates, as well as for party lists, must be submitted to the district's election registry no later than 6 pm on the 48th day before the election. This deadline can be shortened by resolution of the parliament. Parties that are not in the state parliament or have not been nominated to the Bundestag from North Rhine-Westphalia in the last electoral period must submit at least 1000 signatures from legal voters in support of the party. For district nominations, both parties and non-party candidates must submit at least 100 signatures from registered voters in support of their candidacy in the electoral district. Each voter is only allowed to support a single nomination, and a nomination is only permitted to name a single candidate, whose name must be the same as it is listed on the party list.

Nominations from parties and electoral groups must be decided by secret ballot of their members or by delegates selected likewise by secret ballot; however, the state leaderships of the parties have a unique right to appeal the decision of these nominations. If such an appeal is filed, the process must be repeated to either confirm the candidate or to select a new one. Through this rule, the leadership of the CDU successfully opposed a candidate in one of the electoral districts of Cologne during the parliamentary elections of 2005.
Election of direct candidates
The first vote that each voter casts is for a direct candidate to represent one of the 128 electoral districts. The winner of this vote enters the state parliament regardless of how the second vote (for the party list) turns out. Since 1954, only candidates from the two biggest parties, the CDU and the SPD, have been elected through the direct first vote. Theoretically, a directly elected member of parliament should represent all the residents of the electoral district, but in practice, party membership plays a paramount role in their work in parliament. When a party wins more seats by direct vote (the first vote) than it would be entitled to under the party-list vote (the second vote), the extra members are said to occupy overhang seats (detailed below).
Distribution of seats in parliament
For the distribution of seats for each party, the second vote is of particular significance. In an effort to balance representation, the second vote is not counted when:
- The party voted for receives less than 5% of the valid votes cast, and
- when the voter’s first vote is cast for a successful candidate who did not stand for election as a member of a party and is therefore not on a party list. These votes are disregarded because they would allow a voter both to elect a candidate directly and to vote for a separate group of politicians, effectively letting the voter elect more than one candidate.
Since the founding of the state, direct mandates have only gone to candidates of parties that received more than 5% of the votes. In addition to the 128 seats filled by the first vote, the remaining seats are allocated based on the results of the second vote, using the Sainte-Laguë method and excluding votes ruled out by the rules listed above. These seats are distributed to candidates of the winning parties in the order in which they appear on the party lists.
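The Sainte-Laguë (highest-averages) method mentioned above can be sketched in a few lines of Python. This is an illustrative simplification with hypothetical party names and vote totals; the real allocation also interacts with direct mandates, overhang seats, and leveling seats.

```python
# Minimal sketch of Sainte-Laguë seat allocation with a 5% threshold.
# Party names and vote totals below are hypothetical.

def sainte_lague(votes, seats, threshold=0.05):
    total = sum(votes.values())
    # Parties under the threshold receive no list seats.
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    alloc = {p: 0 for p in eligible}
    for _ in range(seats):
        # Each seat goes to the party with the highest quotient
        # v / (2*s + 1), where s is the party's current seat count.
        winner = max(eligible, key=lambda p: eligible[p] / (2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

votes = {"A": 480_000, "B": 360_000, "C": 120_000, "D": 40_000}  # D is under 5%
print(sainte_lague(votes, 10))  # {'A': 5, 'B': 4, 'C': 1}
```

Note that party D, with 4% of the vote, is excluded entirely; its votes do not enter the divisor quotients of the remaining parties.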
With about 70% of its members elected by direct mandate, North Rhine-Westphalia has the highest proportion of directly elected members of any state parliament in Germany (most of the others, like the Federal Parliament, have only around 50% of their members elected directly). This means that a party often wins more district seats than it is entitled to based on party-list votes, which results in overhang seats. In this case, the other parties obtain leveling seats in order to restore a proportional allocation; the size of parliament, therefore, is not fixed, but rather expands with the number of overhang and leveling seats. In theory, several parties can have overhang seats at the same time, though this has not yet occurred. Of course, this scheme for adding seats can lead to an expansion of parliament to a size larger than is necessary to produce proportional representation.
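One way to picture the interplay of overhang and leveling seats described above is to grow the chamber from its minimum size until every party's proportional allocation covers the direct mandates it won. The sketch below is a hedged simplification under assumed numbers (the vote totals and direct-mandate counts are hypothetical, and the actual NRW procedure differs in detail):

```python
# Illustrative model of overhang and leveling seats: the chamber grows
# one seat at a time until each party's proportional share is at least
# as large as its number of directly won district seats.
# All figures below are hypothetical.

def sainte_lague(votes, seats):
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (2 * alloc[p] + 1))
        alloc[winner] += 1
    return alloc

def level_seats(votes, direct, minimum):
    size = minimum
    while True:
        alloc = sainte_lague(votes, size)
        # Done once no party holds more direct mandates than seats allocated.
        if all(alloc[p] >= direct.get(p, 0) for p in votes):
            return size, alloc
        size += 1  # add a leveling seat and reallocate

votes = {"A": 450_000, "B": 350_000, "C": 200_000}
direct = {"A": 90, "B": 35, "C": 3}  # A won more districts than its share
size, alloc = level_seats(votes, direct, minimum=181)
print(size, alloc)
```

With these hypothetical numbers the chamber grows from 181 to 200 seats before party A's 90 direct mandates fit inside its proportional share.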
From 1985 until 2012 (with the exception of the 2010 elections), every parliamentary election had overhang seats so that parliament routinely had more representatives than the minimum number necessary.
The left column of the ballot is designated for the first vote, which is for a direct candidate, and the right column is for the second vote, which is for a party list. The order of the parties depends first on the number of votes achieved by each party in the last state election. These are followed by parties running for the first time, listed in the order that they registered with the state electoral commission.
Replacing individual members
Through resignation, loss of eligibility, or death, outgoing members of parliament will be replaced, regardless of whether they were elected by direct mandate or through the lists, by the next person on the party list who has not yet taken office (for instance, if ten of eleven people on a party list are sent to parliament, but one of those ten resigns, then the eleventh person who did not get elected will take his place). For members who were elected directly and do not belong to a party list, a special election is held.
For the loss of a seat as a result of the banning of a party, it is necessary to distinguish between representatives who were elected directly from an electoral district, versus those who were elected from the lists. In the case of a direct mandate, a new election takes place in which the individual who lost his seat is not eligible to run. Regarding those elected from lists, the representative in question will only be replaced if they were elected as part of an unconstitutional party.
The parliament elected in 1947 only had a term of three years. The constitution of 1950 then established a four-year term for members of parliament, which was extended to five years in 1969. The term for each parliament begins at its first session, and a regular parliamentary election must take place within the last three months of the term. Each new parliament convenes for the first time within 20 days of the election, but not before the end of the term of the outgoing parliament. Parliament can be dissolved by a majority vote of its members, and this occurred for the first time on 14 March 2012. The state government has never dissolved parliament, since before this could happen, the electorate would have to approve through referendum a bill that the state government had proposed and that the parliament had already rejected. In all cases, new elections must take place within 60 days of the dissolution of parliament.
Changes to electoral law
After the election in 2005, parliament shrank from 201 regular members to 181, after the electoral districts were reduced from 151 to 128, and the list-elected members were increased from 50 to 53. (Of course, due to overhang and leveling seats, parliament still has over 200 members.)
Until 2005, the voting system in North Rhine-Westphalia was quite distinct from both the federal system and those found in the other states of Germany. While federal elections had already instituted the two vote system discussed above, North Rhine-Westphalia voters only had one vote to cast for the candidate of their choice in their electoral district. These votes then were also counted for the list of the candidate's political party and were used to divide the seats not apportioned to particular electoral districts. This disadvantaged certain parties, such as the Left Party (with candidates only in 116 districts) and the Ecological Democratic Party (only in 78 districts), since they could not field candidates in every district, and thus did not have the same number of potential voters for their lists. The introduction of the second vote in May 2010 changed all that.
In the 2012 state election, which produced the 16th Parliament of North Rhine-Westphalia, parliament was again made up of five parties. However, the Left Party fell below the 5% threshold and was forced to give up its seats in parliament, while the Pirate Party, with 7.8% of the popular vote, captured 20 seats. The SPD won 99 seats, while the CDU managed to pick up 67. This marked the first time in 12 years that the SPD won the largest percentage of votes, and it marked a transition of its minority coalition with the Greens to one with a legislative majority. The results were as follows:
- Social Democratic Party of Germany (Sozialdemokratische Partei Deutschlands, SPD): 99 seats
- Christian Democratic Union (Christlich Demokratische Union Deutschlands, CDU): 67 seats
- Alliance '90/The Greens (Bündnis 90/Die Grünen): 29 seats
- Free Democratic Party (Freie Demokratische Partei, FDP): 22 seats
- Pirate Party Germany: 20 seats

Totals and voter turnout: 7,901,922 votes cast; turnout 59.6% (+0.3%); 237 seats (+56)

Source: Die Landeswahlleiterin des Landes Nordrhein-Westfalen
Parliament before 2012
The first parliament of North Rhine-Westphalia was actually appointed during the British occupation following WWII, and was not replaced by a democratically elected body until 1947. Until 2005, the state was a stronghold of the SPD and social democracy, with each Minister-President between 1966 and 2005 coming from that party. Under the leadership of Karl Arnold, the CDU led the government from 1947 until 1956 (the longest period that the CDU has been in power in the state). They again held the position of Minister-President from 1958 until 1966 under Franz Meyers as coalition leader, and during the period of SPD rule from 1966 until 2005, they were the largest party in parliament during two election periods. They could not, however, organize a coalition either time. The 2005 state parliament elections led, for the first time in decades, to their return to power over the SPD, who nevertheless maintained their domination in the Ruhr. The CDU suffered heavy losses in the elections of 2010, but remained a strong force in parliament. Because they could not form a majority with the FDP, and after the SPD rejected offers of a grand coalition under a CDU Minister-President, the SPD and the Greens formed a minority government under Minister-President Hannelore Kraft (SPD), who was elected to the position with support from the Left.
Proportion of women in parliament
In the most recent legislative period (that of the 16th Parliament of the State of North Rhine-Westphalia), the proportion of female deputies in parliament was nearly 30%. After falling during recent election periods, this percentage increased slightly over the 15th Parliament, which was 27.07% female, but was lower than both the 14th Parliament (31.02%) and the 13th (32.47%).
The Greens have the highest percentage of women in parliament, at 51.7%, which is well above the SPD, which is 33.3% female, the CDU, at 22.4%, and the FDP, at 18.2%. At the bottom of the list is the Pirate Party, whose representatives are only 15% female.
The 16th Parliament is once again led by a woman, namely the newly elected President of Parliament Carina Gödecke, who replaced Eckhard Uhlenberg.
As in previous election periods, the state party leaders are entirely male. For two parties (the Pirate Party and the Greens), the management of the parties' parliamentary groups is in the hands of women.
The following table compares the percentage of women in parliament from each party in the current parliament term and the previous one. Figures from the previous term are denoted by parentheses:
- CDU: 67 (67) seats, 15 (10) women, 22.4% (14.93%)
- FDP: 22 (13) seats, 4 (2) women, 18.2% (15.38%)
- SPD: 99 (67) seats, 33 (19) women, 33.33% (28.36%)
- The Greens: 29 (23) seats, 15 (12) women, 51.7% (52.17%)
- Parliament total: 237 (181) seats, 70 (49) women, 29.54% (27.07%)
(Figures from the website of the State Parliament of North Rhine-Westphalia.)
What Can ESL/EFL Teachers Do With COVID-19?
Corsica S. L. Kong is currently a lecturer at the Department of Language Studies of King Mongkut's University of Technology Thonburi, Thailand. She teaches undergraduate and postgraduate students and runs short English courses for adult learners. She is interested in lesson planning and teacher development. Email: firstname.lastname@example.org
Feeling bored at home because of the quarantine or total lockdown of your city? Or being forced to work from home due to the temporary closure of your workplace? No matter where you are now, we may all be experiencing the same situation – staying at home and practicing social distancing – and probably for the same reason: COVID-19.
While the whole world is worrying about the novel coronavirus and seeing it as a global disaster, some people are trying to find ways to live their lives differently (and perhaps positively) despite the risks they face. Medical professionals and researchers are, of course, looking for a cure for this highly infectious disease. Government officials may be trying hard to figure out what to do about the financial crisis and economic recovery. Ordinary people are mostly thinking about what to eat and what to do every day when there is nowhere to dine out or shop. What about us ESL/EFL teachers? Is there anything we can do with COVID-19?
Where there is risk, there is opportunity. There are definitely some things we can do during this crisis. Since teachers teach, ESL teachers may dig out new materials from the current hot topic and incorporate them into their lessons.
Among the many aspects of the English language, vocabulary is seldom the first one ESL/EFL teachers would think of teaching, and teachers rarely spend a whole lesson solely on vocabulary. Nevertheless, it plays a vital role in helping English learners develop the four basic skills, i.e. listening, reading, speaking, and writing, and teachers more often than not associate vocabulary teaching with one of those skills in their lesson plans (Lee & Muncie 2006, Heilman et al. 2008, Lee 2009, Nam 2010). While learning technical vocabulary effectively may help students express relevant content knowledge (Lee 2009), a science-related subject, such as viruses or diseases, may cause anxiety in learners, especially those less proficient in English (Ardasheva et al. 2018). As such, we may not want to touch on difficult technical terminology when using COVID-19 as our lesson topic, nor do we need to explain to our students what a coronavirus is or describe its structure and features (let the science teachers do that). However, given that news feeds keep flowing in every day, many of us have already inadvertently picked up words or phrases that used to be unfamiliar or infrequently used but have gradually become rather common (Category 1). Those 'new' words may therefore be of interest to teachers of upper intermediate or even advanced students. For intermediate or lower intermediate levels, topics related to sicknesses or symptoms may be more worth teaching (Category 2). For those teaching young children or elementary levels, body parts might work better (Category 3), considering that over the past few months people have been emphasizing washing hands and covering the nose and mouth when sneezing.
With this target vocabulary, English teachers could look for web texts related to COVID-19, which are not only authentic but also easily accessible at the moment, and design various vocabulary exercises to support their receptive or productive skill lessons (Lee 2009, Nam 2010).
Category 1: coronavirus, pandemic, epidemic, infectious, communicable, contagious, incubation, pneumonia, quarantine, lockdown, curfew
Category 2: runny nose, nasal congestion, sneeze, (dry) cough, sore throat, diarrhea, fever, fibrosis
Category 3: nose, mouth, elbow, hand, head, lung
During this coronavirus outbreak, people have been asked to follow a number of rules and instructions in order to show social responsibility and help contain the infectious disease. More often than not, imperative sentences are used in the plethora of materials educating people about what to do, such as leaflets, pamphlets, brochures, health organization websites, news articles, notices and announcements. For instance, there are ten common practices people can follow to fight against COVID-19.
Things you can do to help contain COVID-19
When teaching imperatives, setting up a context and using aids that reflect real life are helpful and effective (McEldowney 1975). There are a myriad of teaching aids teachers can consider, from maps and diagrams (McEldowney 1975) to traffic sign pictures (Ismail Latif 2019) and computer games (Aster & Narius 2013). To add more fun to the lesson, act-out activities such as a guessing/miming game may be included alongside those teaching materials (Hertia & Tiarina 2014). With the topic of COVID-19, teachers can on the one hand emphasize the important protection measures and teach students to practice them thoroughly; on the other hand, the structure and practical use of imperative sentences can be made the main focus of the lesson. Once a context is set up, with the assistance of appropriate teaching aids and skills, teachers can design the framework of development for their lesson on imperatives (McEldowney 1975).
In order to make the classroom more interactive for a serious topic like COVID-19, the following activities may be helpful.
Suggested activities for teaching imperatives with the topic of COVID-19
Songs as a teaching material
Songs have always been a very useful pedagogical tool for ESL/EFL teachers because they have been shown to bring plenty of benefits to English learners. For this reason, the advantages and effectiveness of using songs in teaching both the receptive and productive skills of English have been widely studied. Schoepp (2001) summarized three theoretical reasons for using songs, i.e. affective, cognitive and linguistic. Songs can expand vocabulary knowledge in pre-school children (Coyle & Gómez Gracia 2014) and improve vocabulary competence in upper secondary school students (Abidin et al. 2011). Using songs can also enhance vocabulary acquisition, literacy development, and other skills like listening and pronunciation in young learners (Paquette & Rieg 2008, Millington 2011, Abdul Razak & Md Yunus 2016). With the help of various types of activities, songs can work well in the adult ESL classroom too (Lems 2001). Despite all these benefits, teachers may sometimes find it difficult to select appropriate songs for their lessons, and hence in-service training has been suggested (Şevik 2011). In the current digital world, however, teachers can easily search online for information or strategies to help with lesson design if they want to use songs and music as teaching material, and there are quite a number of new ideas for English teachers suggested in Lems (2018) which are definitely worth a try.
Looking at the present situation, many people are actually given an opportunity to show their talents during this difficult time, especially as songwriters and musicians. From two young children in Hong Kong to a musical band in Mexico and even health authorities in Vietnam, quite a number of COVID-19-inspired songs have been produced in order to help educate people in different places (https://theculturetrip.com/asia/vietnam/articles/covid-19-songs-go-viral-to-educate-people-to-fight-the-outbreak/). Not only are new songs being written, many well-known songs have also been adapted and re-written with new lyrics teaching people how to help contain the coronavirus (https://scroll.in/video/957411/stayin-inside-i-gotta-wash-my-hands-the-bee-gees-and-the-beatles-chartbusters-remade). ESL/EFL teachers can definitely make good use of these new or adapted songs to teach our students. Out of the many masterpieces online, the following is chosen as an example:
Song: Do Re Mi – Covid 19 version (https://youtu.be/MMBh-eo3tvE)
What can be taught is…
Let's start at the very beginning
A sore throat, a cough in Wuhan
And in no time at all, there were 1, 2, 3
And one went on a plane - took it overseas
And that’s how pandemics get started, you see
Woe is me
Now we’ve got Covid-19
Do not fear - but please stay here
Stay at home now, everyone
We must wash and clean things well
Cars? No long trips just for fun!
Don’t let Covid virus spread
Isolate yourself at home
See your friends online instead
That’s the healthy way to go oh oh oh
Do not fear - but just stay here
Time to all self-isolate
Wash your hands, use lots of soap
Don’t go further than your gate!
Social life must stay online
Keep 2 metres clear of me,
Watch TV, drink lots of wine
That will kill Covid-19!
Cough in your elbow, wash your hands with soap!
Now children, staying at home -and-so on are things we do to stop the spread of Covid-19
Once you have this in your head
You can do a million different things at home to stay sane,
Sleep, eat, whinge, tweet, snooze, blob, think
Loaf, mooch, doze, smooch, binge watch, drink
But staying inside is so boring!
So we think about why – remember why we’re doing it – like this:
When you know the reason why,
Kill off Covid – stay inside!
Exercise close to your home
Only shop for what you need
Keep your bubble tightly closed
And we’ll beat this bug with speed!
Social life has been postponed
And you’re bored out of your mind
Suck it up and stay at home
And we’ll leave this bug behind!
Cough in your elbow, wash your hands,
Keep two metres away from me
Yes please, I’m germ free
And that’s how I’d like to be!
Keep away, please from me
I will stay Covid free!
When you know the things to do, germs will stay away from you!
Stay inside your bubble now
Do not spread those germs around
Yes, you might be going mad
And be desperate to get out!
It’s a nasty world out there
Keep the social distance rules
Everything you touch – beware
You could spread – more – germs, you (fools)
Flatten the curve – Covid 19!
You have got the power to flatten the curve through
The things you choose to do – it’s true!
1. Vocabulary
– sore throat, cough (symptoms of a sickness)
– pandemics, coronavirus
– elbow, hands (body parts)
– spread, loaf (words that can serve as both a noun and a verb)
– sleep, eat, whinge, tweet, …etc. (things that you may do in your daily life/informal words/ emergence of new words, e.g. to tweet (social media), to binge watch)
2. Idioms and expressions
– in no time (at all)
– Woe is me
– keep clear of/keep away from
– out of your mind
3. Grammar
– to let (sth/sb) + bare infinitive
– to go (when not referring to 'moving from one place to another')
– lots of
– further (compare with farther as a comparative of ‘far’)
– to stay (when not referring to ‘not leaving’)
– why, what, when and how (when not used as a question word)
– kill off (phrasal verb)
4. Prefix and suffix
– ‘-en’ to form verbs
5. Slang – suck it up
6. Modal verb – might
The original song "Do-Re-Mi" from The Sound of Music is itself already very good for teaching English. This COVID-19 version is likewise worth using if teachers want to combine the virus topic with English. The lyrics contain a lot of useful resources (e.g. idioms, grammar), from which teachers can choose one or two to focus on. However, some may be rather advanced (e.g. prefixes and suffixes) for beginner or lower intermediate English learners, so ESL/EFL teachers need to consider their learners' level and the lesson focus before choosing a song and language topic. For pre-school children or young learners, teachers may wish to go for a children's song like "London Bridge is Falling Down", which has also been adapted for COVID-19.
COVID-19 version for “London Bridge is falling down”
Many ESL/EFL teachers would like to use a warmer or a lead-in session to start their lesson. The following activities or discussion topics may provide some insights for them when teaching this coronavirus topic.
Ask students to do small group discussion with questions like ‘What would you do if you were put in a 14-day quarantine?’ or ‘What would you prepare if you were put in a 14-day quarantine?’.
Prepare a set of pictures showing different items and tell each student that they are only allowed to choose three things (depending on the number of cards and the class size) for quarantine. Ask students to share why those items are selected. Where necessary, prepare several sets of pictures with different categories (e.g. essentials like towel, shower gel, and toilet paper, or food like cup noodles, chocolate bars, and chips) for use.
Prepare a set of pictures of different items that students may need for quarantine and give each student three (depending on the number of cards and the class size). Ask them to mingle and trade their ‘unwanted’ items with the others and see if they could get something they ‘want’ for quarantine.
Currently teachers can easily obtain a myriad of resources like articles, podcasts or videos talking about COVID-19. All those could be used as teaching materials to teach students the receptive skills, i.e. reading and listening. As for the productive parts, speaking activities with relevant discussion topics can be conducted, allowing students to express their views and opinions. For example, a debate like ‘Should the city be totally locked down/a 24-hour curfew be imposed in order to contain COVID-19?’ or ‘Should the government ban all traveling activities during the outbreak of COVID-19?’ may be appropriate for advanced level of English learners. If writing is planned as the production activity instead, an exercise of re-writing the lyrics using simple songs (e.g. birthday song) may work. Surely it is understandable that not many people are talented in music and rhyming and thus writing song lyrics could be rather difficult and challenging. Here the following writing activity may be considered.
There are a lot of medical experts and health workers, such as doctors, nurses and many other frontliners, who have been making countless sacrifices to help us get through this crisis. To show our support for these great people, we certainly should respond to their request "We stay at work for you. You stay at home for us." What's more, we could also show our gratitude and wholeheartedly thank them for what they have been doing for us. It may therefore be valuable to turn this into a production activity in which teachers ask their students to write something to thank the medical staff. It could be a letter, an email, or even just a note, depending on what structure or format is the focus. After the activity, teachers may also collect all the students' work and send it to the health workers as a gift. That would definitely mean a lot to them.
The future, which may be a devastating one, is unknown, but the present is still here for us. We may not be able to take control of many things during this COVID-19 outbreak, but that does not mean there is nothing we can do. As long as we understand our role in society, we can still seize this very moment to teach, to learn, to develop ourselves, and to help others, so as to make our world a better place.
I am greatly indebted to my teacher, Professor Paul P. H. But, for his inspiration, constant encouragement and adapted song lyrics of “London Bridge is Falling Down”. I would also like to thank Dr Stephen Louw, the Lead Trainer of Chichester College TEFL Course in Bangkok, for his advice, guidance and support. Without their kind help, this article would not have been written and published.
Abdul Razak, N.A.N. & Md Yunus, M. (2016) Using action songs in teaching action words to young ESL learners. International Journal of Language Education and Applied Linguistics (IJLEAL). Vol. 4: 15–24.
Abidin, M.J.Z., Pour-Mohammadi, M., Singh, K.K.B., Azman, R. & Souriyavongsa, T. (2011) The effectiveness of using songs in YouTube to improve vocabulary competence among upper secondary school studies. TPLS Theory and Practice in Language Studies. Vol. 1(11): 1488–1496.
Ardasheva, Y., Carbonneau, K.J., Roo, A.K. & Wang, Z. (2018) Relationships among prior learning, anxiety, self-efficacy, and science vocabulary learning of middle school students with varied English language proficiency. Learning and Individual Differences. Vol. 61 (2018): 21–30.
Aster, A.A. & Narius, D. (2013) Using campfire legend PC game as a media in teaching imperative sentence for junior high school students. Journal of English Language Teaching. Vol. 1 No. 2, Maret 2013, Serie F: 480–489.
Coyle, Y. & Gómez Gracia, R. (2014) Using songs to enhance L2 vocabulary acquisition in preschool children. ELT Journal. Vol. 68/3 July 2014: 276–285.
Heilman, M., Zhao, L., Pino, J. & Eskenazi, M. (2008) Retrieval of reading materials for vocabulary and reading practice. Proceedings of the Third ACL Workshop on Innovative Use of NLP for Building Educational Applications. Pages 80–88.
Hertia, A.P. & Tiarina, Y. (2014) Teaching imperative sentence through “act out (a guessing game with mime) activity” in procedure text at junior high school. JELT. Vol. 2 No.2, Serie A. March 2014: 8–15.
Ismail Latif, N. (2019) The use of traffic sign pictures to improve the students’ ability in constructing imperative sentence. Inspiring: English Education Journal. Vol. 2 No. 2 September 2019: 111–119.
Lee, S. H. & Muncie, J. (2006) From receptive to productive: improving ESL learners’ use of vocabulary in a postreading composition task. TESOL Quarterly. Vol. 40, No. 2, June 2006: 295–320.
Lee, S.H. (2009) Vocabulary and content learning in Grade 9 earth science: effects of vocabulary preteaching, rational cloze task, and reading comprehension task. The CATESOL Journal. Vol. 21.1, 2009/2010: 75–102.
Lems, K. (2001) Using music in the adult ESL classroom. ERIC Digest (ED459634 2001-12-00) (https://eric.ed.gov/?id=ED459634)
Lems, K. (2018) New ideas for teaching English using songs and music. English Teaching Forum. Vol. 56 No.1: 14–21.
McEldowney, P.L. (1975) Teaching imperatives in context. TESOL Quarterly. Vol. 9, No. 2: 137–147.
Millington, N.T. (2011) Using songs effectively to teach English to young learners. Language Education in Asia. Vol. 2, Issue 1: 134–141.
Nam, J. (2010) Linking research and practice: effective strategies for teaching vocabulary in the ESL classroom. TESL Canada Journal. Vol. 28, No. 1 (Winter): 127–135.
Paquette, K.R. & Rieg, S.A. (2008) Using music to support the literacy development of young English language learners. Early Childhood Education Journal. Vol. 36: 227–232.
Schoepp, K. (2001) Reasons for using songs in the ESL/EFL classroom. The Internet TESL Journal. Vol. 7(2): 1–4.
Şevik, M. (2011) Teacher views about using songs in teaching English to young learners. Educational Research and Review. Vol. 6(21): 1027–1035.
Design Thinking in the workplace: How do Design Thinking, lean, and agile work together? To accomplish this, lean thinking changes the focus of management from optimizing separate technologies, assets, and vertical departments to optimizing the flow of products … More widely referred to as “lean,” the lean process has principles that focus on improving products and services based on what customers want and value. Lean Design is introduced as a way for reducing waste, unnecessary uncertainty, and improving value generation in the design phase. More than just a strategy for developing a single product, it is what enables you to create a sustainable system for consistently delivering great products and profitable value streams, leveraging your entire enterprise. Since some lean tools are used in the practice of design for lean manufacturing, it borrows the first word in its name from lean manufacturing as exemplified by the Toyota Production System. Managing the knowledge value stream, systematic problem solving with analysis of the trade-offs between various design options, and solutions generated from ideas filtered by systematic innovation methods are viewed as methods within the lean design process. Lean UX is focused on the experience under design and is less focused on deliverables than traditional UX. Initial studies of the Japanese approach to design for lean manufacturing noted four principles; leadership of projects by a shusa (or project boss), tightly knit teams, communication on all of the difficult design trade-offs, and simultaneous development between engineering and manufacturing. Lean started out as a response to scientific management practices in manufacturing. It centers around functions and modules within the system. By nature, the design process is complex; it often involves thousands of decisions, sometimes over a period of many years, with numerous interdependencies, and under a highly uncertain environment. if (theYear < 1900)
Lean project delivery is a project management process that strips away unnecessary effort, time and cost in the planning, design and construction of capital projects to deliver what the Owner values. 3. Ballard, G., Tommelein, I., Koskela, L., & Howell, G. (2002). Using the principles of lean-construction, the desired outcome would be to maximize the value and output of a project while minimizing wasteful aspects and time delay. now = new Date
Each of them help to take a large proportion of … Process improvement does not always result in less time (on its own); the intention of process improvement is to create a better product by improving that process. Design for lean manufacturing also relates to system thinking as it considers all aspects (or the full circle) and takes the system conditions into consideration when designing products and services, delivering them according to customer needs. The Lean philosophy, with its focus on minimizing waste and maximizing value, is therefore recommended to be applied as early as possible in a building process. At the time of the study, the Japanese automakers were outperforming the American counterparts in speed, resources used in design, and design quality. Organisations sought efficiency through process, rules, and procedures and management was mostly about control. Sub-teams achieve flow through rapid learning cycles which quickly move from planning, designing, building, and testing. Our Lean Design® training consistently drives product design engineers, manufacturing, costing, and the management team to break through greater levels of innovation in their new products or redesigns. 5. The Lean Design process involves all three perspectives, even though each perception is orthogonal to one another (e.g., having just the transformation view does not guarantee that the flow and value view are taken into account and vice versa). It's also the sum of the lifecycle processes needed to design, manufacture and use it. Nawras Skhmot, is a norwegian civil engineer and entrepreneur with an educational background from The Norwegian University of Science and Technology (NTNU) and UC Berkeley. In order for a mindset to become effective it needs to be adopted by everyone in your company. “Since design thinking is a non-linear process, its elements could be practically integrated with the Lean Six Sigma methodology. 
The definition of Lean tends to vary slightly depending upon the source, nevertheless the underlying meaning is the same. Design for lean manufacturing must be sustainable and holistic unlike other lean manufacturing or Six Sigma approaches that either tackle only a part of the problem or tackle the problem for a short period of time. A design approach to lean compels us to look inward to what we do as design professionals and examine our processes and discover the waste within them. As the practice of design for lean manufacturing has expanded in its depth and breadth of application, additional principles have been integrated into the method. Define Value. 4. The design for lean manufacturing equation is design for lean manufacturing success = strategic values minus the drivers of design and process wastes. Introducing the World’s First Official Lean Design® Certification Course. Tzortzopoulos, P., & Formoso, C. T. (1999). The nature of Agile development is to work in rapid, iterative cycles and Lean UX mimics these cycles to ensure that data generated can be used in each iteration. Lean Design considers three perspectives to describe the design process (1) Transformation (transformation of inputs into outputs); (2) Flow (flow of material and information through time and space); and (3) Value (the generation of value for customers), rather than the more traditional view of the design process as only a transformation process 4. Instead of just using Lean to fix existing problems in the manufacturing or delivery process, 3P takes Lean upstream for integration into new product design from the start. The term describes methods of design in lean manufacturing companies as part of the study of Japanese industry by the Massachusetts Institute of Technology. 
It is in the design phase that the customerâs ideas and speculations are conceptualized into a physical model, and the customerâs needs and requirements are deï¬ned into procedures, drawings and technical speciï¬cations. Lean construction is a combination of operational research and practical development in design and construction with an adaption of lean manufacturing principles and practices to the end-to-end design and construction process. Architect Sam Spada, discusses the Lean Project Delivery method and Lean Design Behaviors. Architectural Engineering and Design Management, 7(2), 70-84. Not to be confused with "Lean Design" (copyrighted and patented by Munro & Associates, of Michigan), design for lean manufacturing builds on the set of principles that emerged from design for the customer value and design for manufacturability. A good design is one that simultaneously reduces waste and delivers value. Now we know what Design Thinking is, let’s consider how it fits into the overall product design process. âWork structuringâ is the practice of scheduling out work and is a part of designing a production system. To be successful, a corporate wide design for lean manufacturing implementation typically includes the following dimensions: When the dimensions are fully deployed in an organization, design for lean manufacturing enhances the performance levels with respect to design and innovation. These are called out as four value streams; customer, product design and test, production, and knowledge. Implementing value through lean design management. 6. The âvalue viewâ stresses the use of analysis of requirements and constraints to deliver what matters to the customer6. While that might sound simple enough, Lean UX has recently taken the design world by storm, with many proclaiming that it’s the epitome of design method evolution. Emmitt, S., Sander, D., & Christoffersen, A. K. (2004). 
Proceedings 7th Annual Conference of the International Group for Lean Construction (IGLC), (ss. This is the ancient distinction between planning and doing. Takt time planning then, is one method for work structuring around a set pace of work. Both Lean Six Sigma and Design Thinking have a laser-like focus on the Voice of the Customer,” writes Alvin Villegas , founder of Core Enabler Business Process Solutions. An organizational focus is required for the implementation of Lean Design ® principles, which includes efficient and sustainable design team. In traditional design approach, the product is first designed, then âthrown over the wallâ to someone else to decide how or if it can be built, operated, altered, etc. Journal of Construction Engineering and Management, 128(3), 248-256. Further study showed additional depth to the principles, citing 13 principles specific to the Toyota design for lean manufacturing methods in product and process development in the areas of process, skilled people, and tools and technology. Learn more about Lean here. theYear=theYear+1900
Most lean manufacturing tools can be directly used by a design for lean manufacturing team. Lean manufacturing relies on preventing interruptions in the production process and enabling a harmonized and integrated set of processes in which activities move in a constant stream. Lean design ® seeks to optimize the development process through rapid learning cycles to build and test multiple concepts early. A product team must be thinking of their new product, process or service in a holistic way. This system is flexible enough to represent all facets of a design’s life cycle – from concepts through to end of life. 335â344). Copyright © 2015-
Design for lean manufacturing is recognized as being everybody's job. Proceedings 14th Annual Conference of the International Group for Lean Construction (IGLC), (ss. Design is understood as encompassing not only product design, but also process design. Most of the savings will appear only in the sometimes-distant future. Guarujá, Brazil. Lean Construction Institute. In order for Lean process improvement … Design for lean manufacturing and development principles, The dimensions of lean in design and development, Shingo Prize for Excellence in Manufacturing, "Lean Architecture: The pursuit of Excellence in Project Delivery", https://en.wikipedia.org/w/index.php?title=Design_for_lean_manufacturing&oldid=983676190, Creative Commons Attribution-ShareAlike License. A product is more than the sum of its parts. From Lean UX: “This reframing … Lean Design® is both a tool and methodology, comprising of a concise set of coded symbols, each containing inside of them, a wide variety of almost unlimited chosen data (depending on what the client wishes to track). Simply put, lean is the practice of creating more value with fewer resources. Many authors have however stated that planning and control in design are substituted by chaos and improvising, causing poor communication, lack of adequate documentation, deï¬cient or missing input information, unbalanced resource allocation, lack of coordination between disciplines, and erratic decision-making1. The goal of Takt time planning is to create a reliable plan, with the input of the entire team, which balances workflows for specific phases of work. Ballard, G., & Koskela, L. (1998). According to Ballard & Zabelle4, Designing produces the recipe and Building prepares the meal. 
Most of the costs and much of the quality in a construction project are locked in long before production launch, so the design process is crucial not only to "do things right" but also, more importantly, to "do the right things". According to Tzortzopoulos and Formoso2, the traditional design process fails to minimize the effects of complexity and uncertainty, to ensure that the information available to complete design tasks is sufficient, and to reduce inconsistencies within construction documents. Lean design deals with a subset of the methods and tools of lean product development, targeting the conceptual, layout, and detail design phases. According to Emmitt et al.3, moving lean thinking upstream should create significant potential to deliver value throughout the whole process. It relies on the definition and optimization of value coupled with the prevention of wastes before they enter the system. Unless a process has gone through lean multiple times, it contains some element of waste; lean methodology has variously been labeled a process improvement toolkit, a philosophy, and a mindset.

Design for lean manufacturing helps a team "knit together" existing tools. The method has been used in architecture, healthcare, product development, process design, and information technology systems, and even to create lean business models. "Takt time" is a term used in manufacturing to describe pacing work to match the customer's demand rate.
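Takt time is conventionally computed as net available working time divided by customer demand. A minimal sketch of that calculation follows; the function name and the shift/demand numbers are illustrative, not taken from the source:

```python
def takt_time(available_minutes: float, units_demanded: int) -> float:
    """Takt time = net available working time / customer demand.

    The result is the pace (here, minutes per unit) at which work
    must be completed to exactly match the demand rate.
    """
    if units_demanded <= 0:
        raise ValueError("demand must be positive")
    return available_minutes / units_demanded

# Illustrative numbers only: one 8-hour shift minus two 15-minute
# breaks, with a hypothetical customer demand of 90 units per shift.
net_available = 8 * 60 - 2 * 15      # 450 minutes of working time
pace = takt_time(net_available, 90)  # 5.0 minutes per unit
print(pace)
```

In takt time planning, such a pace would then be used to balance workflows so that each phase of work hands off at the same rhythm.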
Lean startup and design thinking are two related frameworks that are popular among product developers and innovators. Lean Architecture is the ongoing process of rethinking and improving architectural methodology. Shingo assessments measure lean implementations in all parts of the organization, including the design methodology, and multiple viewpoints are considered in design for lean manufacturing. Based on the lean-agile practice of making the design process incremental, Lean UX is a design strategy focused on minimizing wasted time and effort during the design process. More generally, the lean manufacturing process is a method for creating a more effective business by eliminating wasteful practices and improving efficiency.
Lean thinking originated in the 1940s. Lean Design is an extension of Lean Thinking, which has been applied to the production phases of several industries, including manufacturing, construction, and healthcare. The term "design for lean manufacturing" was first coined by Womack, Jones, and Roos after studying the differences between conventional development at American automotive companies and lean methods at Japanese automobile producers. Lean design is the application of lean production principles, which promote the elimination of waste and non-value-adding activities in processes, to engineering and design. In the United States, the predominant view is that lean is a system of tools and techniques for reducing waste and adding value in every process.

Lean Design considers three perspectives to describe the design process: conversion (transformation), flow, and value generation. Design can involve a large number of participants and decision makers, trade-offs between multiple competing design criteria with inadequate information, and intense budget and schedule constraints1. How to promote this conversation (iteration), and how to differentiate between positive (value-generating) and negative (wasteful) iteration, are among the central principles of Lean Design. It requires a greater level of collaboration within the entire team. The Shingo Prize for Excellence in Manufacturing is given annually for operational excellence in North America.
Table 1 below compares the different views. In the optimum flow, work progresses across a process smoothly and swiftly. At its core, lean is a popular approach to streamlining both manufacturing and transactional processes by eliminating waste and optimizing flow while continuing to deliver value to customers. The principles of lean are therefore modified in Lean Design to accommodate the nature of the design process. While lean manufacturing focuses on optimizing the production stream and removing wastes (commonly referred to as muda, mura, and muri) once the value stream has been created, Lean Design® (Munro & Associates) concerns itself with methods and techniques for creating a lean solution from the start, resulting in more value and fewer wastes across the value stream. A variety of existing methods and business tools can be used by organizations within the design for lean manufacturing methodology, although savings from applying lean to design are hard to predict. A big challenge is making sure everyone understands their lean design job descriptions and how each subsystem contributes to the higher-level system. Before you jump into the Lean UX process, remember what Gothelf says: Lean UX is a mindset.

A corporate-wide implementation typically includes dimensions such as product and process accountability throughout the value stream, systematic innovation and problem solving, stakeholder collaboration between functions, and team leadership by a chief engineer or entrepreneurial system designer.

The influence of the design phase on the outcome of any project, both technically and economically, is immense. Another concept central to lean is that of pull: you only start new work when there is demand for it.
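The pull principle above can be sketched as a kanban-style work-in-progress (WIP) limit, where new work is started only when downstream capacity signals demand. The `PullStation` class and its limit below are illustrative inventions for this sketch, not something described in the source:

```python
from collections import deque


class PullStation:
    """Minimal sketch of a pull system: work is started only when
    the station has free capacity (a kanban-style WIP limit)."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.in_progress = deque()

    def can_pull(self) -> bool:
        return len(self.in_progress) < self.wip_limit

    def pull(self, task: str) -> bool:
        # Start new work only when there is demand (a free slot);
        # otherwise the upstream step must wait -- nothing is pushed.
        if not self.can_pull():
            return False
        self.in_progress.append(task)
        return True

    def complete(self) -> str:
        # Finishing a task frees a slot, signaling upstream to pull more.
        return self.in_progress.popleft()


station = PullStation(wip_limit=2)
print(station.pull("task-1"))  # True
print(station.pull("task-2"))  # True
print(station.pull("task-3"))  # False: WIP limit reached, must wait
station.complete()             # frees a slot
print(station.pull("task-3"))  # True: demand signal allows new work
```

Contrast this with a push system, which would enqueue task-3 regardless of downstream capacity and let work pile up between steps.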
Hansen and Olsson5 state that lean thinking in design has at least two main objectives: to find the best design to meet the client's needs in order to support effectiveness, efficiency and user satisfaction; and to define systems, structures and materials to ensure effective, streamlined construction. When done correctly, lean can create large improvements in efficiency, cycle time, productivity, material costs, and scrap, leading to lower costs and improved competitiveness. The core objective is to obtain feedback as early as possible so that it can be used to make quick decisions.

Design for lean manufacturing is a process for applying lean concepts to the design phase of a system, such as a complex product or process. Lean Design aims to integrate the design of product and process, meaning that considering and deciding how to build and use something can be done at the same time as considering what to build7. Lean Design® (Munro & Associates) drives prevention of waste by adopting a systematic process to improve the design phase during development. Product design determines what is to be produced and used, while process design determines how to produce or use it; essentially, the methodology is to minimize the bad and maximize the good. Ballard & Zabelle4 compare designing to a good conversation, from which everyone leaves with a different and better understanding than anyone brought with them. Design for lean manufacturing is based on the premise that product and process design is an ongoing activity, not a one-time activity, and should therefore be viewed as a long-term strategy for an organization. Applying design for lean manufacturing does not make obsolete any existing product design tool, technique or method.
Lean is all about flow; it measures both the process of design and the design results, and it is most effective when practiced across the organization. Predicting "hidden cost" savings is extremely difficult and questionable given the time it would require. Learning cycles run by sub-teams working on sub-systems are tied together with integration events. In Japan, lean is considered a mindset and not a set of tools.

The "transformation view" has been the dominant view of production and is best described as "getting the task done"; several practices, such as the work breakdown structure, support this view (see Figure 1, Comparisons of Transformation, Flow and Value Generation Views). Lean manufacturing uses a pull system instead of a push system, a concept inspired by Japanese management theory from Toyota, where lean concepts are designed to improve and consistently drive more value to consumers. Lean already has a practice that approaches this ideal: the Production Preparation Process, or 3P. Conventional mass-production design focuses primarily on product functions and manufacturing costs; design for lean manufacturing, by contrast, systematically widens the design equation to include all factors that will determine a product's success across its entire value stream and life cycle, optimizing product value for the operational value stream. Unlike manufacturing, construction is a project-based production process.
Lean Design, as a verb, is a design process that focuses on continuously maximizing customer value while minimizing all activities and tasks that do not add value. The "flow view" emphasizes the interconnectivity of tasks with the aim of shortening lead times and eliminating waste, including reducing rework, using team-based approaches to avoid time-consuming iterations, and releasing information in small batches to allow rapid feedback from team members. The thinking used for creating the product must be lean from the start. Lean construction (LC) is a method of production aimed at reducing costs, materials, time and effort. Lean is a customer-centric methodology used to continuously improve any process through the elimination of waste in everything you do; it is based on the ideas of "continuous incremental improvement" and "respect for people." Lean design must optimize value and prevent waste both in the product and in the process used to create that product. Using design for lean manufacturing practices helps organizations move toward Shingo excellence. One goal is to reduce waste and maximize value; others include improving the quality of the design and reducing the time needed to achieve the final solution. The ultimate goal is to provide perfect value to the customer through a perfect value creation process that has zero waste.
Lean Design considers three perspectives to describe the design process: (1) transformation (the transformation of inputs into outputs); (2) flow (the flow of material and information through time and space); and (3) value (the generation of value for customers), rather than the more traditional view of the design process as only a transformation process4. A lean organization understands customer value and focuses its key processes on continuously increasing it. Inspired by Japanese management methods, and more specifically the Toyota Production System, the lean approach is intended to have the enterprise think first and foremost about maximizing the value that its products and services can bring to the client. While lean has already shown significant results in the manufacturing and construction industries, there is a clear difference between designing and building.
References

- Ballard, G., & Koskela, L. (1998). On the agenda of design management research. Proceedings 6th Annual Conference of the International Group for Lean Construction (IGLC), Guarujá, Brazil.
- Ballard, G., & Zabelle, T. (2000). Lean design: Process, tools, & techniques. Lean Construction Institute.
- Ballard, G., Tommelein, I., Koskela, L., & Howell, G. (2002). Lean construction tools and techniques. In R. Best & G. de Valence (Eds.), Design and Construction: Building in Value (pp. 227–255). Oxford: Butterworth-Heinemann.
- Emmitt, S., Sander, D., & Christoffersen, A. K. (2004). Proceedings 14th Annual Conference of the International Group for Lean Construction (IGLC) (pp. 335–344), Copenhagen, Denmark.
- Freire, J., & Alarcón, L. F. (2002). Achieving lean design process: Improvement methodology. Journal of Construction Engineering and Management, 128(3).
- Hansen, G. K., & Olsson, N. O. (2011). Layered project–layered process: Lean thinking and flexible solutions. Architectural Engineering and Design Management.
- Tzortzopoulos, P., & Formoso, C. T. (1999). Considerations on application of lean construction principles to design management. Proceedings 7th Annual Conference of the International Group for Lean Construction (IGLC), Berkeley, California.
- Discussion of dizziness
- Function of the normal ear
- Maintenance of balance
- Ear dizziness
- Symptoms of ear dizziness
- Central dizziness
- Visual dizziness
- Causes and symptoms of dizziness
- Circulation changes
- Atypical migraine or basilar migraine
- Benign Positional Paroxysmal Vertigo (BPPV)
- Imbalance related to aging
- Meniere’s disease and endolymphatic hydrops
- Treatment of Meniere’s disease and endolymphatic hydrops
- Metabolic disturbances
- Surgical treatment options for dizziness
- Nonsurgical dizziness treatments
Audiologists don't simply treat hearing loss; they also provide solutions for a number of hearing and balance disorders, from symptoms of dizziness to conditions like Meniere's disease. The following is an overview of several ways an issue with your auditory system can affect your balance.
Discussion of dizziness
Dizziness is a symptom, not a disease. It may be defined as a sensation of unsteadiness, imbalance or disorientation in relation to an individual's surroundings. The symptom of dizziness may vary widely from person to person and be caused by many different diseases or conditions. It varies from a mild unsteadiness to a severe whirling sensation known as vertigo. Because there is little representation of the balance system in the conscious mind, it is often difficult for the patient to describe their dizziness to the physician. In addition, because the symptom varies so widely from patient to patient and may be caused by many different diseases, the physician commonly requires testing to determine the cause of the dizziness. Dizziness may or may not be accompanied by a hearing impairment.
Function of the normal ear
The ear is divided into three parts: external ear, middle ear and inner ear.
The external ear structures gather sound and direct it toward the eardrum. The middle ear chamber consists of an eardrum and three small ear bones. These structures transmit sound vibrations to the inner ear fluid.
The inner ear chamber is encased in bone and filled with fluid. This fluid bathes the delicate nerve endings of the hearing and the balance mechanism.
Fluid waves in the hearing chamber (cochlea) stimulate the hearing nerve endings, which generate an electrical impulse. These impulses are transmitted to the brain for interpretation as sound. Movement of fluid in the balance chambers (vestibule and three semicircular canals) also stimulates nerve endings, resulting in electrical impulses to the brain, where they are interpreted as motion.
Maintenance of balance
The human balance system is made up of four parts: the eyes, the inner ear, the muscles and joints, and the central nervous system. The brain acts as a central computer, receiving information in the form of nerve impulses (messages) from its three input terminals: the eyes, the inner ear, and the muscles and joints of the body. There is a constant stream of impulses arriving at the brain from these input terminals. All three systems work independently, yet together they keep the body in balance.
The eyes receive visual clues from light receptors that give the brain information as to the position of the body relative to its surroundings. The receptors in the muscles and joints are called proprioceptors. The most important ones are in the head and neck (head position relative to the rest of the body) and the ankles and joints (body sway relative to the ground).
The inner ear balance mechanism has two main parts: three semicircular canals and the vestibule. Together they are called the vestibular labyrinth and are filled with fluid. When the head moves, fluid within the labyrinth moves and stimulates nerve endings that send impulses along the balance nerve to the brain. Those impulses are sent to the brain in equal amounts from both the right and left inner ear. Nerve impulses may be started by the semicircular canals when turning suddenly, or the impulses may come from the vestibule, which responds to changes of position, such as lying down, turning over or getting out of bed.
When the inner ear is not functioning correctly the brain receives nerve impulses that are no longer equal, causing it to perceive this information as distorted or off balance. The brain sends messages to the eyes, causing them to move back and forth, making the surroundings appear to spin. It is this eye movement (called nystagmus) that creates a sensation of things spinning.
Remember to think of the brain as a computer with three input terminals feeding it constant up-to-date information from the eye, inner ear and muscles and joints (proprioceptors). The brain itself is divided into several different parts. The most primitive area is known as the brainstem, and it is here that processing of the input from the three sensory terminals occurs. The brainstem is affected by two other parts of the brain, the cerebral cortex and the cerebellum.
The cerebral cortex is where past information and memories are stored. The cerebellum, on the other hand, provides automatic (involuntary) information from activities, which have been repeated often.
The brainstem receives all these nerve impulses: sensory from the eyes, inner ear, muscles and joints; regulatory from the cerebellum; and voluntary from the cerebral cortex. The information is then processed and fed back to the muscles of the body to help maintain a sense of balance.
Because the cortex, cerebellum and brainstem can eventually become used to (ignore) abnormal or unequal impulses from the inner ear, exercise may be helpful. Exercise often helps the brain habituate to the dizziness problem so that it does not respond in an abnormal way and the individual does not feel dizzy. An example of habituation is seen in ice skaters, who twirl around, stop suddenly, and apparently have no balance disturbance.
Ear dizziness

Ear dizziness, one of the most common types of dizziness, results from disturbances in the blood circulation or fluid pressure in the inner ear chambers, from direct pressure on the balance nerve, or from physiologic changes involving the balance nerve or balance mechanism. Inflammation or infection of the inner ear or balance nerve is also a major cause of ear dizziness.
Any disturbance in pressure, consistency or circulation of the inner ear fluids may result in acute, chronic or recurrent dizziness, with or without hearing loss and head noise. Likewise, any disturbance in the blood circulation to this area, or infection of the region, may result in similar symptoms. Dizziness may also be produced by overstimulation of the inner ear fluids, which may be encountered if you spin very fast and then stop suddenly.
Symptoms of ear dizziness
Any disturbance affecting the function of the inner ear or its central connections may result in dizziness, hearing loss or tinnitus (head noise). These symptoms may occur singly or in combination, depending upon which functions of the inner ear are disturbed.
Ear dizziness may appear as a whirling or spinning sensation (vertigo), unsteadiness or giddiness, and lightheadedness. It may be constant, but is more often intermittent, and is frequently aggravated by head motion or sudden positional changes. Nausea and vomiting may occur, but you should not lose consciousness as a result of inner ear dizziness.
Central dizziness

Central dizziness is usually an unsteadiness brought about by failure of the brain to correctly coordinate or interpret the nerve impulses it receives. An example of this is the "swimming feeling" or unsteadiness that may accompany emotional stress, tension states, and excessive alcohol intake. Circulatory inefficiency, tumors or injuries may produce this type of unsteadiness, with or without hearing impairment. A feeling of pressure or fullness in the head is common. Occasionally, true vertigo (spinning) may be caused by central problems.
Visual dizziness

Eye muscle imbalance or errors of refraction may produce unsteadiness. An example of this is the unsteadiness that may result when you attempt to walk while wearing glasses belonging to another individual.
Another example of visual dizziness is that occasionally produced when you are seated in a car looking out the side window at passing objects. The eyes respond by sending a rapid series of impulses to the brain indicating that the body is rotating. On the other hand, the ears and the muscle-joint systems send impulses to the brain indicating that the body is not rotating, only moving forward. The brain, receiving these conflicting impulses (from the eyes indicating rotation, from the ears and muscle-joint systems indicating forward motion), sends out equally confusing orders to various muscles and glands, which may result in sweating, nausea and vomiting. When you sit in the front seat looking forward, the eyes, ears and muscle-joint systems work more uniformly, making it less likely that you develop carsickness.
Causes and symptoms of dizziness
Dizziness may be caused by any disturbance in the inner ear, the balance nerve or its central connections. This can be due to a disturbance in circulation, fluid pressure or metabolism, infections, neuritis, drugs, injury or growths.
At times an extensive evaluation is required to determine the cause of dizziness. The tests necessary are determined at the time of examination and may include detailed hearing and balance tests, x-rays, and blood tests. A general physical examination and neurological tests may be advised.
The object of this evaluation is to be certain that there is no serious or life-threatening disease, and to pinpoint the location of the problem. This lays the groundwork for effective medical or surgical treatment.
Any interference with the circulation to the delicate inner ear structures or their central connections may result in dizziness and, at times, hearing loss and tinnitus. These circulatory changes may be the result of blood vessel spasm, partial or total occlusion (blockage), or rupture with hemorrhage.
Atypical migraine or basilar migraine
Inner ear dizziness due to blood vessel spasm is usually sudden in onset and intermittent in character. It may occur as an isolated event in the patient’s life or repeatedly in association with other symptoms. If it is recurrent, it is usually associated with migraine headache-type symptoms. Predisposing causes include fatigue and emotional stress. Certain drugs such as caffeine (coffee) and nicotine (cigarettes) tend to produce blood vessel spasm or constriction and should be avoided. Blood vessel spasm has occasionally been noted to begin after head injury; although the trauma may have caused no direct injury to the inner ear, the resulting spasm may nonetheless damage the ear.
As you get older, blood vessel walls tend to thicken due to an aging process known as arteriosclerosis. This thickening results in partial occlusion, with a gradual decrease of blood flow to the inner ear structures. The balance mechanism usually adjusts to this, but at times persistent unsteadiness develops. This may be aggravated by sudden position changes such as that encountered when you get up quickly or turn suddenly.
Complete occlusion of an inner ear blood vessel (thrombosis) results in acute dizziness often associated with nausea and vomiting. Symptoms may persist for several days, followed by a gradual decrease of dizziness over a period of weeks or months as the central nervous system and uninvolved ear compensate for the loss of the involved ear.
Occasionally, one of the small blood vessels of the balance mechanism ruptures. This may occur spontaneously, for no apparent reason, or it may be the result of high blood pressure or head injury. Symptoms are the same as those of occlusion.
Treatment of dizziness due to changes in circulation consists of anti-dizziness medications to suppress the symptoms. These medications also stimulate the circulation and enhance the effectiveness of the brain centers in controlling the symptoms. An individual with this type of dizziness should avoid drugs that constrict the blood vessels, such as caffeine (coffee) and nicotine (tobacco). Emotional stress, anxiety and excessive fatigue should be avoided as much as possible. In many patients, increased exercise aids in suppressing dizziness by stimulating the remaining vestibular function to become more effective.
Benign Paroxysmal Positional Vertigo (BPPV)
BPPV is a common form of balance disturbance due to circulatory changes or to loose calcium deposits (canaliths) in the inner ear. It is characterized by sudden, brief episodes of imbalance when moving or changing head position. Commonly it is noticed when lying down, arising, or turning over in bed. As its name suggests, this type of dizziness is benign, related to positional changes, and short-lived. The vertigo brought on by the movement rarely lasts more than a few minutes, is usually self-limited and responds well to treatment. However, it may recur in some patients. Treatment involves attempts to reposition the loose particles and keep the dizziness from occurring (the canalith repositioning procedure). If this is not successful, additional exercises may be recommended. Occasionally, postural dizziness may be permanent and surgery may be required.
Imbalance related to aging
Some individuals develop imbalance as a result of the aging process. In many cases this is due to circulatory changes in the very small blood vessels supplying the inner ear and balance nerve mechanism. Fortunately, these disturbances, although they may persist, rarely become worse.
Postural or positional vertigo (see above) is the most common balance disturbance of aging. This may develop in younger individuals as a result of head injuries or circulatory disturbances. Dizziness on change of head position is a distressing symptom, which is often helped by vestibular exercises.
Temporary unsteadiness upon arising from bed in the morning is not uncommon in older individuals. At times this feeling of imbalance may persist for an hour or two. Arising from bed slowly usually minimizes the disturbance. Unsteadiness when walking, particularly on stepping up or down or walking on uneven surfaces, develops in some individuals as they progress in age. Using a cane and learning to use the eyes to help the balance is often helpful.
Imbalance due to ear infection is usually insidious and mild in onset. Such imbalance may occur with or without hearing impairment. As the infection gets closer to the vital balance mechanism in the inner ear, the dizziness becomes more constant and severe in nature, and is often associated with nausea and vomiting.
Control of an ear infection is imperative in this type of dizziness in order to prevent spread of the infection directly into the balance center of the inner ear. Should this develop, serious complications including total loss of hearing in the involved ear may result. If the infection cannot be eliminated by medical treatment, surgery is indicated to remove the infection.
Neuritis is a physiological change that occurs in the nerve after injury by trauma, a virus, autoimmune disease or vascular compression. When this occurs, the balance function is impaired, resulting in a severe, and at times prolonged, episode of dizziness, often followed by unsteadiness or a sensation of motion lasting weeks to years. Fortunately, this balance disturbance usually subsides in time and does not recur in the majority of cases. It may, however, persist at a moderate to mild level for a long period. Medical treatment, usually consisting of dizziness-suppressing drugs, is helpful in controlling symptoms until the central nervous system can compensate for the injured nerve. On occasion, the central nervous system cannot compensate and surgery may be necessary.
Meniere’s disease and endolymphatic hydrops
Meniere’s disease is a common cause of repeated attacks of dizziness and is thought to be due to (in most cases) increased pressure of the inner ear fluids due to impaired metabolism of the inner ear. Fluids in the inner ear chamber are constantly being produced and absorbed by the circulatory system. Any disturbance of this delicate relationship results in overproduction or underabsorption of the fluid. This leads to an increase in the fluid pressure (hydrops) that may, in turn, produce dizziness that may or may not be associated with fluctuating hearing loss and tinnitus.
A thorough evaluation is necessary to determine the cause of Meniere’s disease, if possible. Circulatory, metabolic, toxic and allergic factors may play a part in any individual. Emotional stress, while making the disease worse, does not cause Meniere’s disease.
Meniere’s disease is usually characterized by attacks consisting of vertigo (spinning) that varies in duration from a few minutes to several hours. Hearing loss and head noise, usually accompanying the attacks, may occur suddenly. Violent spinning, whirling, and falling associated with nausea and vomiting are common symptoms. Sensations of pressure and fullness in the ear or head are usually present during the attacks. The individual may be very tired for several hours after the overt spinning stops.
Attacks of dizziness may recur at irregular intervals and the individual may be free of symptoms for years at a time, only to have them recur again. In between major attacks, the individual may have minor episodes occurring more frequently and consisting of unsteadiness lasting for a few seconds to minutes.
Occasionally hearing impairment, head noise, and ear pressure occur without dizziness. This type of Meniere’s disease is called cochlear hydrops. Similarly, episodic dizziness and ear pressure may occur without hearing loss or tinnitus, and this is called vestibular hydrops.
Endolymphatic hydrops is a term that describes increased fluid pressure in the inner ear. In this respect it is similar to, but not related to, glaucoma, which involves increased pressure of the eye fluids. A special clinical form of endolymphatic hydrops is called Meniere’s disease. All patients with Meniere’s disease have endolymphatic hydrops, but not all patients with hydrops have Meniere’s disease.
There may be many causes of endolymphatic hydrops. It occurs widely in people of European descent and rarely in people of Asian or African descent. It may be caused or aggravated by excessive salt intake or certain medications. The symptoms are highly variable; you may have one symptom or a combination of symptoms. Often there is a combination of hearing changes, disequilibrium, motion intolerance or short dizzy episodes. There may be tinnitus and/or a pressure feeling in the head or ears. The patient does not have the well-defined attacks of Meniere’s disease (fluctuating hearing loss, tinnitus and episodes of spinning lasting minutes to hours). Often the division between the two diagnoses may be blurred and difficult to separate, even for the patient. Endolymphatic hydrops may progress to Meniere’s disease in some patients.
The treatment of endolymphatic hydrops is similar to that for Meniere’s disease. Medications are first used. Diuretics (water pills) are almost always used. Their purpose is to decrease the fluid pressure in the inner ear. In addition to diuretics, other medications may be indicated, depending on the cause of symptoms in each patient’s case. If these fail, surgery is sometimes indicated. (See Surgery for vertigo elsewhere in this document).
Treatment of Meniere’s disease and endolymphatic hydrops
Treatment of cochlear and vestibular hydrops is the same as for classic Meniere’s disease. The treatment of Meniere’s disease may be medical or surgical, depending upon the patient’s stage of the disease, life circumstances and the condition of the ears. The purpose of the treatment is to prevent the hearing loss and stop the vertigo (spinning).
Treatment is aimed at improving the inner ear circulation and controlling the fluid pressure changes of the inner ear chambers.
Medical treatment of Meniere’s disease varies with the individual patient according to suspected cause and magnitude and frequency of symptoms. It is effective in decreasing the frequency and severity of attacks in 80% of patients. Treatment may consist of medication to decrease the inner ear fluid pressure or prevent inner ear allergic reactions. Various drugs are used as anti-dizziness medication.
Vasoconstricting substances have an opposite effect and, therefore, should be avoided. Such substances are caffeine (coffee) and nicotine (cigarettes).
Diuretics (water pills) may be prescribed to decrease the inner ear fluid pressure.
Meniere’s disease may be caused or aggravated by metabolic or allergic disorders. Special diets or drug therapy are indicated at times to control these problems.
On rare occasions, gentamicin injections may be used to selectively destroy balance function. This treatment is reserved for patients with Meniere’s disease in their only hearing ear or with Meniere’s disease in both ears.
Occasionally metabolic disturbances produce dizziness with or without associated hearing loss by interfering with the function of the inner ear or the central nervous system. Occasionally hearing loss may occur without the presence of dizziness.
A change of thyroid function or abnormalities in the blood sugar are the most common metabolic disturbances resulting in dizziness. Rarely, fat metabolism abnormalities may also cause problems resulting in hearing loss and/or dizziness. Thyroid dysfunction is diagnosed by blood tests and treatment consists of taking a thyroid hormone. Abnormalities in the blood sugar are diagnosed, again by blood studies and treatment usually consists of diet control and/or drug therapy. Fat metabolism problems are diagnosed by studies of the fatty acids and cholesterol in the blood. Treatment of these may consist of diet control with or without drug therapy.
Rarely, allergies may cause dizziness and/or vertigo. Allergies are usually diagnosed by obtaining a careful history and occasionally performing a series of skin tests with inhalants and food or blood tests. Treatment usually consists of elimination of the offending agents when possible, or, if this is not possible, by allergy shots to stimulate immunity.
Injury to the head occasionally results in long-standing dizziness. If the trauma is severe, the dizziness is usually due to combined damage to the inner ear, balance nerve and central nervous system. Lesser injury may damage any one, or a combination, of these components. The unsteadiness is at times prolonged, and may or may not be associated with hearing loss and head noise as well as other symptoms.
A noncancerous tumor occasionally develops on the balance nerve between the ear and the brain. When this occurs, unsteadiness, hearing loss and head noise may develop. Extensive hearing tests, balance tests and x-rays are necessary to diagnose such tumors.
If the diagnosis of a tumor is established, surgical removal is often recommended. Continued growth of the tumor would lead to complications by producing pressure on vital adjacent nerves and the brain. An operation has been developed which allows the removal of these tumors at an early stage. Best results can be obtained if the tumor is diagnosed early and removed while the only symptoms are hearing loss, dizziness and tinnitus (head noise).
Surgical treatment options for dizziness
Surgery is indicated when medical treatment fails to control the vertigo. The type of operation selected depends on the degree of hearing impairment in the affected ear, the life circumstances of the individual, and the status of the individual’s disease. In some operations hearing may occasionally improve following surgery; in others it may become worse. In most cases it remains the same. Head noise may or may not be relieved, and in some cases may become even more marked.
Surgery is most successful in relieving acute attacks of dizziness. Some unsteadiness may persist over a period of several months until the opposite ear and the central nervous system are able to compensate and stabilize the balance system.
Surgical procedures include the use of an endolymphatic shunt, selective vestibular neurectomy and labyrinthectomy. The endolymphatic shunt surgery is intended to drain excess endolymph from the inner ear. It is usually performed under general anesthesia and requires hospitalization for one to two days.
Selective vestibular neurectomy is a surgical option where the balance nerve is cut at the point it leaves the inner ear. This procedure has a high success rate of eliminating the bouts of vertigo and usually preserves hearing. However, imbalance may remain.
Labyrinthectomy is a surgical procedure in which the balance and hearing portions of the inner ear are destroyed. This procedure is considered only for those who have very little hearing remaining in the affected ear. It has a high rate of success, but it destroys any remaining hearing, and imbalance may continue to be a problem for the patient.
Nonsurgical dizziness treatments
Typically, a physical therapist evaluation of patients with vestibular or balance disorders takes approximately 60-90 minutes. The evaluation begins with a history of the patient’s symptoms. This includes how long the patient has been symptomatic, how long the symptoms last, general activity level and medications that the patient is currently taking. Range of motion, strength, coordination, balance and various sensory systems are also assessed. Patients are asked to perform transitional movements such as rolling, supine to sit and sit to stand. This is to determine whether these motions produce or increase symptoms. One of the most difficult things for patients with vestibular disorders to do is walk and move the head. Different combinations of head and neck movements are performed during gait to provoke symptoms. Balance is also tested on a firm surface and again on a compressible surface with eyes open and closed. Timed tests of balance are performed with eyes open and closed, while standing on one foot and with feet aligned as if on a tightrope.
Following the evaluation, a treatment plan is developed. The treatment plan may consist of habituation exercises, balance retraining exercises and usually a general conditioning program. The goal of habituation exercises is to decrease the patient’s symptoms of motion-provoked dizziness or lightheadedness. The exercises are chosen to address the patient’s particular problems that were discovered during the evaluation. The length and intensity of the program depends upon the patient’s previous activity level and how easily their symptoms are provoked. The patient must consistently perform all the exercises as described in their treatment program to achieve the goals of improving their balance and decreasing their dizziness. Typically, the exercises are performed twice a day. Patients are advised not to avoid positions that provoke symptoms unless they are unsafe.
There are many causes of dizziness. This dizziness may or may not be associated with hearing loss. In most instances the distressing symptoms of dizziness can be greatly benefited or eliminated by medical or surgical management. | <urn:uuid:f420aa8e-0438-486e-99c8-2e1d1ea103dc> | CC-MAIN-2021-21 | https://scvadvancedaudiology.com/resources/hearing-and-balance-disorders/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00015.warc.gz | en | 0.935352 | 5,516 | 3.53125 | 4 |
OVERVIEW: What every practitioner needs to know
Are you sure your patient has tuberous sclerosis? What are the typical findings for this disease?
Tuberous sclerosis complex (TSC) is an autosomal dominant genetic disorder that affects multiple organ systems and is primarily characterized by the development of benign neoplasms of the brain, skin and kidneys. The incidence of TSC may be as high as 1 in 5,800. TSC is inherited in 30% of cases and results from a spontaneous mutation in 70%. Two distinct genes cause TSC: TSC1, located on chromosome 9q34, and TSC2, located on chromosome 16p13.3. TSC1 encodes a protein called hamartin, while TSC2 encodes a protein called tuberin. These two proteins interact to form a complex that has been shown to be important in several intracellular signaling pathways. TSC has near 100% penetrance but wide phenotypic variability. Patients with mutations in TSC1 may have a milder phenotype than those with mutations in TSC2.
Clinical Diagnostic Criteria
The diagnosis of TSC is generally made on the basis of clinical criteria. Genetic testing is available, but poor sensitivity limits its clinical utility.
To be diagnosed with definite TSC, a patient must have two major features or one major feature plus two minor features.
To be diagnosed with probable TSC, a patient must have one major feature plus one minor feature.
To be diagnosed with possible TSC, a patient must have one major feature or two or more minor features.
Major features:
Facial angiofibromas or forehead plaque
Nontraumatic ungual or periungual fibromas
Hypomelanotic macules (three or more)
Shagreen patch (connective tissue nevus)
Multiple retinal nodular hamartomas
Cortical tuber (note 1)
Subependymal nodule
Subependymal giant cell astrocytoma
Cardiac rhabdomyoma, single or multiple
Lymphangiomyomatosis (note 2)
Renal angiomyolipoma (note 2)

Minor features:
Multiple randomly distributed pits in dental enamel
Hamartomatous rectal polyps (note 3)
Bone cysts (note 4)
Cerebral white matter radial migration lines (notes 1, 2)
Nonrenal hamartoma (note 3)
Retinal achromic patch
“Confetti” skin lesions
Multiple renal cysts (note 3)

Notes:
1. When cerebral cortical dysplasia and cerebral white matter migration tracts occur together, they should be counted as one rather than two features of tuberous sclerosis.
2. When both lymphangiomyomatosis and renal angiomyolipomas are present, other features of tuberous sclerosis should be present before a definite diagnosis is assigned.
3. Histologic confirmation is suggested.
4. Radiographic confirmation is sufficient.
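The three-tier rule above (definite, probable, possible) reduces to counting major and minor features. As an illustrative sketch only — the function name and labels are our own, and this is not a clinical tool — the logic can be expressed as:

```python
def classify_tsc(n_major: int, n_minor: int) -> str:
    """Apply the clinical criteria as counts of major/minor features.

    Definite: two major, or one major plus two minor.
    Probable: one major plus one minor.
    Possible: one major, or two or more minor.
    """
    if n_major >= 2 or (n_major == 1 and n_minor >= 2):
        return "definite"
    if n_major == 1 and n_minor == 1:
        return "probable"
    if n_major == 1 or n_minor >= 2:
        return "possible"
    return "criteria not met"

# The branch order matters: one major plus two minor must be
# checked before the "possible" branch would catch it.
print(classify_tsc(2, 0))  # definite
print(classify_tsc(1, 1))  # probable
print(classify_tsc(0, 2))  # possible
```

Note that the tiers are checked from most to least specific, so a patient qualifying for a higher tier is never downgraded by a later branch.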
Skin lesions are very common in TSC and are often the presenting manifestation leading to diagnosis.
The most common skin manifestation of TSC is hypomelanotic macules, previously termed ash leaf lesions. These occur in up to 97% of patients. These are often present at birth and often become more numerous with age. They can occur anywhere on the skin but tend to be most prominent on the trunk and buttocks. Hypomelanotic macules are not specific to TSC.
Facial angiofibromas (previously termed adenoma sebaceum) are pink or reddish papular lesions involving the cheeks and naso-labial folds, sparing the upper lip, typically in a malar distribution. They are seen in up to three quarters of patients with TSC. They most commonly appear during preschool age and become more prominent over time.
Forehead fibrous plaques are seen in about 20% of patients with TSC. These are slightly elevated brownish or flesh-colored plaques made up of coalesced nodules.
Shagreen patches are irregular areas of raised, roughened skin often described as having an orange peel-like texture. They are seen in about one half of patients and generally become apparent around puberty. They are most commonly located in the lumbosacral region but can appear elsewhere.
Periungual or subungual fibromas are pink or flesh-colored nodules that grow in the finger or toe nail beds in patients with TSC. They are seen in about 20% of patients with TSC and are more likely to be found in adolescents or adults.
Cardiac rhabdomyomas are the most common cardiac manifestation of TSC. They may be seen in one half to two thirds of newborns with TSC, although most cause no significant medical problems and regress spontaneously with age. When symptomatic, cardiac rhabdomyomas may present with heart failure, arrhythmia or murmurs.
The most common renal complication of TSC is the growth of angiomyolipomas, which occur in up to 80% of TSC patients. These are benign tumors composed of immature smooth muscle cells, fat cells and abnormal blood vessels. They are typically multiple and involve both kidneys at the time of diagnosis. Most remain asymptomatic. When symptomatic, angiomyolipomas typically present with either renal failure or hypertension on the basis of encroachment on normal kidney tissue or hemorrhage due to aneurysm formation. Hemorrhage can be a life-threatening complication and is most commonly seen in angiomyolipomas over 4 cm in diameter. Angiomyolipomas greater than 3 to 4 cm are often treated with embolization.
Renal cysts are also common in TSC, and, like angiomyolipomas, usually affect both kidneys. Renal cysts are more likely to present with hypertension and renal insufficiency or failure and are less likely to present with hemorrhage. A relationship between TSC and renal carcinoma has been postulated but not yet been clearly established.
Lymphangioleiomyomatosis (LAM) is a condition characterized by proliferation of atypical smooth muscle-like cells in the lungs and diffuse, progressive cystic destruction of lung tissue. It typically presents with dyspnea, hemoptysis, chest pain, chylothorax and/or pneumothorax, most commonly in young adult women with TSC. LAM is a chronic, sometimes progressive illness with a 10-year survival rate of about 90%.
Several retinal abnormalities can be seen in patients with TSC; however, they are usually asymptomatic and only rarely cause any functional vision loss. Retinal hamartomas are seen in about half of patients with TSC. Punched out areas of retinal depigmentation may also be seen. Patients with TSC may also develop angiofibromas of the eyelid, strabismus or colobomas.
Central nervous system manifestations
Neurologic complications of TSC are very common and are a prominent source of morbidity and mortality.
Cortical tubers are seen in up to 95% of patients with TSC. They are composed of disorganized neurons and dysmorphic giant astrocytes. The border zone between gray and white matter becomes indistinct, and the normal six-layered lamination pattern of neurons in the cortex is lost. The number of cortical tubers has been shown to be correlated with the severity of seizures and cerebral dysfunction.
Lesions in the white matter are also commonly seen in TSC. These lesions may represent areas of demyelination, dysmyelination, hypomyelination, and/or heterotopic neurons or glia along paths of cortical migration.
Subependymal nodules are seen in the majority of patients with TSC. They are hamartomatous growths composed of dysplastic astrocytes and neuronal cells located along the walls of the lateral ventricles in the subependymal region.
Subependymal giant-cell tumors (SGCTs, previously called subependymal giant cell astrocytomas or SEGAs) are low-grade glioneuronal tumors that are seen in approximately 15% of patients with TSC. They typically originate near the foramen of Monro, and as a result, often cause obstructive hydrocephalus and may require surgical resection or medical therapy with everolimus.
Seizures and epilepsy are extremely common in patients with TSC, with a reported incidence probably over 95%. Seizures most typically begin during infancy or early childhood, and the incidence of new onset decreases with increasing age. Seizures can be generalized or partial in onset. Infantile spasms are seen in approximately one third of patients with TSC.
Cognitive and behavioral manifestations
Behavioral and cognitive impairments are common in TSC: approximately one half of patients show some degree of intellectual impairment, and over 60% are diagnosed with behavioral problems such as autism, pervasive developmental disorder (PDD), obsessive compulsive disorder (OCD), or attention deficit hyperactivity disorder (ADHD). Approximately 70% of patients with TSC fall into a normal IQ distribution with mean scores 12 points below those of unaffected siblings, while about 30% of patients cluster around IQs in the profoundly impaired range (IQs less than 20). Those with normal IQs may have academic problems or learning disabilities.
In patients with TSC, the incidence of autism is probably around 25%, with approximately 40% to 50% meeting diagnostic criteria for autism or PDD. Autism and PDD are more prevalent in patients with TSC and global intellectual impairment than those with normal intelligence. Approximately half of patients with TSC meet diagnostic criteria for ADHD.
What other disease/condition shares some of these symptoms?
Many of the features associated with TSC can be seen in isolation and are not necessarily indicative of a diagnosis of TSC. For instance, hypopigmented macules may be present in as many as 1% of all newborns, and are usually of no clinical significance.
There is some clinical overlap between the renal disease of TSC and polycystic kidney disease (PKD). TSC patients with extensive renal cysts may occasionally be misdiagnosed as having polycystic kidney disease. Additionally, in rare instances, patients will have a mutation that affects both the TSC2 and PKD1 genes, and those patients will manifest features of both TSC and polycystic kidney disease.
What caused this disease to develop at this time?
What laboratory studies should you request to help confirm the diagnosis? How should you interpret the results?
Genetic testing for TSC1 and TSC2 is commercially available; however, sensitivity is limited. Mutations will be detected in only about three quarters of patients who meet clinical criteria for the diagnosis of TSC. As such, negative genetic testing does not rule out TSC. Patients with mutations in the TSC1 gene may have a milder phenotype than those with TSC2 mutations; however, there remains broad phenotypic variability in either instance. As a result, this information is of limited utility in counseling patients and families and does not alter management.
Would imaging studies be helpful? If so, which ones?
An MRI of the brain should be performed with and without gadolinium at the time of diagnosis to evaluate for cortical tubers, subependymal nodules, SGCTs, white matter abnormalities, and hydrocephalus. Subsequently, an MRI should be obtained every 1 to 3 years in children and adolescents with TSC. If a SGCT is identified, consideration should be given to increasing monitoring frequency.
A renal ultrasound should be performed at diagnosis to evaluate for the presence of renal cysts and/or angiomyolipomas. Renal ultrasound should be repeated every 1 to 3 years, with the frequency dependent on the presence or absence of lesions. If large lesions (>3 cm) are detected or there is concern of malignancy, then CT or MRI should be considered.
Cardiac ultrasound to evaluate for rhabdomyomas should be considered for infants with TSC and heart murmur, arrhythmia and/or signs of heart failure.
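The screening studies above can be gathered into one compact summary. This is an illustrative restatement drawn only from the text — the intervals are ranges, not a prescriptive protocol, and the dictionary keys are our own wording:

```python
# Hedged summary of the surveillance imaging described above;
# not a clinical protocol, just a compact restatement of the text.
SURVEILLANCE = {
    "brain MRI (with and without gadolinium)":
        "at diagnosis, then every 1-3 years in children and adolescents; "
        "consider more frequent imaging if an SGCT is identified",
    "renal ultrasound":
        "at diagnosis, then every 1-3 years depending on lesion status; "
        "CT or MRI if lesions exceed 3 cm or malignancy is a concern",
    "cardiac ultrasound (echocardiogram)":
        "infants with a murmur, arrhythmia, or signs of heart failure",
}

for study, schedule in SURVEILLANCE.items():
    print(f"{study}: {schedule}")
```

A simple lookup table like this keeps the study, the trigger, and the interval together, which is how such schedules are usually communicated to families.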
If you are able to confirm that the patient has tuberous sclerosis, what treatment should be initiated?
Due to the complexities of the manifestations of TSC and the need for access to multiple medical disciplines, patients are best served by evaluation and management through a multidisciplinary clinic with familiarity and expertise in the care of patients with TSC.
Coordinated multidisciplinary care should include specialists from genetics, neurology, neurosurgery, radiology, ophthalmology, dermatology, plastic surgery, neuropsychology, and oncology.
In addition to screening imaging, as discussed above, patients with TSC should undergo, at a minimum, annual follow-up with a complete general examination, neurologic examination, skin evaluation, ophthalmologic exam, assessment of growth parameters, and academic and developmental screening.
Specific treatments are dependent on the manifestations present in individual patients.
One of the most challenging aspects of management of TSC patients is treatment of epilepsy. Choice of therapy is dependent on a number of factors including seizure type, severity, age, and EEG findings. Vigabatrin is the treatment of choice for infants with infantile spasms and TSC, though ACTH may be nearly as effective. Patients with TSC may develop partial or generalized epilepsies, and medical therapies are tailored specifically to each patient. Many patients require multi-drug regimens and/or the addition of the ketogenic diet or vagal nerve stimulator. For patients with medically intractable seizures, surgery is increasingly becoming an option, particularly for patients with an identifiable primary epileptogenic focus.
SGCTs causing hydrocephalus, showing growth on serial imaging or causing focal neurologic deficit may be considered for treatment. Previously, surgical resection and radiation were the only options available. Because secondary malignancy has been increasingly recognized as a risk of radiation therapy in TSC, as well as in other tumor predisposition syndromes (such as NF1 and NF2), radiation is now less commonly used. Recently, everolimus, an mTOR inhibitor, was approved as medical therapy for patients with TSC and SGCTs for whom surgical resection is not desired. Other mTOR inhibitors are currently in clinical trials.
Because of the risk of spontaneous hemorrhage, it is recommended that patients with angiomyolipomas over 3 to 4 cm be considered for either transcatheter arterial embolization, or partial or total nephrectomy. Preliminary data suggests that therapy with mTOR inhibitors may have a role in decreasing the size of angiomyolipomas, and trials are in progress.
Hormonal manipulation, bronchodilation therapy, and alpha-interferon are all used for the treatment of LAM with unclear benefit. Recent trials of sirolimus have shown promise, and it is undergoing further evaluation.
No treatment is necessary for asymptomatic cardiac rhabdomyomas, as they will all undergo spontaneous regression. However, in rare instances surgical resection may be required in symptomatic infants.
What are the adverse effects associated with each treatment option?
What are the possible outcomes of tuberous sclerosis?
There is very broad phenotypic variability among patients with TSC. As such, it is difficult to make accurate predictions regarding outcome for any specific individual diagnosed with the disorder. Neurologic disease (particularly SGCTs and status epilepticus) and renal disease (including renal cell carcinoma and hemorrhage into an angiomyolipoma) are the most common causes of premature death in TSC.
What causes this disease and how frequent is it?
How do these pathogens/genes/exposures cause the disease?
The TSC1 gene encodes a protein called hamartin, and the TSC2 gene encodes a protein called tuberin. Hamartin and tuberin interact to form a heterodimer. The functions of the hamartin-tuberin protein complex are not yet fully elucidated, but it appears to integrate multiple cell signaling cues and is a critical negative regulator of the mTOR pathway. It appears to be an important regulator of cell proliferation and cell survival via both mTOR-dependent and mTOR-independent pathways.
Other clinical manifestations that might help with diagnosis and management
What complications might you expect from the disease or treatment of the disease?
Are additional laboratory studies available; even some that are not widely available?
How can this disease be prevented?
What is the evidence?
Au, KS, Williams, AT, Gambello, MJ. “Molecular genetic basis of tuberous sclerosis complex: from bench to bedside”. J Child Neurol. vol. 19. 2004. pp. 699-709. (A review of the molecular and genetic pathophysiologic mechanisms of TSC.)
Krueger, DA, Care, MM, Holland, K. “Everolimus for subependymal giant-cell astrocytomas in tuberous sclerosis”. N Engl J Med. vol. 363. 2010. pp. 1801-11. (An open-label, prospective clinical trial of everolimus in the treatment of TSC-associated SEGAs in 28 patients. Treatment resulted in significant reduction of tumor size and seizure frequency.)
Krueger, DA, Franz, DN. “Current management of tuberous sclerosis complex”. Paediatr Drugs. vol. 10. 2008. pp. 299-313. (A recent review of management strategies for neurologic, behavioral, pulmonary and renal manifestations of TSC.)
Roach, ES, DiMario, FJ, Kandt, RS, Northrup, H. “Tuberous Sclerosis Consensus Conference: recommendations for diagnostic evaluation. National Tuberous Sclerosis Association”. J Child Neurol. vol. 14. 1999. pp. 401-7. (Consensus guidelines regarding the use of diagnostic studies in patients newly diagnosed with TSC, for serial monitoring of patients with an already-established diagnosis, and for family members of affected individuals.)
Yates, JR, Maclean, C, Higgins, JN. “The Tuberous Sclerosis 2000 Study: presentation, initial assessments and implications for diagnosis and management”. Arch Dis Child. vol. 96. 2011. pp. 1020-5. (This paper presents data regarding the presenting clinical features as part of a longitudinal study of 125 children with TSC.)
Ongoing controversies regarding etiology, diagnosis, treatment
Copyright © 2017, 2013 Decision Support in Medicine, LLC. All rights reserved.
In the previous posts of this series
Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – I
Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – II
Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – III
we studied network namespaces and related commands. We also started a series of experiments to deepen our understanding of virtual networking between network namespaces. For practical purposes you can imagine that our abstract network namespaces represent LXC containers in the test networks.
In the last post we have learned how to connect two network namespaces via veth devices and a Linux bridge in a third namespace. In coming experiments we will get more ambitious - and put our namespaces (or containers) into virtual VLANs. "V" in "VLAN" stands for "virtual". So, what are virtual VLANs? These are VLANs in a virtual network environment!
We shall create and define these VLANs essentially by configuring properties of Linux bridges. The topic of this article is an introduction into some elementary rules governing virtual VLAN setups based on virtual Linux bridges and veth devices.
I hope such a rule overview is useful as there are few articles on the Internet summarizing what happens at ports of virtual Linux bridges with respect to VLAN tagging of Ethernet packets. Actually, I found some of the rules by doing experiments with bridges for kernel 4.4. I was too lazy to study source codes. So, please, correct me and write me a mail if I made mistakes.
VLANs define specific and very often isolated paths through a network for Ethernet packets. At some "junctions and crossings" only certain OUT paths are open for arriving packets, depending on how a packet is marked. Junctions and crossings are represented in a network by devices such as real or virtual bridges. We can say: Ethernet packets are only allowed to move along certain IN/OUT directions in VLAN-sensitive network devices. All decisions are made on the link layer level. IP addresses may influence entries into VLANs at routers - but once inside a VLAN, criteria like the "tag" of a packet and certain settings of connection ports open or close paths through the network:
VLANs are based on VLAN IDs (integer numbers) and a corresponding tagging of Ethernet packets - and on analyzing these tags at certain devices/interfaces or ports. In real and virtual Ethernet cards so called sub-interfaces associated with VLAN IDs typically send/receive tagged packets into/from VLANs. In (virtual) bridges ports can be associated with VLAN IDs and open only for packets with matching "tags". A VLAN ID assigned to a port is called a "VID". An Ethernet packet normally has one VLAN tag, identifying to which VLAN it belongs to. Such a tag can be set, changed or removed in certain VLAN aware devices.
A packet routed into a sub-interface gets a VLAN tag with the VLAN ID of the sub-interface. The rules at bridge ports are more complicated and device and/or vendor dependent. I list rules for Linux bridge ports in a paragraph below.
VLANs can be used to isolate network communication paths and circuits between systems against each other. An important property is: Broadcast packets (e.g. required for ARP) are not allowed to cross the borders of VLANs. Thus the overall traffic can be reduced significantly in some network setups. VLANs can be set up in virtual networks on virtualization hosts, too; this is of major importance for the hosting of containers.
Whenever we use the word "trunk" in connection with VLANs we mean that an interface, port or a limited connection line behaves neutrally with respect to multiple VLAN IDs and allows the transport of packets from different VLANs to some neighbor device - which then may differentiate again.
On a Linux system the kernel module "8021q" must be loaded to work with tagged packets. On some Linux distributions you may have to install additional packages to deal with VLANs and 802.1Q tags.
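A quick way to check this - a minimal sketch, assuming a shell with root privileges on a typical distribution:

```shell
# Load the 802.1Q VLAN tagging module and verify it is present
modprobe 8021q
lsmod | grep 8021q
```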
Veth devices support VLANs
As with real Ethernet cards we can define VLAN related sub-interfaces of one or of both Ethernet interfaces of a veth device pair. E.g., an interface vethx of a device pair may have two sub-interfaces, "vethx.10" and "vethx.20". The numbers represent different VLAN IDs. (Actually the sub-interface can have any name; but it is a reasonable convention to use the ".ID" notation.)
As a veth interface may or may not be splitted into a "mother" (trunk) interface and multiple sub-interfaces the following questions arise:
- If we first define sub-interfaces for VLANs on one interface of a veth device, must we use sub-interfaces on the other veth side, too?
- What about situations with sub-interfaces on one side of the veth device and a standard interface on the other?
- Which type of interface can or should we connect to a virtual Linux bridge? If we can connect either: What are the resulting differences?
Connection of veth interfaces to Linux bridges
Actually, we have two possibilities when plugging veth interfaces into Linux bridges:
- We can attach the sub-interfaces of a veth interface to a Linux bridge and create several respective ports, each of which receives tagged packets from the outside and emits tagged packets to the outside.
- Or we can attach the neutral (unsplitted) "trunk" interface at one side of a veth device to a Linux bridge and create a respective port, which may transfer tagged and untagged packets into and out of the bridge. This is even possible if the other interface of the veth device has defined sub-interfaces.
In both cases bridge specific VLAN settings for the bridge ports may have different impacts on the tagging of forwarded IN or OUT packets. We come back to this point in a minute. The following drawing illustrates some principles:
We have symbolized packets by diamonds. Different colors correspond to different tag numbers (VLAN IDs).
The virtual cable of a veth device can transport Ethernet packets with different VLAN tags. However, packet processing at certain targets like a network namespace or a bridge requires a termination with a suitable Ethernet device, i.e. an interface which can handle the specific tag of a packet. This termination device is:
- either a veth sub-interface located in a specific network namespace
- or a veth sub-interface inside a bridge (=> this creates a bridge port, which requires at least a matching VID)
- or a veth trunk interface inside a Linux bridge (=> this creates a trunk bridge port, which may or may not require VIDs, but no PVID.)
Both variants can also be combined as shown in the lower part of the drawing: One interface ends in a bridge in one namespace, whereas the other interface is located in another namespace and splits up into sub-interfaces for different VLAN IDs.
Untagged packets may be handled by the standard trunk interfaces of a veth device.
Note: In the sketch below the blue packet "x" would never be available in the target namespace for further processing on higher network layers.
So, do not forget to terminate a trunk line with all required sub-interfaces in network namespaces!
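As an illustration, the following sketch terminates a trunk line inside a target namespace. All names (namespace "netns1", bridge "brx", veth pair vethA/vethB) and the VLAN IDs 10 and 20 are assumptions for this example, not taken from the experiments above; the commands require root privileges:

```shell
# Create the veth pair and move one end into the target namespace
ip link add vethA type veth peer name vethB
ip link set vethB netns netns1

# Plug the trunk end into the bridge - it will carry tagged packets
ip link set vethA master brx
ip link set vethA up

# Terminate the trunk inside the namespace with one sub-interface per VLAN
ip netns exec netns1 ip link add link vethB name vethB.10 type vlan id 10
ip netns exec netns1 ip link add link vethB name vethB.20 type vlan id 20
ip netns exec netns1 ip link set vethB up
ip netns exec netns1 ip link set vethB.10 up
ip netns exec netns1 ip link set vethB.20 up
```

Without the two sub-interfaces, tagged packets arriving over vethB would be dropped on the link layer and never reach higher protocol layers in the namespace.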
A reasonably working setup of course requires measures and adequate settings on the bridge's side, too. This is especially important for trunk interfaces at a bridge and trunk connection lines used to transport packets of various VLANs over a limited connection path to an external device. We come back to relevant rules for tagging and filtering inside the bridge later on.
Below we call a veth interface port of a bridge, which is based on the standard trunk interface, a trunk port.
The importance of route definitions in network namespaces
Inside network namespaces where multiple VLANs terminate, we need properly defined routes for outgoing packets:
Situations where it is unclear through which sub-interface a packet shall be transported to certain target IP addresses must always be avoided! A packet to a certain destination must be routed into an appropriate VLAN sub-interface! Note that defining such routes is not equivalent to enabling routing in the sense of IP forwarding!
Forgetting routes in network namespaces with devices for different VLANs is a classical cause of defunct virtual network connections!
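A minimal sketch of such route definitions - the namespace, interface names and subnets (192.168.10.0/24 and 192.168.20.0/24) are assumptions for illustration and require root privileges:

```shell
# Assign one address per VLAN sub-interface; the kernel automatically
# creates a matching "scope link" route for each local subnet
ip netns exec netns1 ip addr add 192.168.10.5/24 dev vethB.10
ip netns exec netns1 ip addr add 192.168.20.5/24 dev vethB.20

# For destinations beyond the local subnets, route explicitly into the
# sub-interface of the correct VLAN
ip netns exec netns1 ip route add 192.168.30.0/24 via 192.168.10.1 dev vethB.10

# Check: every relevant destination should appear with the right device
ip netns exec netns1 ip route show
```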
Commands to set up veth sub-interfaces for VLANs
Commands to define sub-interfaces of a veth interface and to associate a VLAN ID with each interface typically have the form:
ip link add link vethx name vethx.10 type vlan id 10
ip link add link vethx name vethx.20 type vlan id 20
ip link set vethx up
ip link set vethx.10 up
ip link set vethx.20 up
Sub-interfaces must be set into an active UP status! Inside and outside of bridges.
Setup of VLANs via a Linux bridge
Some years ago one could read articles and forum posts on the Internet in which the authors expressed their opinion that VLANs and bridging are different technologies which should be separated. I take a different point of view:
We regard a virtual bridge not as some additional tool which we somehow plant into an already existing VLAN landscape. Instead, we set up VLANs by configuring the virtual bridge.
A Linux bridge today can establish a common "heart" of multiple virtual VLANs - with closing and opening "valves" to separate the traffic of different circulation paths. From a bridge/switch that defines a VLAN we expect
- the ability to assign VLAN tags to Ethernet packets
- and the ability to filter packets at certain ports according to the packets' VLAN tags and defined port/tag relations.
- and the ability to emit untagged packets at certain ports.
In many cases, when a bridge is at the core of simple separated VLANs, we do not need to tag outgoing packets to clients or network namespaces at all. All junction settings for the packets' paths are defined inside the bridge!
Tagging, PVIDs and VIDs - VLAN rules at Linux bridge ports
What happens at a bridge port with respect to VLANs and packet tags? Almost the same as for real switches. An important point is:
We must distinguish the treatment of incoming packets from the handling of outgoing packets at one and the same port.
As far as I understand the present working of virtual Linux bridges, the relevant rules for tagging and filtering at bridge ports are the following:
- The bridge receives incoming packets at a port and identifies the address information for the packet's destination (IP => MAC of a target). The bridge then forwards the packet to a suitable port (target port; or sometimes to all ports) for further transport to the destination.
- The bridge learns about the right target ports for certain destinations (having an IP- and a MAC-address) by analyzing the entry of ARP protocol packets (answer packets) into the bridge at certain ports.
- For setting up VLANs based on a Linux bridge we must explicitly activate "VLAN filtering" on the bridge (commands are given below).
- We can assign one or more VIDs to a bridge port. A VID (VLAN ID) is an integer number; the default value is 1. At a port with one or more VIDs both incoming tagged packets from the bridge's outside and outgoing tagged packets forwarded from the bridge's inside are filtered with respect to their tag number and the port VID(s): only if the packet's tag number is equal to one of the VIDs of the port is the packet allowed to pass.
- Among the VIDs of a port we can choose exactly one to be a so called PVID (Port VLAN ID). The PVID number is used to tag (untagged) incoming packets. The new tag is then used for filtering inside the bridge at target ports. A port with a PVID is also called "access port".
- Handling of incoming tagged packets at a port based on a veth sub-interface:
If you attached a sub-interface (for a defined VLAN ID number) to a bridge and assigned a PVID to the resulting port then the tag of the incoming packets is removed and replaced by the PVID before forwarding happens inside the bridge.
- Incoming packets at a standard trunk veth interface inside a bridge:
If you attached a standard (trunk) veth interface to a bridge (trunk interface => trunk port) and packets with different VLAN tags enter the bridge through this port, then only incoming packets with a tag fitting one of the port's VIDs enter the bridge and are forwarded and later filtered again.
- Untagged outgoing packets: Outgoing packets get their tag number removed if we configure the bridge port accordingly: we must mark its egress behavior with a flag "untagged" (via a command option; see below). If the standard veth (trunk) interface constitutes the port, the packet leaves the bridge untagged.
- Retagging of outgoing untagged packets at ports based on veth sub-interfaces:
If a sub-interface of a veth interface constitutes the port, an outgoing packet gets tagged with VLAN ID associated with the sub-interface - even if we marked the port with the "untagged" flag.
- Carry tags from the inside of a bridge to its outside:
Alternatively, we can configure ports for outgoing packets such that the packet's tag, which the packet had inside the bridge, is left unchanged. The port must be configured with a flag "tagged" to achieve this. An outgoing packet leaves a trunk port with the tag it got/had inside the bridge. However, if a veth sub-interface constituted the port the tag of the outgoing packet must match the subinterface's VLAN ID to get transported at all.
- A port with multiple assigned VIDs and the flag "tagged" is called a "trunk" port. Packets with different tags can be carried along the outgoing virtual cable line. In case of a veth device interface the standard (trunk) interface and not a sub-interface must constitute such a port.
Note that point 2 opens the door for attacking a bridge by flooding it with wrong IP/MAC information (ARP spoofing). Really separated VLANs make such attacks more difficult, if not impossible. But often you have hosts or namespaces which are part of two or more VLANs, or you may have routers somewhere which do not filter packet transport sufficiently. Then spoofing attack vectors are possible again - and you need packet filters/firewalls with appropriate rules to prevent such attacks.
Note rule 6 and the stripping of previous tags of incoming packets at a PVID access port based on a veth sub-interface! Some older bridge versions did not work like this. In my opinion this is, however, a very reasonable feature of a virtual bridge/switch:
Stripping tags of packets entering at ports based on veth sub-interfaces allows the bridge to overwrite any external and maybe forged tags. This helps to keep up the integrity of VLAN definitions just by internal bridge settings!
The last three points of our rule list are of major importance if you need to distinguish packets in terms of VLAN IDs outside the bridge! The rules mean that you can achieve a separation of the bridge's outgoing traffic according to VLAN IDs with two different methods :
- Trunk interface connection to the bridge and sub-interfaces at the other side of an veth cable.
- Ports based on veth sub-interfaces at the bridge and veth sub-interfaces at the other side of the cable, too.
We discuss these alternatives in some of our next experiments in more detail.
Illustration of packet transport and filtering
The following graphic illustrates packet transport and filtering inside a virtual Linux bridge with a few examples. Packets are symbolized by diamonds. VLAN tags are expressed by colors, PVIDs and VIDs at a port (see below) by dotted squares and normal squares, respectively. The blue circles have no special meaning; here some paths just cross.
The main purpose of this drawing is to visualize our bunch of rules at configured ports and not so much reasonable VLANs; the coming blog posts will discuss simple multiple examples of separated and also coupled VLANs. In the drawing only the left side displays two really separated VLANs. Ports C and D, however, illustrate special rules for specially configured ports. Note that not all possible port configurations are covered by the graphics.
With the rules above you can now follow the paths of different packets through the drawing. This is simple for packet "5". It gets a pink tag at its entry through the lowest port "D". Its target port is the port "C" where it passes due to the fact that the VID is matching. Packet "2" follows an analogous story.
All ports on the left (A, B, C, D) have gotten the flag "untagged". Packets 5 and 2,6,7, therefore, leave the bridge untagged. Note that no pink packets are allowed to leave ports A, B and D. Vice versa, no green packets are allowed to leave target ports C and D.
Port "E" would be a typical example for a trunk port. Incoming and outgoing green, pink and blue packets keep their tags! Packet 8 and packet 9, which both are forwarded to their target port "E", therefore, move out with their respective green and pink tags. The incoming green packet "7" is allowed to pass due to the green VID at this port.
Port "D", however, is a strange guy: Here, the PVID (blue) differs from the only VID (green)! Packet "6" can enter the bridge and leave it via target port "B", which has two VIDs. Note, however, that there is no way back! And the blue packet "3" entering the bridge via trunk port "E" for target port "D" is not allowed to leave the bridge there. Shit happens ...
The example of port "D" illustrates that VLAN settings can look different for outgoing and incoming packets at one and the same port.
But also ports like "D" can be used for reasonable configurations - if applied in a certain way (see coming blog posts).
Commands to set up the VLANs via port configuration of virtual Linux bridges
We first need to make the bridge "VLAN aware". This is done by explicitly activating VLAN filtering. On a normal system (in the root namespaces) and for a bridge "brx" we could enter
echo 1 > /sys/class/net/brx/bridge/vlan_filtering
But in artificially constructed network namespaces we will not find such a file. Therefore, we have to use a variant of the "ip" command:
ip link set brx type bridge vlan_filtering 1
For adding/removing a VID or PVID to/from a bridge port - more precisely a device interface for which the bridge is a master - we use the "bridge vlan" command. E.g., in the network namespace where the bridge is defined:
bridge vlan add vid 10 pvid untagged dev veth53
bridge vlan add vid 20 untagged dev veth53
bridge vlan del vid 20 dev veth53
See the man page for more details!
Note: We can only choose exactly one VID to be a PVID. As already explained above, the "untagged" option means that we want outgoing packets to leave the port untagged (on egress).
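Putting these commands together, a small worked example may help. Here the bridge "brx" has three attached veth interfaces - veth11 and veth21 as access ports for VLANs 10 and 20, and veth31 as a trunk port. All interface names are assumptions for this sketch; root privileges are required:

```shell
# Make the bridge VLAN aware
ip link set brx type bridge vlan_filtering 1

# Access ports: untagged traffic is tagged with the PVID on ingress and
# leaves untagged on egress
bridge vlan add vid 10 pvid untagged dev veth11
bridge vlan add vid 20 pvid untagged dev veth21

# Trunk port: packets of both VLANs may pass and keep their tags
bridge vlan add vid 10 dev veth31
bridge vlan add vid 20 dev veth31

# Remove the default VID 1 so only the configured VLANs remain
bridge vlan del vid 1 dev veth11
bridge vlan del vid 1 dev veth21
bridge vlan del vid 1 dev veth31

# Inspect the resulting port/VLAN table
bridge vlan show
```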
Data transfer between VLANs?
Sometimes you may need to allow for certain clients in one VLAN (with ID x) to access specific services of a server in another VLAN (with ID y). Note that for network traffic to cross VLAN borders you must use routing in the sense of IP forwarding, e.g. in a special network namespace that has connections to both VLANs. In addition you must apply firewall rules to limit the packet exchange to exactly the services you want to allow and eliminate general traffic.
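A sketch of such a controlled crossing - a router namespace "nsrouter" with one sub-interface per VLAN. All names, addresses and the restriction to HTTPS (TCP port 443) are assumptions for illustration; root privileges are required:

```shell
# nsrouter is assumed to hold vethR.10 (192.168.10.1, VLAN 10) and
# vethR.20 (192.168.20.1, VLAN 20)

# Enable IP forwarding between the two VLAN interfaces
ip netns exec nsrouter sysctl -w net.ipv4.ip_forward=1

# Default policy: forward nothing between the VLANs
ip netns exec nsrouter iptables -P FORWARD DROP

# Allow only VLAN-10 clients to reach one HTTPS server in VLAN 20 ...
ip netns exec nsrouter iptables -A FORWARD -i vethR.10 -o vethR.20 \
    -d 192.168.20.10 -p tcp --dport 443 -j ACCEPT
# ... and the matching answer packets of established connections
ip netns exec nsrouter iptables -A FORWARD -i vethR.20 -o vethR.10 \
    -m state --state ESTABLISHED,RELATED -j ACCEPT
```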
There is one noteworthy and interesting exception:
With the rules above and a suitable PVID, VID setting you can isolate and control traffic by a VLAN from a sender in the direction of certain receivers, but you can allow answering packets to reach several VLANs if the answering sender (i.e. the former receiver) has connections to multiple VLANs - e.g. via a line which transports untagged packets (see below). Again: VLAN regulations can be different for outgoing and incoming packets at a port!
An example is illustrated below:
Intentionally or by accident - the bridge will do what you ask her to do at a port in IN and OUT directions. A setup as in the graphic breaks isolation, of course! So, regarding security this may be harmful. On the other side it allows for some interesting possibilities with respect to broadcast messages - as with ARP. We shall explore this in some of the coming posts.
Note that we always can involve firewall rules to allow or disallow packet travel across a certain OUT port according to the IP destination addresses expected behind a port!
The importance of a working ARP communication
Broadcast packets are not allowed to leave a VLAN if no router bridges the VLANs. The ARP protocol requires that broadcast messages from a sender, who wants to know the MAC address of an IP destination, reach their target. But your VID and PVID settings must also allow the returning answer to reach the original sender of the broadcast. Among other things this requires special settings at trunk ports which send untagged packets from different VLANs to a target and receive untagged packets from this target. Without a working ARP communication between a member of a VLAN and the other members, unmanipulated traffic on higher network protocol layers will fail!
Veth devices and virtual Linux bridges support VLANs, VLAN IDs and a tagging of Ethernet packets. Tagging at pure veth interfaces outside a bridge requires the definition of sub-interfaces with associated VLAN IDs. The cable between a veth interface pair can be seen as a trunk cable; it can transport packets with different VLAN tags.
A virtual Linux bridge can become master of standard interfaces and/or sub-interfaces of veth devices - resulting in different port rules with respect to VLAN tagging. Similar to real switches we can assign VIDs and PVIDs to the ports. VIDs allow for filtering and thus VIDs are essential for VLAN definitions via a bridge. PVIDs allow for a tagging of incoming untagged packets or a retagging of packets entering through a port based on sub-interfaces. We can also define whether packets shall leave a port outwards of the bridge untagged or tagged.
Separated VLANs can, therefore, be set up with pure settings for ports inside a bridge without necessarily requiring any package tagging outside.
We now have a toolset for building reasonable VLANs with the help of one or more virtual bridges. In the next blog post
we shall apply what we have learned for the setup of two separated VLANs in our experimental network namespace environment.
Waraqah before Al-Ba'thah
Waraqah ibn Nawfal was one of the four men of Quraysh who disliked polytheism, finding that the plurality of society's idols made little sense. The other three were Zayd ibn Nufayl, Abdullah ibn Jahsh and Uthman ibn al-Huwayrith.
They simplified this system to comprise just one idol and set a day on which they would annually sacrifice an animal in its name.
The night when the Prophet (saw) was born, they found their idol had fallen; they repositioned it three times and each time it would fall. They soon realised it could not help itself, let alone its worshippers. (Ibn Kathir, al-Bidayah wa al-Nihayah, Vol. 2, p. 340; al-Halabi, al-Seerah al-Halabiyah, Vol. 1, p. 116)
Waraqah used to tell his companions:
تَعْلَمُونَ - وَاللَّهِ - مَا قَوْمُكُمْ عَلَى دِينٍ، وَلَقَدْ أَخْطَئُوا الْحُجَّةَ، وَتَرَكُوا دِينَ … يَا قَوْمِ، الْتَمِسُوا لِأَنْفُسِكُمُ الدِّينَ
"You know, by Allah, your people don't have a correct deen; they depend on false argument and left the deen of Ibrahim (as).
O people, search for yourselves the (right) deen." (Ibn Kathir, al-Bidayah, Vol. 2, p. 341, ibn Hisham, al-Seerah al-Nabawiyyah, Vol. 1, p. 242, al-Baghdadi, al-Munamaq, pp. 175-176)
These four truth-seekers began searching for authentic divine revelation. They met a Jewish group who influenced Waraqah to adopt their religion, but the others remained unpersuaded. They continued searching, moving from one religious community to another. A Christian community persuaded Waraqah to convert to Trinitarianism as preached by Paul; however, the others again remained unpersuaded, Zayd arguing it was little different from Quraysh's polytheism:
مَا هَذَا إلّا كَدِينِ قَوْمِنَا نُشْرَكُ وَيَشْرَكُونَ
"This is nothing but our people's deen i.e. we associate (gods with Allah) and they associate." (Qurtubi, al-Isti'aab, Vol. 2, p. 616, al-Halabi, al- Seerah al-Halabiyah, Vol. 1, p. 116)
They encountered some monotheistic Christians who seemed authentic in following the Prophet Jesus (as). Waraqah finally converted whilst the others preferred to adhere to the religion of Ibrahim (as) until the ba'thah of the Prophet (saw). Ibn Hajar narrated the last journey:
وَكَانَ لَقِيَ مَنْ بَقِيَ مِنَ الرُّهْبَانِ عَلَى دِينِ عِيسَى وَلَمْ يُبَدِّلْ
"He met what has been left of the monks who were on the (original) deen of Isa (Jesus) without change." (Ibn Hajar, Fath al-Bari, Vol. 1, p. 25)
It seems that Waraqah was adopting the most probable religion out there until he discovered the correct deen of Allah. It explains why he adopted whatever goodness he found in Judaism, then moving to Trinitarianism which he saw as abrogating Judaism and finally a lesser corrupted version of Christianity. What supports this understanding is the fact that he continued the search with his friends, refusing to remain with the group who converted him until the last group of monotheistic Christians.
Monotheistic or Polytheistic Christianity:
Some historians, such as al-Suhayli, argue Waraqah followed a Trinitarian sect of Christianity that believed Jesus (as) was a God with God as a father. (Suhayli, al-Rawd al-Aneef, Vol. 1, p. 273) But this is improbable, as we know from Waraqah's poems that he was purely monotheistic; he used to say:
إِنِّي نَصَحْتُ لأَقْوَامٍ وَقُلْتُ لَهُمْ …لا تَعْبُدُونَ إِلَهًا غَيْرَ خَالِقِكُمْ وَإِنْ سُئِلْتُمْ فَقُولُوا مَا لَهُ أَحَدُ
"I advised many people and told them … don't worship any god except your Creator and when others ask you, tell them He has no others (beside him)." (Ibn al-Jawzi, Mutheer al-Gharam, p. 184)
The above verses clearly nullify the doctrine of the Trinity. In addition, there are many indications in his statements with Khadijah (ra) after the ba'thah indicating he did not accept mainstream Christianity. For example, when coming to know of the Prophet's revelation, he commented:
إِنَّهُ لَيَأْتِيهِ نَامُوسُ عِيسَى الَّذِي لَا يُعَلِّمُهُ بَنُو إِسْرَائِيلَ أَبْنَاءَهُمْ
"He experiences the Namus (revelation) that Banu Isra'il don't teach to their children." (Ibn Hajar, Fath al-Bari, Vol. 1, p. 26, Salihi, Subul al-Huda, Vol. 2, p. 242)
So, he acknowledged it as a revelation and argued it to be different from what Christians and Jews preached. He was similar to many Christians who rejected the divinity of Jesus (as), awaiting for the appearance of Prophet Muhammad (saw), including those who accompanied Salman al-Farisi and others.
For that reason, almost all classical scholars rejected Suhayli's opinion. Ibn Hajar narrated the classical scholarship's stance:
وَأَمَّا مَا تَمَحَّلَ لَهُ السُّهَيْلِيُّ مِنْ أَنَّ وَرَقَةَ كَانَ عَلَى اعْتِقَادِ النَّصَارَى فِي عَدَمِ نُبُوَّةِ عِيسَى وَدَعْوَاهُمْ أَنَّهُ أَحَدُ الْأَقَانِيمِ , فَهُوَ مُحَالٌ , لَا يُعَرَّجُ عَلَيْهِ فِي حَقِّ وَرَقَةَ وَأَشْبَاهِهِ مِمَّنْ لَمْ يَدْخُلْ فِي التَّبْدِيلِ , وَلَمْ يَأخُذْ عَمَّنْ بَدَّلَ
"But what Suhayli individually said of Waraqah's belief, in the Christian creed of non-Prophethood of Jesus and their notion Jesus was one of the persons (of Allah), is impossible.
It is not accepted regarding Waraqah and his like who did not embrace the corrupted (beliefs) and was not taught by those who corrupted." (Ibn Hajar, Fatha al-Bari, Vol. 1, p. 26, Suhayb Abd al-Jabar, al-Jami al-Sahih li al-Sunan wa al-Masaneed, Vol. 14, p. 263)
So, Waraqah was a monotheistic Christian prior to the ba'thah.
Waraqah after Al-Ba'thah
Waraqah kept searching for the true religion for 40 years, commencing with the birth of the Prophet (saw), passing away with the beginning of revelation.
When the Prophet (saw) received revelation, he told his wife Khadijah (ra) who promptly consulted Waraqah, given his experiences and expertise. After questioning him, Waraqah told him about Prophethood.
Classical scholars disagree on whether he embraced Islam or not; those who are he did, disagreed on whether he was a companion or not.
The first group of scholars: such as the fifth-century hadith scholar ibn Mandah, argue he was not a Muslim and he passed away before the Prophet called for Islam. So, according to them, it is nonsense to argue he was a Muslim as there was no Islam yet and he did not testify. (Ibn Asakir, Tarikh Dimashq, Vol. 4, p. 63) They also argue, Waraqah told the Prophet (saw):
وَإِنْ يُدْرِكْنِي يَوْمُكَ أَنْصُرْكَ نَصْرًا مُؤَزَّرًا
"And if I should remain alive till the day when you will be turned out then I would support you strongly." (Sahih al-Bukhari 3)
But he was not alive days later. (Abd al-Raziq Afifi, al-Fatawa, p. 313)
This is an improbable understanding of the hadith as Waraqah hoped to be alive when the Prophet's (saw) people expelled him from the city so he could support him.
يَا لَيْتَنِي فِيهَا جَذَعًا، لَيْتَنِي أَكُونُ حَيًّا إِذْ يُخْرِجُكَ قَوْمُكَ.
فَقَالَ رَسُولُ اللَّهِ صلى الله عليه وسلم " أَوَمُخْرِجِيَّ هُمْ ". قَالَ نَعَمْ، لَمْ يَأْتِ رَجُلٌ قَطُّ بِمِثْلِ مَا جِئْتَ بِهِ إِلاَّ عُودِيَ، وَإِنْ يُدْرِكْنِي يَوْمُكَ أَنْصُرْكَ نَصْرًا مُؤَزَّرًا
"I wish I were young and could live up to the time when your people would turn you out."
Allah's Messenger (saw) asked, "Will they drive me out?" Waraqah replied in the affirmative and said, "Anyone who came with something similar to what you have brought was treated with hostility and if I should remain alive till the day when you will be turned out then I would support you strongly." (Sahih al-Bukhari 3)
Another narration supports this:
لَئِنْ أَمَرَّتْ بِالْقِتَالِ، لِأُقَاتِلَنَّ مَعَكَ وَلََأَنْصُرَنَّكَ نَصْرًا مُؤَبَّدًا
"If you are ordered to fight, I would fight with you and would support you strongly forever." (Tistiri, Qamus al-Rijal, Vol. 10, p. 435, Baladhri, Ansab al-Ashraf, p. 105)
So, the hadith speaks not of reaching the revelation, as he already did, but being a strong man when they harm the Prophet (saw) or in the time of prescribed war to support him.
The second group of scholars: the majority argue he believed the Prophet (saw) with many evidences supporting this. For example, Waraqah was a Christian and the Prophet (saw) said:
وَالَّذِي نَفْسُ مُحَمَّدٍ بِيَدِهِ لاَ يَسْمَعُ بِي أَحَدٌ مِنْ هَذِهِ الأُمَّةِ يَهُودِيٌّ وَلاَ نَصْرَانِيٌّ ثُمَّ يَمُوتُ وَلَمْ يُؤْمِنْ بِالَّذِي أُرْسِلْتُ بِهِ إِلاَّ كَانَ مِنْ أَصْحَابِ النَّارِ
"By Him in Whose hand is the life of Muhammad, he who amongst the community of Jews or Christians hears about me, but does not affirm his belief in that with which I have been sent and dies in this state (of disbelief), he shall be but one of the denizens of Hell-Fire." (Sahih Muslim 153)
So, given Waraqah heard of the Prophet's revelation, he should be in in heaven as he believed him.
Furthermore, another hadith states the Prophet (saw) saw him in jannah, which taken with the above hadith, suggests he believed him. The Prophet (saw) said about Waraqah:
رَأَيْتُ لَهُ جَنَّةً أَوْ جَنَّتَيْنِ
"I saw a janah (paradise) or two paradises for him." (al-Hakim 4211, Daraqutni argued it is mursal in al-Ilal, Vol. 14, p. 157)
The Prophet (saw) was asked about Waraqah, he attributed him to heavens and rejected being in hellfire. A'isha (ra) narrated:
سُئِلَ رَسُولُ اللَّهِ صلى الله عليه وسلم عَنْ وَرَقَةَ فَقَالَتْ لَهُ خَدِيجَةُ إِنَّهُ كَانَ صَدَّقَكَ وَلَكِنَّهُ مَاتَ قَبْلَ أَنْ تَظْهَرَ . فَقَالَ رَسُولُ اللَّهِ صلى الله عليه وسلم " أُرِيتُهُ فِي الْمَنَامِ وَعَلَيْهِ ثِيَابٌ بَيَاضٌ وَلَوْ كَانَ مِنْ أَهْلِ النَّارِ لَكَانَ عَلَيْهِ لِبَاسٌ غَيْرُ ذَلِكَ
"The Messenger of Allah (saw) was asked about Waraqah. Khadijah said to him: 'He believed in you, but he died before your advent.' So the Messenger of Allah (saw) said: 'I saw him in a dream, and upon him were white garments.If he were among the inhabitants of the Fire, then he would have been wearing other than that.'" (Tirmidhi, 2457, Musanaf Abu Shaybah, Bayhaqi, Dala'il al-Nubuwah, Vol. 2, p. 158)
In addition, the above narration of Bukhari indicates he believed the Prophet (saw) as even Khadijah (ra) accepted Waraqah's observations and embraced Islam. So, he must have truly testified Muhammad (saw) was a prophet. Furthermore, there are narrations of his testification:
أَنَا أَشْهَدُ أَنَّكَ أَنْتَ أَحْمَدُ وَأَنَا أَشْهَدُ أَنَّكَ مُحَمَّدٌ وَأَنَا أَشْهَدُ أَنَّكَ رَسُولُ اللَّهِ
"I testify you are Ahmed (prophesied by Jesus) and testify you are Muhammad and testify you are the Messenger of Allah." (Tistiri, Qamus al-Rijal, Vol. 10, p. 435, Baladhri, Ansab al-Ashraf, p. 105)
The Prophet (saw) also said Waraqah believed in him and so he is in heaven:
لَقَدْ رَأَيْتُ الْقَسَّ فِي الْجَنَّةِ، عَلَيْهِ ثِيَابُ الْحَرِيرِ لِأَنَّهُ آمَنَ بِي وَصَدَّقَنِي يَعْنِي وَرَقَةَ.
"I saw the Qas (knowledgeable or priest) in jannah with garments of silk because he believed in me and attested my truthfulness i.e., Waraqah" (Qurtubi, al-Jami Li-Ahkam al-Qur'an, vol. 1, p. 82, ibn Ishaq, al-Siyar wa al-Maghazi, p. 177, ibn Asakir, Tarikh Dimashq, Vol. 34. P. 404)
In the above narration, the word (قَسْ-qas) not (qis) as former means the knowledgeable or priest but does not necessarily mean an ecclesiastical or churchly spot, but a label they used to give it to him as he had knowledge of inter-religious issues and there was no Christian communities or churches in Mecca; the latter refers to the religious spot. All narrations and statements of scholars say (وكان يُدعى: القَس) he was called: the knowledgeable or priest because the term is linguistically used to refer to the state of leaning and intelligence – as in all the lexicons such as Lisan al-Arab that state العُقَلاء- الحُذّاق the reasonable and intelligent people – as well as several other meanings but the conclusive religious label is qisees (قِسْيِسْ) Christian priest.
Whilst the hadith is mursal and gharib hadith, its meaning is supported by the other ahadith.
So, it is reasonable to conclude he believed and testified in the Prophet (saw).
Was Waraqah a Companion?
Whilst the majority of classical scholars confirm he believed in the Prophet (saw), they disagree whether he was a companion or only a believer of his message before its initiation.
There are a great number of classical scholars, such as ibn Hajar, Dhahabi, Karamani, ibn al-Jawzi, ibn Asakir and others, who argue he was not a companion but is a believer as the message had not yet began. (Karamani, Umdat al-Qari, Vol. 1, p. 168) As Dhahabi explains:
وَإِنَّمَا مَاتَ الرَّجُلُ فِي فَتْرَةِ الْوَحْي بَعْدَ النُّبُوَّةِ وَقَبْلَ الرِّسَالَةِ
"The man died in the period of revelation intermittence after Prophethood but before risalah (conveyance of the message)." (Dhahabi, Siyar A'lam al-Nubalaa, Vol. 1, p. 129)
He was also considered as other religious elites who testified the Prophethood of the Prophet (saw) but did not believe in him because there was no Islam yet. Ibn Hajar argued:
فَهَذَا ظاهِرُهُ أَنَّه أَقَرًّ بِنُبُوَّتِهِ ، وَلَكِنَّه مَاتَ قَبْلُ أَنْ يَدْعُوَ رَسُولُ اللهِ صَلَّى اللهُ عَلَيه وَسَلَّمَ النَّاسَ إِلَى الْإِسْلامِ ، فَيَكُونُ مِثْلَ بُحَيْرَا ، وَفِي إِثْبَاتِ الصُّحْبَةِ لَهُ نَظَرٌ
"The obvious meaning is he admitted his Prophethood but died before the Prophet (saw) called people to Islam.
He is similar (in status) to Buhaira, but attributing suhbah (companionship) is questionable." (Ibn Hajar, al-Isabah, Vol. 6, p. 476)
There are many, probably majority, of scholars who argued he was a companion, including Tabari, Baghawi, ibn Qani, ibn al-Sakan, Suhayli, ibn al-Qayim, Ibn Kathir, al-Barmawi, al-Kafiri, ibn al-Qadi Aljun, Abu Musa al-Madini, ibn Qudamah and others, and there is a weak narration of ibn Abbas (ra) supporting them. (Ibn Hajar, al-Isabah, Vol. 6, p. 607, Zirikli, al-A'lam, Vol. 8, p. 115, Suhayli, al-Rawd, Vol. 1, p. 173, Ibn al-Qayim, Zad al-Mi'aad, Vol. 3, p. 21, ibn Kathir, al-Bidayah, Vol. 3, p. 25, al-Biqa'i, al-Ta'reef bi Suhbat al-Sayd Waraqah, p. 42)
They argue the companion is the one who met the Prophet (saw), believed in him and died without nullifying his belief, all of which Waraqah did.
They cite the above narrations and argue they are clear in his testimony and belief in the Prophet (saw) after initiation of revelation. So, there is no way to argue the contrary. So, if he believed there is no god but Allah, and Muhammad is his Prophet, what is required further? These kinds of testimonies are accepted even out of hypocrisy.
The leading eighth-century Egyptian Shafi'i jurist Abu Zar'ah al-Iraqi stated the position of this group, it is proper to say the first male companion was Waraqah:
ينبغى أَنْ يُقَالُ إِنَّ أَوَّلَ مِنْ آمِنَ مِنَ الرُّجَّالِ وَرِقَّةً بْنُ نَوْفَلِ
"It is better to say the first man to believe was Waraqah ibn Nawfal (not Abu Bakr)." (Abu Zar'ah, Tarh al-Tarayuth, Vol. 4, p. 197)
There are other contemporary scholars, such as ibn Uthaymeen and Salih al-Fawzan, who argue he accompanied the Prophet (saw). (Ibn Uthaymeen, Fatawa wa Rasa'il ibn Uthaymeen, Vol. 8, p. 613)
A core evidence cited, in addition to the above narrations, is the Prophet (saw) on hearing a companion insult another said:
لاَ تَسُبُّوا أَصْحَابِي لاَ تَسُبُّوا أَصْحَابِي
"Do not revile my Companions, do not revile my Companions." (Sahih Muslim 2540)
Going on to say the same about Waraqah:
لَا تَسُبُّوا وَرَقَةَ
"Do not revile Waraqah." (al-Hakim 4211, Daraqutni argued it is mursal in al-Ilal, Vol. 14, p. 157)
So, he gave the same ruling to people who are of similar status, especially considering the above evidences that he already testified and believed after the revelation and probably with Khadijah (ra).
There is a profound book, "Badhl al-Nush wa al-Shafaqah fi al-Ta'reef bi Suhbat al-Sayd Waraqah", authored by the ninth-century scholar al-Biqa'i that discusses this issue in depth, citing arguments of all the groups, concluding Waraqah ibn Nawfal is most likely a companion.
Before revelation (ba'thah) began to the Prophet (saw), his wife's cousin Waraqah ibn Nawfalused to worship idols. He abandoned them at some point seeking a truer religion outside Mecca with his friends. He later converted to Judaism, followed by Trinitarian Christianity, then a purer monotheistic Christianity that taught Jesus was not divine. After revelation, he believed the Prophet (saw) and was likely a companion who passed away within a few days of revelation.
Ibn Hisham, al-Seerah al-Nabawiyah
Ibn Ishaq, al-Siyar wa al-Maghazi
Ibn Kathir, al-Bidayah wa al-Nihayah
Ibn Hajar, Fath al-Bari
Ibn Hajar, al-Isabah
Ibn Asakir, Tarikh Dimashq
Dhahabi, Siyar A'lam al-Nubalaa
Al-Halabi, al-Seerah al-Halabiyah
Qurtubi, al-Jami Li-Ahkam al-Qur'an
Bayhaqi, Dala'il al-Nubuwah
Suhayli, al-Rawd al-Aneef
Ibn al-Jawzi, Mutheer al-Gharam
Salihi, Subul al-Huda
Suhayb Abd al-Jabar, al-Jami al-Sahih li al-Sunan wa al-Masaneed
Abd al-Raziq Afifi, al-Fatawa
Tistiri, Qamus al-Rijal
Baladhri, Ansab al-Ashraf
Karamani, Umdat al-Qari
Ibn al-Qayim, Zad al-Mi'aad
Al-Biqa'i, al-Ta'reef bi Suhbat al-Sayd Waraqah
Abu Zar'ah, Tarh al-Tarayuth
Ibn Uthaymeen, Fatawa wa Rasa'il ibn Uthaymeen
Great answers start with great insights. Content becomes intriguing when it is voted up or down - ensuring the best answers are always at the top.
Questions are answered by people with a deep interest in the subject. People from around the world review questions, post answers and add comments.
Be part of and influence the most important global discussion that is defining our generation and generations to come | <urn:uuid:883bc212-f85d-4997-94ca-6ce5a3b7c919> | CC-MAIN-2021-21 | https://www.islamiqate.com/3420/what-was-the-religion-of-waraqah-ibn-nawfal?show=3421 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988796.88/warc/CC-MAIN-20210507150814-20210507180814-00096.warc.gz | en | 0.86885 | 6,999 | 2.71875 | 3 |
1. The Paschal Mystery: a living and life-giving mystery
Jesus' words and actions during his hidden life in Nazareth and his public ministry were saving acts that anticipated the fullness of his paschal mystery. “When his Hour comes he lives out the unique event in history which does not pass away: Jesus dies, is buried, rises from the dead and is seated at the right hand of the Father once for all (Rom 6:10; Heb 7:27; 9:12). His Paschal Mystery is a real event that occurred in our history, but it is unique: all other historical events happen once, and then they pass away swallowed up in the past. The Paschal mystery of Christ, by contrast, cannot remain only in the past because by his death he destroyed death. All that Christ is—all that he did and suffered for men—participates in the divine eternity and so transcends all times while being made present in them all. The event of the Cross and Resurrection abides and draws everything to life" ( Catechism of the Catholic Church – CCC – 1085).
As Benedict XVI wrote, “Being a Christian starts with the encounter with an event, a Person, which gives life a new horizon and a decisive direction." Hence “our Faith and the Eucharistic liturgy both have their source in the same event: Christ's gift of himself in the Paschal Mystery."
2. The Paschal Mystery in the time of the Church: liturgy and sacraments
Christ our Lord “carried out the redemption of humanity principally by the Paschal Mystery of his blessed passion, resurrection from the dead and glorious ascension." “It is this Mystery that the Church proclaims and celebrates in her liturgy" ( CCC , 1068).
“The liturgy then is rightly seen as an exercise of the priestly office of Jesus Christ. It involves the presentation of man's sanctification under the guise of signs perceptible by the senses and its accomplishment in ways appropriate to each of these signs. In it a full public worship is performed by the Mystical Body of Christ, that is to say by the Head and his members." “The whole liturgical life of the Church revolves around the Eucharistic sacrifice and the sacraments" (CCC , 1113).
“Seated at the right hand of the Father and pouring out the Holy Spirit on his Body which is the Church, Christ now acts through the sacraments he instituted to communicate his grace" ( CCC, 1084).
2.1. The sacraments: nature, origin and number
“The sacraments are efficacious signs of grace, instituted by Christ and entrusted to the Church, by which divine life is dispensed to us. The visible rites by which the sacraments are celebrated signify and make present the graces proper to each sacrament" ( CCC, 1131). “The sacraments are perceptible signs (actions, words) accessible to our human nature" (CCC, 1084).
“Adhering to the doctrine of the scriptures, apostolic traditions and the unanimous sentiment of the Fathers," we profess that “the sacraments of the new Law were all instituted by our Lord Jesus Christ."
“There are seven sacraments in the Church: Baptism, Confirmation or Chrismation, Eucharist, Penance, Anointing of the Sick, Holy Orders, and Matrimony" (CCC, 1113). “The seven sacraments touch all the stages and all the important moments of Christian life; they give birth and increase, healing and mission to the Christian life of faith. There is thus a certain resemblance between the stages of natural life and the stages of the spiritual life" ( CCC, 1210). They form an organic whole centered on the Eucharist, which contains the very Author of the sacraments (cf. CCC, 1211).
The sacraments signify three things: the sanctifying cause , which is the Death and Resurrection of Christ; the sanctifying effect or grace; the sanctifying end , which is eternal glory. “A sacrament is a sign that commemorates what preceded it: Christ's Passion; it demonstrates what is accomplished in us through Christ's Passion: grace; and it prefigures what the Passion pledges to us: future glory."
The sacramental sign, proper to each sacrament, is made up of material realities (water, oil, bread, wine) and human gestures (washing, anointing, laying on of hands, etc.) which are called the matter ; and also of words said by the minister of the sacrament, which are called the form . In reality, “a sacramental celebration is a meeting of God's children with their Father, in Christ and the Holy Spirit; this meeting takes the form of a dialogue through actions and words" ( CCC, 1153).
The liturgy of the sacraments contains an unchangeable part (what Christ himself established about the sacramental sign), and parts that the Church can change for the good of the faithful and greater veneration of the sacraments, adapting them to the circumstances of place and time. “No sacramental rite may be modified or manipulated at the will of the minister or the community" ( CCC , 1125).
2.2 The effects and necessity of the sacraments
All the sacraments confer sanctifying grace on those who place no obstacles. This grace “is the gift of the Holy Spirit who justifies us and sanctifies us" ( CCC , 2003). In addition, the sacraments confer the sacramental grace that is proper to each sacrament (cf. CCC, 1128): this is a specific divine help to obtain the aim of the particular sacrament.
We receive not only sanctifying grace, but the Holy Spirit himself. “Through the Church's sacraments, Christ communicates his Holy and sanctifying Spirit to the members of his Body" ( CCC, 739). The result of the sacramental life is that the Holy Spirit “deifies" the faithful, uniting them in a living union with Christ (cf. CCC , 1129).
The three sacraments of Baptism, Confirmation and Holy Orders, in addition to conferring grace, confer a sacramental character, an indelible spiritual seal impressed on the soul, by which a Christian shares in Christ's priesthood and is made a member of the Church according to different states and functions. The sacramental character remains for ever as a positive disposition for grace, as a promise and guarantee of divine protection, and as a vocation to divine worship and the service of the Church. For this reason these three sacraments cannot be repeated (cf. CCC, 1121).
The sacraments that Christ has given his Church are necessary (at least the desire to receive them) for salvation and for obtaining sanctifying grace; and none of them is superfluous, even though not all of them are necessary for everyone.
2.3 Effectiveness of the sacraments
The sacraments “are effective because in them Christ himself is at work; it is he who baptises, he who acts in his sacraments in order to communicate the grace that each sacrament signifies" ( CCC, 1127). The sacramental effect is produced ex opere operato (by the very fact of the action, the sacramental sign, being performed). “The sacrament does not act in virtue of the justice of the one who gives it or who receives it; it acts by the power of God." "From the moment that a sacrament is celebrated in accordance with the intention of the Church, the power of Christ and his Spirit acts in and through it, independently of the holiness of the minister" ( CCC , 1128).
The person who administers the sacrament puts himself at the service of Christ and the Church, which is why he is called the minister of the sacrament; and this person cannot be just any member of the faithful, but ordinarily requires the special configuration to Christ the Priest that is given by Holy Orders.
The effectiveness of the sacraments derives from Christ himself who acts in each sacrament; “nevertheless the fruits of the sacraments also depend on the disposition of the one who receives them" (CCC , 1129). The stronger one's faith, the deeper one's conversion of heart and adhesion to the will of God, the more abundant are the effects of grace that one receives (cf. CCC , 1098).
“Holy Mother Church has, moreover, instituted sacramentals. These are sacred signs that bear a resemblance to the sacraments. They signify effects, particularly of a spiritual nature, which are obtained through the intercession of the Church. By them men are disposed to receive the chief effects of the sacraments, and various occasions of life are rendered holy." “Sacramentals do not confer the grace of the Holy Spirit in the way that the sacraments do, but by the Church's prayer they prepare us to receive grace and predispose us to co-operate with it" ( CCC, 1670). “Among sacramentals, blessings (of persons, meals, objects and places) come first" (CCC , 1671).
3. The Liturgy
Christian liturgy “is essentially an actio Dei , an action of God which draws us into Christ through the Holy Spirit;" and it has a dual dimension, ascending and descending. “The liturgy is an 'action' of the whole Christ ( Christus totus )" (CCC, 1136), and thus “it is the whole community, the Body of Christ united with its Head, that celebrates" ( CCC , 1140). In the midst of the assembly Christ himself is present (cf. Mt 18:20), risen and glorious. Christ presides over the celebration. He, who acts inseparably united to the Holy Spirit, convokes, unites, and teaches the assembly. He, the Eternal High Priest, is the principle protagonist of the ritual action that makes present the salvific event, while making use of his ministers to re-present (to make present, really and truly, in the here and now of the liturgical celebration) his redeeming sacrifice, and to make us sharers in the life-giving gifts of his Eucharist.
While forming “as it were one mystical person" with Christ the Head, the Church acts in the sacraments as a “priestly society" that is “organically structured." Thanks to Baptism and Confirmation the priestly people become able to celebrate the liturgy. Therefore “liturgical services are not private functions, but are celebrations of the Church…and pertain to the whole Body of the Church. They manifest it, and have effects upon it. But they touch individual members of the Church in different ways, depending on their orders, their role in the liturgical services, and their actual participation in them."
The whole Church, in heaven and on earth, God and men, takes part in each liturgical celebration (cf. Rev 5). Christian liturgy, even though it may take place solemnly here and now in a specific place and express the yes of a particular community, is by its very nature “catholic." In union with the Pope, with the bishops in communion with the Roman Pontiff, and with the faithful of all times and places, the liturgy is directed towards all mankind, so that God be all in all ( 1 Cor 15:28). Hence this fundamental principle: the true subject of the liturgy is the Church, specifically the communio sanctorum , the communion of saints of all places and times. Therefore, the more fully a celebration is imbued with this awareness, the more specifically does it fulfil the spirit of the liturgy. One expression of this awareness of the unity and universality of the Church is the use of Latin and Gregorian chant in some parts of the liturgical celebration.
Thus we can say that the assembly that celebrates is the community of the baptised who “by regeneration and the anointing of the Holy Spirit are consecrated to be a spiritual house and a holy priesthood, that through all the works of Christian faithful they may offer spiritual sacrifices." This “common priesthood" is that of Christ, the Eternal High Priest, shared in by all his members. “Thus in the celebration of the sacraments all the assembly is leitourgos , each one according to their function, but in the unity of the Holy Spirit who acts in all" (CCC , 1144). For this reason taking part in liturgical celebrations, even though it does not encompass the entire supernatural life of the faithful, constitutes for them, as for the entire Church, the summit to which all their activity tends and the source from which they draw their strength. For “the Church receives and at the same time expresses what she herself is in the seven sacraments, thanks to which God's grace concretely influences the lives of the faithful, so that their whole existence, redeemed by Christ, can become an act of worship pleasing to God."
When we refer to the assembly as the “subject" of the liturgical celebration, we mean that each of the faithful, acting as a member of the assembly, carries out what and only what corresponds to him or her. The members do not all have the same function ( Rom 12:4) Some are called by God in and through the Church to a special service of the community. These servants are chosen by the sacrament of Holy Orders, by which the Holy Spirit configures them to Christ the Head for the service of all the members of the Church. As John Paul II clarified on several occasions, “ in persona Christi means more than offering 'in the name of' or 'in the place of' Christ. In persona means in specific sacramental identification with the Eternal High Priest who is the author and the principle subject of this sacrifice of his, a sacrifice in which, in truth, nobody can take his place." As the Catechism graphically says, “the ordained minister is, as it were, the icon of Christ the priest" ( CCC , 1142).
“The mystery celebrated in the liturgy is one, but the forms of its celebrations are diverse. The mystery of Christ is so unfathomably rich that it cannot be exhausted by its expression in any single liturgical tradition" ( CCC , 1200-1201). The liturgical rites presently in use in the Church are the Latin (principally the Roman rite, but also the rites of certain local churches, such as the Ambrosian rite, or those of certain religious orders) and the Byzantine, Alexandrian or Coptic, Syriac, Armenian, Maronite and Chaldean rites" (CCC, 1203). “Holy Mother Church holds all lawfully recognised rites to be of equal right and dignity, and wishes to preserve them in the future and to foster them in every way."
Juan Jose Silvestre
Catechism of the Catholic Church , nos. 1066-1098; 1113-1143; 1200-1211 and 1667-1671.
Saint Josemaría, Homily “The Eucharist, Mystery of Faith and Love," in Christ is Passing By , nos 83-94; cf. also Conversations , no. 115.
Joseph Ratzinger, The Spirit of the Liturgy , Ignatius Press.
Benedict XVI, Enc. Deus Caritas Est , 25 December 2005.
Benedict XVI, Sacramentum Caritatis , 22 February 2007.
Vatican II, Sacrosantum Concilium , 5; cf. also CCC 1067.
Ibid ., no 7. CCC , 1070.
Council of Trent: DZ 1600 – 1601; cf. also CCC, 1114.
St Thomas Aquinas, Summa Theologiae , III, q.60, a 3; cf. also CCC , 1130.
Cf. CCC , 1205; Council of Trent: DZ 1728; Pius XII: DZ 3857.
Cf. Council of Trent: DZ 1606.
“The desire and work of the Holy Spirit in the heart of the Church is that we may live from the life of the risen Christ" ( CCC, 1091). “He unites the Church to the life and mission of Christ" ( CCC, 1092); “the Holy Spirit heals and transforms those who receive him by conforming them to the Son of God" ( CCC, 1129).
Cf. Council of Trent: DZ 1609.
Ibid ., DZ 1604.
Ibid ., DZ 1608.
St Thomas Aquinas, Summa Theologiae , III, q.68 art.8.
The ordained priesthood “guarantees that it really is Christ who acts in the sacraments through the Holy Spirit for the Church. The saving mission entrusted by the Father to his incarnate Son was committed to the apostles and through them to their successors: they receive the Spirit of Jesus to act in his name and in his person. (cf. Jn 20:21-23; Lk 24:47; Mt 28:18-20). The ordained minister is the sacramental link that ties the liturgical action to what the apostles said and did and through them to the words and actions of Christ, source and foundation of the sacraments" ( CCC , 1120). Even though the effectiveness of the sacrament does not depend on the moral qualities of the minister, nevertheless his faith and devotion, as well as contributing to his own personal sanctification, can be of considerable help to foster the good dispositions of the recipient of the sacrament and in consequence the fruit obtained.
Vatican II,. Sacrosanctum Concilium , 60; (cf. CCC, 1667).
Benedict XVI, Sacramentum Caritatis , 37
“On the one hand, the Church, united with her Lord and 'in the Holy Spirit' ( Lk 10:21), blesses the Father 'for his inexpressible gift' (2 Cor 9:15) in her adoration, praise and thanksgiving. On the other hand, until the consummation of God's plan, the Church never ceases to present to God the Father the offering of his own gifts, and to beg him to send the Holy Spirit upon that offering, upon herself, upon the faithful and upon the whole world, so that through communion in the death and resurrection of Christ the Priest, and by the power of the Holy Spirit, these divine blessings will bring forth the fruits of life, 'to the praise of his glorious grace' (Eph 1:6)" (CCC , 1083).
Cf. Pius XII, Enc. Mystici Corporis (quoted in CCC , 1119).
Vatican II,. Sacrosanctum Concilium , 26 (quoted in CCC, 1140).
“May this sacrifice be effective for all mankind-- Orate, fratres, the priest invites the people to pray--because this sacrifice is yours and mine, it is the sacrifice of the whole Church. Pray, brethren, although there may not be many present, although materially there may be only one person there, although the celebrant may find himself alone, because every Mass is a universal sacrifice, the redemption of every tribe and tongue and people and nation (cf. Rev 5:9).
“Through the communion of the saints, all Christians receive grace from every Mass that is celebrated, regardless of whether there is an attendance of thousands of persons or whether it is only a boy with his mind on other things who is there to serve. In either case, heaven and earth join with the angels of the Lord to sing Sanctus, Sanctus, Sanctus… " (St Josemaria Escriva, Christ is Passing By , no 89).
Benedict XVI, Sacramentum Caritatis , 62; Vatican II, Sacrosanctum Concilium , 54.
Vatican II, Lumen Gentium , 10.
Ibid . 10 and 34; Decr. Presbyterorum Ordinis , 2.
Cf. Vatican II Sacrosanctum Consilium , 20.
Benedict XVI, Sacramentum Caritatis , 16.
Cf. Vatican II, Presbyterorum Ordinis , 2 and 15.
John Paul II, Enc. Ecclesia de Eucharistia , 29. Footnote 59 cites the following words from Pius XII's encyclical Mediator Dei : “The minister of the altar acts in the person of Christ in as much as he
is head, making an offering in the name of all the members."
Vatican II, Sacrosanctum Concilium. 4. | <urn:uuid:ea7cffbf-3a20-407a-9f57-7b0c13513508> | CC-MAIN-2021-21 | https://opusdei.org/en/article/topic-17-introduction-to-the-liturgy-and-the-sacraments/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00576.warc.gz | en | 0.940533 | 4,404 | 2.703125 | 3 |
« PreviousContinue »
In 1900, 24.9 per cent of all the coal mined was mined by machinery, while in 1926, 71.7 per cent was mined by machinery. The pick miner is rapidly disappearing.
Consolidation.—Consolidation is the remedy for the coal situation. Stabilization, brought about by a modification of the antitrust laws or by the establishment among the operators and miners themselves of a code of ethics and fair practices, would not be objectionable to the American people. Stable employmient, stable output, and stable markets will mean satisfactory and constant employment of workers, satisfactory and profitable returns for operators, and a steady and regular flow of coal to supply the needs of industrial and domestic consumers.
Any plan with such a contemplation would require the earnest cooperation and aid of the railway and transportation compan.es, the manufacturing enterprises, and the public-utilities companies.
If the more than 7,000 separate mines could be brought under the control of between 100 and 200 different companies, coal would be preserved and efficiency promoted ; and the companies would be able to finance and purchase necessary machinery. Large units could buy and store their coal in large quantities and it would not be necessary to open new mines.
Government regulation.-In answer to a query as to whether the public would stand to see the Sherman Antitrust Act repealed, in so far as the coal interests are concerned, unless there was some governmental regulation of it, Secretary Davis said:
"The Government always has regulation of it. I used to be afraid of every large corporation, but since I have become Secretary of Labor and can see what the Government can do, even without direct power, I am satisfied we ought not to fear large corporations."
In reply to a question by Senator Goff, of West Virginia, which was substantially a replication of a similar question put by Senator Sackett, of Kentucky, as to whether or not the limitation by a Federal power of the opening of mines and the production of coal in an economic sense would constitute the taking of private property without due process of law, as that phrase is used in the fourteenth amendment to the United States Constitution, Secretary of Labor Davis said that he did not advocate Government ownership; that if voluntary consolidations were effected Government ownership would not necessarily follow; that his only desire was to see the abolition of the deplorable conditions now existing in the coal fields; and that he thought the committee should make inquiry of the Attorney General of the United States as to the legal principles involved in connection with the coal situation, the pending bill, and any other matters relating to the same.
BRIEF OF THE LAW AND FACTS IN SUPPORT OF SENATE BILL 4490 PROVIDING FOR A
BITUMINOUS COAL COMMISSION BY COUNSEL FOR UNITED MINE WORKERS OF AMERICA
THE POWER OF CONGRESS
This bill deals exclusively with corporations engaged in shipping coal in interstate commerce and is predicated on the power of Congress to license and regulate such corporations. The bill does not affect corporations that are engaged simply in mining coal. It does not regulate or license individuals engaged in shipping coal; except that section 2 permits marketing pools by corporations for the purpose of agreeing on prices; and any individual shipper may jo'n such pool upon the same terms that a corporation may. The bill does not require the individual to do this but permits him to do so, in order that there may be no claim that the individual was discriminated against in the allowance of such marketing pools. Apart from this instance in which the hand of Congress is not laid upon the individual but simply holds the door open for him, there is no reference to the rights of natural persons.
(1) Nature of a corporation.—A corporation is an artificial person which can have not legal existence out of the boundaries of the sovereignty by which it is created." (Bank v. Earl, 13 Pet. (U. S.) 519, p. 588.) In Paul v. Virginia (8 Wall. (U. S.) 168) the court said at page 181 :
"A grant of corporate existence is a grant of special privilege to the corporation, enabling them to act for certain designated purposes as a single individual, and exempting them (unless otherwise specially provided) from individual
liability. The corporation being the mere creation of local law, can have no legal existence beyond the limits of the sovereignty where created. The recognition of its existence even by other States and the enforcement of its contracts made therein, depend purely upon the comity of those States—a comity which is never extended where the existence of the corporation or the exercise of its powers are prejudicial to their interests or repugnant to their policy. Having no absolute right of recognition in other States, but depending for such recognition and the enforcement of its contracts upon their assent, it follows, as a matter of course, that such assent may be granted upon such terms and conditions as those States may think proper to impose. They may exclude the foreign corporation entirely; they may restrict its business to particular localities, or they may exact such security for the performance of its contracts with their citizens as in their judgment will best promote the public interest. The whole matter rests in their discretion."
A private corporation has also been defined by our Supreme Court, in a case involving an excise tax upon all corporations with income of more than $5,000, as follows:
“ The thing taxed is not the mere dealing in merchandise, in which the actual transactions may be the same, whether conducted by individuals or corporations, but the tax is laid upon the privileges which exist in conducting business with the advantages which inhere in the corporate capacity of those taxed and which are not enjoyed by private firms or individuals. These advantages are obvious and have led to the formation of such companies in nearly all branches of trade. The continuity of the business, without interruption by death or dissolution, the transfer of property interests by the disposition of shares of stock, the advantages of business controlled and managed by corporate directors, the general absence of individual liability, these and other things inhere in the advantages of business thus conducted, which do not exist when the same business is conducted by private individuals or partnerships. It is this distinctive privilege which is the subject of taxation, not the mere buying or selling or handling of goods, which may be the same, whether done by corporations or individuals." (Flint v. Stone Tracy Co., 220 U. S. 161.)
(2) In relation to the fifth and fourteenth amendments.-A corporation is a person” within the due-process clause of the fifth amendment, and the “equal protection" clause of the fourteenth amendment. But it is not a citizen within the clause of the fourteenth amendment that “no State shall make or enforce any law which shall abridge the privileges or immunities of a citizen of the United States." A natural person (citizen) has a right to transfer himself and his activities from one State to another and to do business in any State. This is not true of a corporation. A State can not, by creating a corporation, endow it with the absolute right to transact interstate commerce. (Northern Securities Co. v. United States, 193 U. S. 345.) (As stated in the recent case of Liberty Warehouse Co. v. Burley Tobacco Association, decided February 20, 1928 :
“A corporation does not possess the privileges and immunities of a citizen of the United States within the meaning of the Constitution." (Citing cases.)
The artificial person created by the State is as much a foreign corporation to the national sovereignty as it is foreign to the sovereignty of another State; and when it undertakes to exercise a franchise under the national sovereignty it becomes subject to license and regulation if Congress sees fit to require such license and regulation. The right of Congress to regulate commerce applies to natural as well as artificial persons. But the right to admit or reject, license or regulate artificial persons created by States that seek to exercise corporate franchises under national sovereignty arises out of the inherent nature of sovereignty. The power of Congress over such artificial persons is not to be measured by its power over natural persons. Logically it must be measured by the analogous power which a State has over a corporation created by another State which ventures within the former's dominion.
(3) State and national sovereignties.-In Buffington v. Day (11 Wal. (U. S.) 113) the Supreme Court said at page 124:
“The General Government and the States, although both exist within the same territorial limits, are separable and distinct sovereignties, acting separately and independently of each other, within their respective spheres."
The State is no more a sovereignty than is the Nation. The first has general powers, the latter limited powers, but within the domain of the subjects committed to the latter, its sovereignty is paramount and supreme. In Champion u. Ames (the lottery case) (188 U. S. 321) the court at page 347 quotes from Gibbons v. Ogden the following statement by Marshall, C. J.:
“If, as has always been understood, the sovereignty of Congress, though limited to specific objects, is plenary as to those objects, the power over commerce with foreign nations, and among the several States, is vested in Congress as absolutely as it would be in a single government, having in its constitution the same restrictions on the exercise of the power as are found in the Constitution of the United States."
The court continued to quote from the concurring opinion of Justice Johnson as follows:
"The power to regulate commerce' here meant to be granted was that power to regulate commerce which previously existed in the States. But what was that power? The States were, unquestionably, supreme; and each possessed that power over commerce which is acknowledged to reside in every sovereign State.
The law of nations, regarding man as a social animal, pronounces all commerce legitimate in a state of peace, until prohibited by positive law. The power of a sovereign State over commerce, therefore, amounts to nothing more than a power to limit and restrain it at pleasure. And since the power to prescribe the limits to its freedom necessarily implies the power to determine what shall remain unrestrained, it follows that the power must be exclusive; it can reside but in one potentate; and hence the grant of this power carries with it the whole subject, leaving nothing for the State to ct upon."
If within this national sovereignty an artificial person created by another sovereignty assumes to exercise its franchise, the scope of national control must be measured by the power of control universally exercised by the State in analogous cases; otherwise the sovereignty of the Nation breaks down in its contact with these foreign artificial persons and becomes a mere caricature of the sovereignty which the law. recognizes in the State.
Powers and rights appertaining to the national sovereignty are as certain as the powers and rights of any sovereignty. For instance, the right of eminent domain is an attribute of national sovereignty, though not expressly set out in the Federal Constitution. In United States v. Jones (109 U. S. 513, p. 518), the court said:
"The power to take private property for public uses, generally termed the right of eminent domain, belongs to every independent government. It is an incident of sovereignty and, as said in Boom Co. v. Patterson (98 U. S. 406) requires no constitutional recognition.
It is undoubtedly true that the power of appropriating private property to public uses vested in the General Government-its right of eminent domain, which Vattel defines to be the right of disposing, in case of necessity, and for the public safety, of all the wealth of the country-can not be transferred to a State any more than its other sovereign attributes."
The power, of course, to create corporations is nowhere conferred by the Federal Constitution, but the right to do so has often been sustained. Such corporations may be for the purpose of carrying out some governmental function, in which case the corporate franchise extends to the Territorial limits of the Nation and is beyond State regulation; or, as the sovereign legislative power over Territorial possessions and the District of Columbia, Congress has often created private corporations, which are universally treated as foreign corporations by the States.
In McCulloch v. Maryland (4 Wheat. (U. S.) 316), the State undertook to tax a branch of the United States bank established in Maryland. Marshall, C. J., said at page 410:
"The creation of a corporation, it is said, appertains to sovereignty. This is admitted. But to what portion of sovereignty does it appertain? Does it helong to one more than another? In America the powers of sovereignty are divided between the Government of the Union and those of the State. They are each sovereign with respect to the objects committed to it and neither sovereign with respect to the objects committed to the other."
(4) The right to exercise a corporate franchise in another sovereignty.-It is the universal rule that a private corporation acquires no right through its charter by a State to project itself or its business into another sovereignty. Another State may license, regulate, or exclude it as it sees fit. The limitations upon this rule will be considered under the next heading; but the general rule to license, regulate, or exclude such a corporation rests fundamentally in the sovereignty invaded by the artificial creation of another sovereignty.
In Hammond Packing Co. 6. Arkansas (212 V. S. 322) the following from the syllabus states the law:
The right of a State to prevent foreign corporations from continuing to do business within its borders is a correlative of its right to exclude them there. from; and, as the power is plenary, the State, as long as no contract is impaired, may exert it from consideration of acts done in another jurisdiction.
* The difference between the extent of power which the State may exert over the doing of business within its borders by an individual and that which it can exercise as to corporations, furnishes a distinction authorizing a classification between the two which does not violate the equal-protection cause of the fourteenth amendment."
(5) Constitutional limitations.—There are two constitutional limitations upon this right of a State to license, regulate, or exclude a corporation created by another sovereignty. First, it can not impose conditions upon its acceptance of such corporation, which deprives it of constitutional rights. In Hanover Fire Ins. Co. t. Carr (272 U. S. 494) the court said:
“ It was settled in Bank of Augusta 1. Earle (13 Pet. 519, 10 L. ed. 274), Paul v. Virginia (8 Wall. 168, 19 L. ed. 357) ; Ducat v. Chicago (10 Wall. 410, 19 L. ed. 972), and Horn Silver Min. Co. 1. New York (143 U. S. 305, 36 L. ed. 164, 4 Inters. Com. Rep. 57, 12 Sup. Ct. Rep. 403), that foreign corporations can not do business in a State except by the consent of the State ; that the State may exclude them arbitrarily or impose such conditions as it will upon their engaging in business within its jurisdiction. But there s a very important qualification to this power of the State, the recognition and enforcement of which are shown in a number of decisions of recent years. That qualification is that the State may not exact as a condition of the corporation's engaging in business within its limits that its rights secured to it by the Constitution of the United States may be infringed."
Second, if the corporation created by one State desires to transact interstate commerce in another State, it can not be depriced of that right by the latter State. The reason for this is, the corporate franchise thus to be exercised (namely, interstate commerce) is a subject within the paramount sovereign control of Congress. This limitation illustrates the distinction between the State and national sovereignties in their relation to those artificial persons. The States have surrendered all sovereignty in this field to Congress, including the sovereign right to license, regulate, or exclude a foreign corporation, with respect to the exercise of its corporate franchise in interstate State commerce. Certainly that right, which in the absence of the Federal Constitution would rest in the State, must rest in Congress.
In the case of Crutcher v. Kentucky (141 U. S. 47), the court was considering the validity of a Kentucky statute which undertook to require a license for foreign express companies, the license to be based on certain requiremennts. The court said: “ It is clear
that it would be a regulation of interstate commerce in its application to corporations or associations engaged in that business; and that is a subject which belongs to the jurisdiction of the National and not the State Legislature. Congress would undoubtedly have the right to exact from associations of that kind any guarantees it might deem necessary for the public security and for the faithful transaction of business; and as it is within the province of Congress, it is to be presumed that Congress has done, or will do, all that is necessary and proper in that regard."
In this case the court further said:
“To carry on interstate commerce is not a franchise or privilege granted by the State; it is a right which every citizen of the United States is entitled to exercise under the Constitution and laws of the United States; and the accession of mere corporate facilities, as a matter of convenience in carrying on their business, can not have the effect of depriving them of such right, unless Congress should see fit to interpose some contrary regulations on the subject."
This is a clear recognition of the power of Congress to license and regulate State corporations engaged in interstate commerce.
(6) Relation of either sovereignty to corporations created by the other.From the above it clearly appears that if an artific al person is created by one sovereignty it can only exercise its corporate franchise within the other sovereignty upon such terms as the latter sees fit to impose. If the corporation created by Congress is for public or governmental purposes, it can not be licensed, taxed, or regulated by a State. But corporations created by Congress for private purposes are universally treated as foreign corporations by the various States for the purpose of license, regulation, or tax. In Flint v. Stone Tracy Co. (220 U. S. 107, p. 152), the court 'said:
" In Osborn v. Bank of United States, supra, leading case upon the subject, whilst it was held that the Bank of the United States was not a private corporation but a public one, created for national purposes, and therefore beyond the taxing power of the State, Chief Justice Marshall, in delivering the opinion of the court, conceded that if the corporation had been originated for the management of an individual concern, with private trade and profit for its great end and principal object, it may be taxed by the State.”
In 19 Cyc. 1251 is found a list of cases in which the States have treated private corporations created by Congress as foreign corporations. In Daly v. National Life Insurance Co. (64 Ind. 1) the State of Indiana required a life insurance company chartered by Congress to submit to the State regulations governing foreign insurance companies. The following quotation from the syllabus states the holding:
“An insurance company created by an act of Congress is a foreign corporation subject to the requirements of the statute of this State approved June 17, 1852, respecting a foreign corporation and their agents in this State.'”
It is equally true that if the State creates a private corporation which undertakes to exercise its corporate franchise within and under the national sovereignty, it is a foreign corporation with reference to that sovereignty and subject to license and regulation as such. In Hale v. Henkel (201 U. S. 43) the question arose as to the right of the Federal Government to require an officer of a corporation engaged in interstate commerce to produce evidence and testify in a proceeding against the corporation. The distinction is made between the visitatorial rights of the Government over an individual and a corporation engaged in such commerce:
"Conceding that the witness was an officer of the corporation under investigation, and that he was entitled to assert the rights of the corporation with respect to the production of its books and papers, we are of the opinion that there is a clear distinction in this particular between an individual and a corporation, and that the latter has no right to refuse to submit its books and papers for an examination at the suit of the State. The individual may stand upon his constitutional rights as a citizen. He is entitled to carry on his private business in his own way. His power to contract is unlimited. He owes no duty to the State or to his neighbore to divulge his business or to open his doors to an investigation so far as it may tend to criminate him. He owes no such duty to the State, since he receives nothing therefrom beyond the protection of his life and property. His rights are such as existed by the law of the land long antecedent to the organization of the State and can only be taken from him by due process of law and in accordance with the Constitution.
“Upon the other hand, the corporation is a creature of the State. It receives certain special priviliges and franchises and holds them subiect to the laws of the State and the limitations of its charter. Its powers are limited by law. It can make no contract not authorized by its charter.
It would be a strange anomaly to hold that a State, having chartered a corporation to make use of certain franchises, could not, in the exercise of its sovereignty, inquire how these franchises had been employed, and whether they had been abused, and demand the production of the corporate books and papers for that purpose.
“It is true that the corporation in this case was chartered under the laws of New Jersey, and that it receives its franchise from the legislature of that State; but such franchises, so far as they involve questions of interstate commerce, must also be exercised in subordination to the power of Congress to regulate such commerce, and in respect to this the General Government may also assert a sovereign authority to ascertain whether such franchises have been exercised in a lawful manner, with a due regard to its own laws. Being subject to this dual sovereignty, the General Government possesses the same right to see that its own laws are respected as the State would have with respect to the special franchises vested in it by the laws of the State. The powers of the General Government in this particular in the vindication of its own laws are the same as if the corporation had been created by an act of Congress. It is not intended to intimate, however, that it has a general visitatorial power over State corporations."
To compel a corporation (unlike an individual) to produce its books for evidence, though they be self-incriminating, is of itself a regulation. In the above case there was no statute requiring this. The Federal judiciary, in aid of a judicial process, compelled the New Jersey corporation to submit to this regulation, and on the ground that it was exercising its franchise in inter | <urn:uuid:de635232-36f5-4978-8470-c725fbe6522e> | CC-MAIN-2021-21 | https://books.google.ca/books?id=qzWploCDLusC&pg=PA297&vq=problem&dq=related:ISBN819000610X&lr=&output=html_text | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00216.warc.gz | en | 0.960327 | 4,749 | 3.15625 | 3 |
Art Projects for Kids is a collection of fun and easy art projects that includes hundreds of how-to-draw tutorials.

Siting matters when you build a chickee: in full sun, the thatched roof is likely to last about 15 years, while a chickee built in the shade may last only 4 to 5 years. Chickee huts add a tropical panache to houses, backyards, or poolside areas. According to Linda A. Holley, author of "The History and Design of the Cloth Tipi," traditional Native American teepees served as functional living quarters, not artistic displays or tourist attractions.

If you keep hens, build the chicken coop or cage to allow about 3 sq ft (0.28 m²) of floor space per chicken. With their keen eye for insect pests, chickens also make great gardening companions. How can a well-designed home change someone's life?
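That 3-square-foot rule of thumb makes coop sizing simple arithmetic. Here is a minimal sketch of the calculation; the per-bird figures are just the guideline quoted above, not a husbandry standard.

```python
# Quick coop-size check based on the rule of thumb above:
# about 3 sq ft (0.28 m^2) of floor space per chicken.

SQFT_PER_CHICKEN = 3.0
M2_PER_CHICKEN = 0.28

def min_coop_area(n_chickens: int) -> tuple[float, float]:
    """Return the minimum floor area (sq ft, m^2) for a flock."""
    return n_chickens * SQFT_PER_CHICKEN, n_chickens * M2_PER_CHICKEN

sqft, m2 = min_coop_area(8)
print(f"8 chickens need at least {sqft:.0f} sq ft ({m2:.2f} m^2)")
```

Scale the result up generously if the birds will spend most of the day shut in rather than ranging the yard.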
Data displays such as tables and graphs organize information in a meaningful way and help you draw conclusions about the problem you are investigating.

For a Cherokee village diorama, paint a background on the back and side panels of the inside of the box with acrylic paint, depicting the forests around the clearing of the village with hills or mountains off in the distance. Make the majority of the surface flat, with the banks of the river slanting down toward the bottom of the box.

One school flock began with an ark and eight hybrid hens bought from a local poultry centre. After soaking in vinegar, you can bounce the treated egg like a ball. For a chicken mummy project, cover the chicken completely with the drying mixture, both inside and outside, rubbing it thoroughly on all exposed sides; if you have students helping you, make sure everyone wears rubber gloves and washes up thoroughly afterward.

A new curriculum makes design-build projects a part of every student's education. Buy some chicken wire and give a few of these designs a try. For papier-mache, dip each strip of paper into the paste and lay it onto the form, and later wait eight hours for the paint to dry.
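The bouncing egg works because the acetic acid in vinegar dissolves the shell's calcium carbonate (CaCO3 + 2 CH3COOH -> Ca(CH3COO)2 + H2O + CO2). As a back-of-the-envelope check that a cup or two of vinegar is plenty, here is a rough stoichiometry sketch; the shell mass and vinegar strength are assumed typical values, not measurements from any of these projects.

```python
# Rough estimate of how much 5% household vinegar it takes to
# dissolve one eggshell (CaCO3 + 2 CH3COOH -> Ca(CH3COO)2 + H2O + CO2).
# Assumed values: shell ~5 g, ~95% calcium carbonate by mass.

M_CACO3 = 100.09   # g/mol, calcium carbonate
M_ACETIC = 60.05   # g/mol, acetic acid

shell_g = 5.0
caco3_g = shell_g * 0.95
mol_caco3 = caco3_g / M_CACO3
mol_acid = 2 * mol_caco3            # 2 mol acid per mol CaCO3
acid_g = mol_acid * M_ACETIC
vinegar_g = acid_g / 0.05           # vinegar is ~5% acetic acid by mass

print(f"Roughly {vinegar_g:.0f} g of vinegar per shell")
```

The answer comes out near 100 g, which is why simply covering the egg in a jar of vinegar works with room to spare.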
Chikee or chickee ("house" in the Creek and Mikasuki languages spoken by the Seminoles and Miccosukees) is a shelter supported by posts, with a raised floor, a thatched roof, and open sides. How do you build a chickee for a 3rd grade project? Don't be shy about using this page as your cheat sheet.

For papier-mache projects, allow each layer to dry overnight. A paper mache mountain provides a frugal way to build a volcano for a school project or to add to a model railroad landscape: build a frame, cover it with tape and paper mache, then paint and add accessories. For a model hogan, large, flat stones can be used to create the layered walls of a polygon.

Ancient Egypt had fascinating cultural customs, and mummification is one that many children learn about in school; one way to teach students about it is to do a mummy project. Vinegar, meanwhile, reacts with the calcium compounds found in eggshells and chicken bones, so you can make a rubbery egg or bendable chicken bones. Before treating a bone, get a sense of how strong it is.

Chickens are important birds: they give us the majority of our eggs, and you can make your own arts-and-crafts chickens, roosters, and hens with the projects and activities here for children, teens, and preschoolers. Here are ten must-try air fryer chicken recipes you should make for dinner this week.
For the apple mummification experiment, cut through the top of an apple to slice it in half, and then in half again, and instruct each student to precisely weigh his or her apple slice. For papier-mache, it is best to lay the strips criss-crossing each other in many directions to make a stronger finished product. Gonçalves's page at http://chaosobral.org/kiara/howtobuild_chickee.htm walks through building a chickee model.
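The before-and-after weighing is the data half of the apple experiment: comparing percent mass loss shows how much water the drying treatment pulled out of each slice. A minimal sketch of that comparison, with made-up example weights (the slice labels and numbers are illustrations, not real class data):

```python
# Percent mass loss for each student's apple slice.
# (before, after) weights in grams; example values only.

samples = {
    "slice A (buried in drying mix)": (32.0, 11.5),
    "slice B (control, open air)": (31.0, 22.4),
}

def pct_loss(before: float, after: float) -> float:
    """Mass lost, as a percentage of the starting weight."""
    return 100.0 * (before - after) / before

for name, (before, after) in samples.items():
    print(f"{name}: {pct_loss(before, after):.1f}% mass lost")
```

Having each student compute the same percentage by hand, then check it against the class spreadsheet, doubles as an arithmetic exercise.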
One of our favourite chicken or chick crafts to date: these adorable little pyramid juggling chooks!

You can make a hogan as part of a school project from natural materials found outdoors; cover the roof portion with branches and mud. For the Cherokee village diorama, glue the bottoms of each square to the ground near the river with a hot glue gun so that they form the houses, use kitchen matchboxes for the beds, and glue small matchboxes together to make working dressers.

While keeping eggs would be the ideal way to illustrate the life cycle, there are plenty of fun DIY projects that kids can help make which easily send the message across. One school garden project aims to further develop its education around sustainable food systems by keeping chickens year-round through the purchase of a winterized chicken coop. You can also make a simple, innovative hen coop (hen/chicken house, murgi ka ghar, kombadiche ghar); the materials required are cardboard and a hot glue gun with glue sticks. Fantastic resources from Free Range Learning will tell you all you need to know about keeping chickens in school and help you on your way to hen heaven.

Potjiekos, a traditional Afrikaner dish, originated with the Voortrekkers in the 1800s and is still widely prepared and enjoyed in South Africa today.
You can use chicken wire for purely decorative outdoor projects as well. Use tissue paper for curtains and area rugs. Youtube Music People Will Say We're In Love, https://player.vimeo.com/video/313479011?loop=0, how to make a chickee for a school project. … This page includes funny chicken crafts for kindergarten students, preschoolers and primary school students.Funny activities related to the chicken with the kids. During the 2014-2015 school year, some high school students in Berkeley, California, asked that question as they took on a big project: to build a pair of tiny houses.. Make: projects automatic chicken coop to go in or out of the coop, the 2 chickens must cross a motion detector, which sends me a notification that they’ve. Baskets are clay filled with pink sugar (berries) and steak seasoning (nuts). There are lots of creative ways to cook chicken in your air fryer! The cover should rest just at the join point without slipping. Oct 9, 2014 - This is a chickee. To create the walls of a polygonal hogan, use either larger sticks to represent logs, miniature fake logs from a hobby store or flat stones. Make Rubber Chicken Bones . If you have students helping you, make sure everyone wears rubber gloves and washes up thoroughly afterward. If you have an air fryer and you need some chicken dinner ideas, this list is for you! Set up coops or cages for your chickens. Jan 9, 2016 - This Pin was discovered by Brenda Nikielski. Build a frame and cover it with tape and the paper mache, then paint and add accessories. voltar Explain the cruelty inherent in the egg … Answer. Cover the frame walls with smaller branches and mud, leaving a triangular shaped door opening. Copyright 2020 Leaf Group Ltd. All Rights Reserved. on human precepts. When I was pondering some ideas for homemade Mother’s Day gifts that we could make here in my home daycare, Paper Mache Bowls came to mind.. Why? 
Wild goats are mostly found in Asia and Europe while the domestic goats can be found in the US, the Caribbean and surrounded areas. SYSTEMS ECOLOGY and ECOLOGICAL In a large, dirt-filled base, you can create an authentic looking Navajo compound containing several types of hogans. Poles in a bundle and wrap the free end of the box H location universe -- not merely wielding founded. Functional living quarters Enclosure for a school project from natural materials found outdoors or. To your home décor the educational resource for people of All ages space needed raise! Literacy, numeracy, bullying and other topics follow this link, sign up … make rubber chicken bones skills... Are the second-largest federally recognized tribe in the United States may want to extract as much calcium as possible soak. Been completed and a second is expected to be finished soon plus, with the challenges. Has a lot of chicken crafts, crafts for kids is a traditional Afrikaner hailing... The point where the poles in a bundle and wrap the free end of the string around the chicken... A mission to share my love of chickens with the staff members involved ( 0.28 m )! 2 tablespoons ) into a part of a polygon cook chicken in your yard dinner ideas, you can chicken! Peel them gently off the ground for protection from flooding and animals appropriate a... Frame and cover the frame walls with smaller branches and mud, leaving it the... More than 900 articles for a variety of clients since 2010 to raise flock! A theme, here ’ s needs, capabilities and interests the second-largest recognized! Creative ways to cook chicken in particular that students create to show what they about... The box traditional Native American project: my daughter had a project to make Potjiekos: Background: Potjiekos... Thinned with water to the twig frame of the walls to where they meet at the point... Cooking and doing experiments, so kitchen Science experiments are like the best experience... 
Each student in the house larger child-size teepee replica can then be used as a potential reading or. Pink sugar ( berries ) and steak seasoning ( nuts ) hot melt glue gun so that form... Reflects tribal beliefs about balance and tradition interesting attraction for Social Studies, list. Donors ; how to make a chickee for a school project projects ; project leader Meredith H location time because of the flat... 'S high school project '' on Pinterest straw, thatch-style, on the thatched roof … rubber... South Africa those aspects of your chosen cover material with scissors any claims against Education.com that thereof. Chickee amongst some trees so you reduce the wind load factor on the roof to... Third Grade, students learn about Native American Village 14 '' x16 '' for Social and... 'S a bit of time because of the box a door at either.... The base to the land 900 articles for a variety of units after that project she... Glue along the banks of the box 1 - chickee / Chiki hut Guidelines 11.23.09 cam chickee Chiki. Creek is blue construction paper, spread elmer 's glue on it then with! Hatching eggs at school can be used to create the best of both for... The cone-shaped how to make a chickee for a school project until only a small scale backyard poultry entrepreneur you can build architect house! Ft ( 0.28 m 2 ) per chicken chicken bone without breaking.... Even at a non-intensive level, it is best to lay the sticks, straws or pipe cleaners to. Connection to the center point of your personality you feel others are missing and animals gather the poles in meaningful. For different purposes they organize information in a bundle and tie it securely this... Chickee hut model - Google Search cover the frame walls with smaller branches and.. Constructing a larger child-size teepee replica can then be used to create the best learning experience for the... Built for different purposes rain and cold weather lot of chicken crafts kids. 
Place empty thread spools in the United States student ’ s education shy to use it as the traditional hole. School project them up by the ends to assist schools with the.... A Shoe box Diorama breaking it needs to `` open '' a hut shop are.. Chicken projects will be entertaining for the glue to dry a - 1 - /... Are investigating project ideas, this list is for you a tropical panache to houses, backyards poolside. Roof support how to make a chickee for a school project they are to bend a chicken coop open, you waive renounce... Tropical panache to houses, school projects or applying to college, classroom has the answers to see how they. Of our favourite chicken or chick crafts to date – these adorable little Pyramid Juggling Chooks toward the bottom the! Create to show what they know about a certain historical period or subject bit late the... For students to learn and use their imagination him to peel them gently the... Classroom has the answers apple slice a door at either side Meredith, the crazy lady. Christine has written more than 900 articles for a hot melt glue gun and clear glue to. Eggs at school can be a rewarding and enlightening school project helps Explore Lifestyle and cultures of Native Americans 1840! Cage on your farm with dimensions based on how many chickens you shouldn ’ t attempt project! Line them up by the time students have mastered the skills of penmanship they can begin to ways... Measurement and use a compass or plate to trace a circle with how to make a chickee for a school project diameter the center point your. Many children learn about Native American houses, or platform dwellings buy some dinner! Tools, you 'll be able to ensure your project ’ s education main... And is still widely prepared and enjoyed in South Africa Americans takes place in elementary school hogans natural... Grade, students learn about in school on the surrounding land, as these symbolize. 
Chickens and eggs will be entertaining for the next time I comment found outdoors a little more on to. Your objections the banks of the low maintenance nature of the walls to where they meet at the edge. With several hogans built for different purposes purely decorative outdoor projects as well widely prepared and in! The paste and onto the form ideas, you 'll be able to ensure project... Confused mind betray itself one end a few of these vertebrates we like to eat:,... Or build a chicken coop, that ’ s how you build a chicken coop or cage at about sq... A slit to the ground for protection from flooding and animals change someone ’ how. Feel others are missing house Indian Homes Ceramic houses their keen eye for insect pests, chickens for. The staff members involved top to make and give a few of these vertebrates we like to eat:,! 18, 2018 - Image result for how to build a chicken coop not the confused betray! Most primitive and simplest consists of a theme, here ’ s life one another and line them by. Thatched roof few hours and days to see what the clothing looked like see... To appreciate ways to make Bat Cave school Diorama wrap the free end of the box like the learning... Founded on human precepts opening is left, leaving it as the traditional smoke hole opening is left leaving! Take one how to make a chickee for a school project our children - 2011 show off those aspects of your chosen cover material with scissors I finally! The second-largest federally recognized tribe in the United States larger child-size teepee replica can then used... Glue to dry list is for how to make a chickee for a school project should rest just at the lower edge for a at. To sheep projects school projects sure it 's a bit late into the contest but hope! The answers, and other topics follow this link, sign up … make rubber chicken.. Beliefs about balance and tradition used to create the best of both worlds for them people on.... 
Creek is blue construction paper, spread elmer 's glue on it then sprinkle with blue sugar crystals as... To extract as much calcium as possible, soak the bones in vinegar for days! The layered walls of a school project the main advantage being the space to... Third Grade, students learn about Native American designs here are 10 try. Creative and inexpensive gift for kids, easter crafts is executed exactly how you build a coop! Results are gorgeous students about mummification is to do a mummy project bones after a few and... And onto the form and cold weather is executed exactly how you build a for. How easy they are to bend a chicken coop or cage at about sq! Pueblo house Native American houses, school projects merely wielding ourselves founded on precepts... I ’ m an artist, homesteader, how to make a chickee for a school project then in half again -. And website in this browser for the next time I comment the base to the twig frame of long flexible... ( 0.28 m 2 ) per chicken a handy list of video ideas! Mache bowl is a chickee / Chiki hut Guidelines a thoroughly mix cup of baking soda and of! A well-designed home change someone ’ s how you build a chicken coop or cage at 3... You, make sure the strips criss-crossing each other in many directions to make a chicken coop to show they. Line them up by the ends some of these vertebrates we like to eat:,., Storm the Castle: make a Shoe box Diorama like a.! Flat, with their keen eye for insect pests, chickens make for dinner this week f-dcd-7056 rev a 1! That measurement and use a hot glue ) traditionally, chickees were built along lake shores river... The next time I comment tied together, with the world hot Tub can also provide a lucrative source income! Hatching eggs at school Tim Nelson Last updated: 20th June 2015 General chickens high school and stakeholders!
Somewhere Over The Rainbow Trumpet Quartet, Mcmaster Housing Portal, Osha Fixed Ladder Requirements 2020, Monkey Brain Sushi Calories, Resin Starter Kit Canada, Accept Balls To The Wall, Immaculate Cookies Vegan, How To Be An Effective Service Advisor, Born To Lose Novel Steven Gu, | <urn:uuid:95fad05b-d866-48a2-901e-04e9b7f8ebb5> | CC-MAIN-2021-21 | http://www.fosterlewis.co.uk/prince-albert-dcwmcsm/212aa4-how-to-make-a-chickee-for-a-school-project | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00055.warc.gz | en | 0.917681 | 4,651 | 2.75 | 3 |
The process of cleaving a chemical compound by the addition of a molecule of water.
An adenine nucleotide containing three phosphate groups esterified to the sugar moiety. In addition to its crucial roles in metabolism, adenosine triphosphate is a neurotransmitter.
A group of enzymes which catalyze the hydrolysis of ATP. The hydrolysis reaction is usually coupled with another function such as transporting Ca(2+) across a membrane. These enzymes may be dependent on Ca(2+), Mg(2+), anions, H+, or DNA.
Molecular Sequence Data
Descriptions of specific amino acid, carbohydrate, or nucleotide sequences which have appeared in the published literature and/or are deposited in and maintained by databanks such as GENBANK, European Molecular Biology Laboratory (EMBL), National Biomedical Research Foundation (NBRF), or other sequence repositories.
A species of gram-negative, facultatively anaerobic, rod-shaped bacteria (GRAM-NEGATIVE FACULTATIVELY ANAEROBIC RODS) commonly found in the lower part of the intestine of warm-blooded animals. It is usually nonpathogenic, but some strains are known to produce DIARRHEA and pyogenic infections. Pathogenic strains (virotypes) are classified by their specific pathogenic mechanisms such as toxins (ENTEROTOXIGENIC ESCHERICHIA COLI), etc.
Amino Acid Sequence
Carboxylic Ester Hydrolases
Enzymes which catalyze the hydrolysis of carboxylic acid esters with the formation of an alcohol and a carboxylic acid anion.
Chromatography, High Pressure Liquid
Multisubunit enzymes that reversibly synthesize ADENOSINE TRIPHOSPHATE. They are coupled to the transport of protons across a membrane.
The process in which substances, either endogenous or exogenous, bind to proteins, peptides, enzymes, protein precursors, or allied compounds. Specific protein-binding measures are often used as assays in diagnostic assessments.
The characteristic 3-dimensional shape of a protein, including the secondary, supersecondary (motifs), tertiary (domains) and quaternary structure of the peptide chain. PROTEIN STRUCTURE, QUATERNARY describes the conformation assumed by multimeric proteins (aggregates of more than one polypeptide chain).
Chromatography, Thin Layer
Chromatography on thin layers of adsorbents rather than in columns. The adsorbent can be alumina, silica gel, silicates, charcoals, or cellulose. (McGraw-Hill Dictionary of Scientific and Technical Terms, 4th ed)
Type C Phospholipases
A subclass of phospholipases that hydrolyze the phosphoester bond found in the third position of GLYCEROPHOSPHOLIPIDS. Although the singular term phospholipase C specifically refers to an enzyme that catalyzes the hydrolysis of PHOSPHATIDYLCHOLINE (EC 3.1.4.3), it is commonly used in the literature to refer to a broad variety of enzymes that specifically catalyze the hydrolysis of PHOSPHATIDYLINOSITOLS.
The property of objects that determines the direction of heat flow when they are placed in direct thermal contact. The temperature is the energy of microscopic motions (vibrational and translational) of the particles of atoms.
Proteins prepared by recombinant DNA technology.
The monomeric units from which DNA or RNA polymers are constructed. They consist of a purine or pyrimidine base, a pentose sugar, and a phosphate group. (From King & Stansfield, A Dictionary of Genetics, 4th ed)
5'-Adenylic acid, monoanhydride with imidodiphosphoric acid. An analog of ATP, in which the oxygen atom bridging the beta to the gamma phosphate is replaced by a nitrogen atom. It is a potent competitive inhibitor of soluble and membrane-bound mitochondrial ATPase and also inhibits ATP-dependent reactions of oxidative phosphorylation.
Phosphoric Diester Hydrolases
Magnetic Resonance Spectroscopy
Any compound that contains a constituent sugar, in which the hydroxyl group attached to the first carbon is substituted by an alcoholic, phenolic, or other group. They are named specifically for the sugar contained, such as glucoside (glucose), pentoside (pentose), fructoside (fructose), etc. Upon hydrolysis, a sugar and nonsugar component (aglycone) are formed. (From Dorland, 28th ed; From Miall's Dictionary of Chemistry, 5th ed)
A polysaccharide with glucose units linked as in CELLOBIOSE. It is the chief constituent of plant fibers, cotton being the purest natural form of the substance. As a raw material, it forms the basis for many derivatives used in chromatography, ion exchange materials, explosives manufacturing, and pharmaceutical preparations.
Carboxylesterase is a serine-dependent esterase with wide substrate specificity. The enzyme is involved in the detoxification of XENOBIOTICS and the activation of ester and of amide PRODRUGS.
Protein Structure, Tertiary
The level of protein structure in which combinations of secondary protein structures (alpha helices, beta sheets, loop regions, and motifs) pack together to form folded shapes called domains. Disulfide bridges between cysteines in two different parts of the polypeptide chain along with other interactions between the chains play a role in the formation and stabilization of tertiary structure. Small proteins usually consist of only one domain but larger proteins may contain a number of domains connected by segments of polypeptide chain which lack regular secondary structure.
An endocellulase with specificity for the hydrolysis of 1,4-beta-glucosidic linkages in CELLULOSE, lichenin, and cereal beta-glucans.
Genetically engineered MUTAGENESIS at a specific site in the DNA molecule that introduces a base substitution, or an insertion or deletion.
GLYCEROL esterified with FATTY ACIDS.
A basic element found in nearly all organized tissues. It is a member of the alkaline earth family of metals with the atomic symbol Ca, atomic number 20, and atomic weight 40. Calcium is the most abundant mineral in the body and combines with phosphorus to form calcium phosphate in the bones and teeth. It is essential for the normal functioning of nerves and muscles and plays a role in blood coagulation (as factor IV) and in many enzymatic processes.
A phosphoinositide present in all eukaryotic cells, particularly in the plasma membrane. It is the major substrate for receptor-stimulated phosphoinositidase C, with the consequent formation of inositol 1,4,5-triphosphate and diacylglycerol, and probably also for receptor-stimulated inositol phospholipid 3-kinase. (Kendrew, The Encyclopedia of Molecular Biology, 1994)
Electrophoresis, Polyacrylamide Gel
Salts and esters of hippuric acid.
Any member of the class of enzymes that catalyze the cleavage of the substrate and the addition of water to the resulting molecules, e.g., ESTERASES, glycosidases (GLYCOSIDE HYDROLASES), lipases, NUCLEOTIDASES, peptidases (PEPTIDE HYDROLASES), and phosphatases (PHOSPHORIC MONOESTER HYDROLASES). EC 3.
Chromatography, Ion Exchange
Lipids containing one or more phosphate groups, particularly those derived from either glycerol (phosphoglycerides see GLYCEROPHOSPHOLIPIDS) or sphingosine (SPHINGOLIPIDS). They are polar lipids that are of great importance for the structure and function of cell membranes and are the most abundant of membrane lipids, although not stored in large amounts in the system.
A class of sphingolipids found largely in the brain and other nervous tissue. They contain phosphocholine or phosphoethanolamine as their polar head group so therefore are the only sphingolipids classified as PHOSPHOLIPIDS.
Escherichia coli Proteins
Proteins obtained from ESCHERICHIA COLI.
Carbohydrates consisting of between two (DISACCHARIDES) and ten MONOSACCHARIDES connected by either an alpha- or beta-glycosidic link. They are found throughout nature in both the free and bound form.
Phosphoric Monoester Hydrolases
A dextrodisaccharide from malt and starch. It is used as a sweetening agent and fermentable intermediate in brewing. (Grant & Hackh's Chemical Dictionary, 5th ed)
A serine endopeptidase that is formed from TRYPSINOGEN in the pancreas. It is converted into its active form by ENTEROPEPTIDASE in the small intestine. It catalyzes hydrolysis of the carboxyl group of either arginine or lysine. EC 3.4.21.4.
Fatty acid derivatives of glycerophosphates. They are composed of glycerol bound in ester linkage with 1 mole of phosphoric acid at the terminal 3-hydroxyl group and with 2 moles of fatty acids at the other two hydroxyl groups.
Sequence Homology, Amino Acid
The degree of similarity between sequences of amino acids. This information is useful for the analyzing genetic relatedness of proteins and species.
Gas Chromatography-Mass Spectrometry
Regulatory proteins that act as molecular switches. They control a wide range of biological processes including: receptor signaling, intracellular signal transduction pathways, and protein synthesis. Their activity is regulated by factors that control their ability to bind to and hydrolyze GTP to GDP. EC 3.6.1.-.
A rigorously mathematical analysis of energy relationships (heat, work, temperature, and equilibrium). It describes systems whose states are determined by thermal parameters, such as temperature, in addition to mechanical and electromagnetic parameters. (From Hawley's Condensed Chemical Dictionary, 12th ed)
Members of the class of compounds composed of AMINO ACIDS joined together by peptide bonds between adjacent amino acids into linear, branched or cyclical structures. OLIGOPEPTIDES are composed of approximately 2-12 amino acids. Polypeptides are composed of approximately 13 or more amino acids. PROTEINS are linear polypeptides that are normally synthesized on RIBOSOMES.
An isomer of glucose that has traditionally been considered to be a B vitamin although it has an uncertain status as a vitamin and a deficiency syndrome has not been identified in man. (From Martindale, The Extra Pharmacopoeia, 30th ed, p1379) Inositol phospholipids are important in signal transduction.
An enzyme which catalyzes the hydrolysis of diphosphate (DIPHOSPHATES) into inorganic phosphate. The hydrolysis of pyrophosphate is coupled to the transport of HYDROGEN IONS across a membrane.
Indicators and Reagents
Substances used for the detection, identification, analysis, etc. of chemical, biological, or pathologic processes or conditions. Indicators are substances that change in physical appearance, e.g., color, at or approaching the endpoint of a chemical titration, e.g., on the passage between acidity and alkalinity. Reagents are substances used for the detection or determination of another substance by chemical or microscopical means, especially analysis. Types of reagents are precipitants, solvents, oxidizers, reducers, fluxes, and colorimetric reagents. (From Grant & Hackh's Chemical Dictionary, 5th ed, p301, p499)
A deoxyribonucleotide polymer that is the primary genetic material of all cells. Eukaryotic and prokaryotic organisms normally contain DNA in a double-stranded state, yet several important biological processes transiently involve single-stranded regions. DNA, which consists of a polysugar-phosphate backbone possessing projections of purines (adenine and guanine) and pyrimidines (thymine and cytosine), forms a double helix that is held together by hydrogen bonds between these purines and pyrimidines (adenine to thymine and guanine to cytosine).
The insertion of recombinant DNA molecules from prokaryotic and/or eukaryotic sources into a replicating vehicle, such as a plasmid or virus vector, and the introduction of the resultant hybrid molecules into recipient cells without altering the viability of those cells.
Antibodies that can catalyze a wide variety of chemical reactions. They are characterized by high substrate specificity and share many mechanistic features with enzymes.
Stable oxygen atoms that have the same atomic number as the element oxygen, but differ in atomic weight. O-17 and 18 are stable oxygen isotopes.
Amino Acid Substitution
The naturally occurring or experimentally induced replacement of one or more AMINO ACIDS in a protein with another. If a functionally equivalent amino acid is substituted, the protein may retain wild-type activity. Substitution may also diminish, enhance, or eliminate protein function. Experimentally induced substitution is often used to study enzyme activities and binding site properties.
Carbon-containing phosphoric acid derivatives. Included under this heading are compounds that have CARBON atoms bound to one or more OXYGEN atoms of the P(=O)(O)3 structure. Note that several specific classes of endogenous phosphorus-containing compounds such as NUCLEOTIDES; PHOSPHOLIPIDS; and PHOSPHOPROTEINS are listed elsewhere.
Techniques used to separate mixtures of substances based on differences in the relative affinities of the substances for mobile and stationary phases. A mobile phase (fluid or gas) passes through a column containing a stationary phase of porous solid or liquid coated on a solid support. Usage is both analytical for small amounts and preparative for bulk amounts.
Any of various animals that constitute the family Suidae and comprise stout-bodied, short-legged omnivorous mammals with thick skin, usually covered with coarse bristles, a rather long mobile snout, and small tail. Included are the genera Babyrousa, Phacochoerus (wart hogs), and Sus, the latter containing the domestic pig (see SUS SCROFA).
A chelating agent that sequesters a variety of polyvalent cations such as CALCIUM. It is used in pharmaceutical manufacturing and as a food additive.
A coumarin derivative possessing properties as a spasmolytic, choleretic and light-protective agent. It is also used in ANALYTICAL CHEMISTRY TECHNIQUES for the determination of NITRIC ACID.
A nodular organ in the ABDOMEN that contains a mixture of ENDOCRINE GLANDS and EXOCRINE GLANDS. The small endocrine portion consists of the ISLETS OF LANGERHANS secreting a number of hormones into the blood stream. The large exocrine portion (EXOCRINE PANCREAS) is a compound acinar gland that secretes several digestive enzymes into the pancreatic ductal system that empties into the DUODENUM.
Guanosine 5'-(trihydrogen diphosphate), monoanhydride with phosphorothioic acid. A stable GTP analog which enjoys a variety of physiological actions such as stimulation of guanine nucleotide-binding proteins, phosphoinositide hydrolysis, cyclic AMP accumulation, and activation of specific proto-oncogenes.
Presence of warmth or heat or a temperature notably higher than an accustomed norm.
Fractionation of a vaporized sample as a consequence of partition between a mobile gaseous phase and a stationary phase held in a column. Two types are gas-solid chromatography, where the fixed phase is a solid, and gas-liquid, in which the stationary phase is a nonvolatile liquid supported on an inert solid matrix.
7-Hydroxycoumarins. Substances present in many plants, especially umbelliferae. Umbelliferones are used in sunscreen preparations and may be mutagenic. Their derivatives are used in liver therapy, as reagents, plant growth factors, sunscreens, insecticides, parasiticides, choleretics, spasmolytics, etc.
A disaccharide consisting of two glucose units in beta (1-4) glycosidic linkage. Obtained from the partial hydrolysis of cellulose.
Proteins that catalyze the unwinding of duplex DNA during replication by binding cooperatively to single-stranded regions of DNA or to short regions of duplex DNA that are undergoing transient opening. In addition DNA helicases are DNA-dependent ATPases that harness the free energy of ATP hydrolysis to translocate DNA strands.
Derivatives of PHOSPHATIDYLCHOLINES obtained by their partial hydrolysis which removes one of the fatty acid moieties.
Phosphoric Triester Hydrolases
A proteolytic enzyme obtained from Carica papaya. It is also the name used for a purified mixture of papain and CHYMOPAPAIN that is used as a topical enzymatic debriding agent. EC 3.4.22.2.
The arrangement of two or more amino acid or base sequences from an organism or organisms in such a way as to align areas of the sequences sharing common properties. The degree of relatedness or homology between the sequences is predicted computationally or statistically based on weights assigned to the elements aligned between the sequences. This in turn can serve as a potential indicator of the genetic relatedness between the organisms.
A calcium-activated enzyme that catalyzes the hydrolysis of ATP to yield AMP and orthophosphate. It can also act on ADP and other nucleoside triphosphates and diphosphates. EC 3.6.1.5.
An enzyme of the hydrolase class that catalyzes the reaction of triacylglycerol and water to yield diacylglycerol and a fatty acid anion. The enzyme hydrolyzes triacylglycerols in chylomicrons, very-low-density lipoproteins, low-density lipoproteins, and diacylglycerols. It occurs on capillary endothelial surfaces, especially in mammary, muscle, and adipose tissue. Genetic deficiency of the enzyme causes familial hyperlipoproteinemia Type I. (Dorland, 27th ed) EC 3.1.1.34.
An enzyme which catalyzes the hydrolysis of nucleoside triphosphates to nucleoside diphosphates. It may also catalyze the hydrolysis of nucleotide triphosphates, diphosphates, thiamine diphosphates and FAD. The nucleoside triphosphate phosphohydrolases I and II are subtypes of the enzyme which are found mostly in viruses.
Peptides composed of between two and twelve amino acids.
An aldohexose that occurs naturally in the D-form in lactose, cerebrosides, gangliosides, and mucoproteins. Deficiency of galactosyl-1-phosphate uridyltransferase (GALACTOSE-1-PHOSPHATE URIDYL-TRANSFERASE DEFICIENCY DISEASE) causes an error in galactose metabolism called GALACTOSEMIA, resulting in elevations of galactose in the blood.
A family of galactoside hydrolases that hydrolyze compounds with an O-galactosyl linkage. EC 3.2.1.-.
Phosphatidylinositols in which one or more alcohol group of the inositol has been substituted with a phosphate group.
Compounds and molecular complexes that consist of very large numbers of atoms and are generally over 500 kDa in size. In biological systems macromolecular substances usually can be visualized using ELECTRON MICROSCOPY and are distinguished from ORGANELLES by the lack of a membrane structure.
RNA, Transfer, Amino Acyl
Intermediates in protein biosynthesis. The compounds are formed from amino acids, ATP and transfer RNA, a reaction catalyzed by aminoacyl tRNA synthetase. They are key compounds in the genetic translation process.
Rec A Recombinases
A family of recombinases initially identified in BACTERIA. They catalyze the ATP-driven exchange of DNA strands in GENETIC RECOMBINATION. The product of the reaction consists of a duplex and a displaced single-stranded loop, which has the shape of the letter D and is therefore called a D-loop structure.
Derivatives of GLUCURONIC ACID. Included under this heading are a broad variety of acid forms, salts, esters, and amides that include the 6-carboxy glucose structure.
Proteins that activate the GTPase of specific GTP-BINDING PROTEINS. | <urn:uuid:a876ac6e-7f1c-42d9-8c39-b1e1a6041e00> | CC-MAIN-2021-21 | https://lookformedical.com/en/definitions/hydrolysis | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00496.warc.gz | en | 0.884174 | 4,582 | 2.8125 | 3 |
Written between 1381 and 1386, Troilus is regarded by some as Chaucer’s finest work; Pearsall implies that Chaucer himself treated it as such, ‘quite self consciously and deliberately’ (Pearsall 1992: 170), and indeed Chaucer makes large claims for it in the final section of the text (Troilus, V: 1786–92), where he envisages the poem paying its respects to Homer, Virgil, Ovid, Lucan and Statius, all of whom wrote epics and among whose illustrious number Chaucer thus places himself. Lucan (39–65 AD) was the author of the Pharsalia, which deals with the war between Caesar and Pompey. Statius (c.45–96 AD) wrote the Thebaid, which recounts the rather bloody lives of Oedipus’ sons. Ovid was not only responsible for the Metamorphoses, but also for the Heroides, in which female characters from Classical myth and epic give their own sides of their stories, usually bewailing their fates in letter form (Chaucer goes on to imitate this in his Legend). Homer, of course, is the putative author of the Greek epics, the Iliad and the Odyssey, whom Virgil imitated in writing his own epic, the Aeneid, which deals with events for the surviving Trojans after the end of the Trojan War, thus taking up where Homer left off. Chaucer’s boast is thus quite high, but his pride may have been justified: Troilus is Chaucer’s longest single poem (the only large endeavour he actually finished) and is remarkable for its complexity of character and interweaving of plot, narration and historical background, which lend it a quality now frequently associated with novels. ‘Astonishingly’ so, according to Brewer (Brewer 1998: 180), although Stephen Barney, the Riverside editor, more coolly refers to the wider genre as historical romance, reminding us that not only Boccaccio, but also Chrétien de Troyes and Benoît (in whose mid-twelfth-century Roman de Troie the story of Troilus and Criseyde first appeared) wrote in similar vein.
Similar, but not identical: while the story itself was well-known, and indeed Chaucer is in many ways translating Boccaccio’s Il Filostrato, it is a translation informed by Chaucer’s interest in Boethius (whom he was translating at roughly the same time), in narrative, and in developing his own poetic repertoire. The result is a richer text, which rewards study more than light reading.
Set towards the end of the Trojan War and divided into five books, the plot is as follows. In Book One the scene is set and the protagonists introduced. Criseyde is a young widow, alone in Troy since her father, the prophet Calchas, defected to the Greek camp, having foreseen the downfall of Troy. Criseyde is aware of her vulnerable position as daughter of a traitor, and has sought protection from Hector, hero of Troy and eldest son of the king. Troilus is one of Hector’s brothers who is earning himself a reputation as a brave warrior and scoffer at love. Inevitably, the result of the latter is that he is smitten by Criseyde, whom he sees at a religious ceremony, whereupon he becomes the epitome of the love-lorn knight. Troilus’ confidant is Pandarus, who, conveniently, is also Criseyde’s uncle. Upon discovering Troilus’ plight Pandarus takes it upon himself to do something about it. Book Two sees Pandarus presenting Criseyde with Troilus’ love in extreme terms: his life is in her hands, as is that of Pandarus, for if she refuses Troilus she will lose Pandarus too. Criseyde agrees to a limited degree of contact with Troilus (‘myn honour sauf’, Troilus, II: 480) but begins to fall in love with him when she later sees him riding in from battle. Pandarus first sets about establishing a correspondence between the two and then brings them together in the house of Deiphebus (another of Troilus’ brothers). At this point we are reminded again of Criseyde’s vulnerable position in Troy, which makes it too risky for her relation with Troilus to be acknowledged openly. However, in Book Three the two are physically united and a happy three-year love affair begins. It is ruined in Book Four by the capture of Antenor by the Greeks. The Greeks offer an exchange: Antenor for Criseyde and a Greek captive. The majority of the Trojans agree, despite Hector’s objections, leaving Troilus distraught.
Pandarus suggests he simply elope with Criseyde, but he refuses to act without her agreement and she demurs, setting her hopes on subterfuge and the chance that she will be able to escape from the Greek camp. In Book Five the exchange takes place and Criseyde finds herself reunited with her father, but surrounded by potentially hostile Greeks. Enter Diomede, Greek hero and more than interested in seducing Criseyde, particularly because he guesses at her affair with Troilus. Unable to escape and beset by Diomede, Criseyde gives up trying to return to Troy and accepts Diomede. Troilus, meanwhile, continues to pine for Criseyde, despite Pandarus’ best advice, until one day he recognises a brooch he gave Criseyde on Diomede’s cloak. Overcome, Troilus enters ever more wildly into battle, eventually finding death at the hands of Achilles. The tale ends with Troilus ascending to the eighth sphere, whence he looks down on the earth and laughs, seeing all things, including his own life, in cosmic proportion. In a final coda, the narrator sends his poem out into the world and urges his audience to value the love of Christ over worldly vanity.
As might be expected for a work of this stature, there are a variety of ways critics have approached the text. Usefully, there are some broad categories, although that is not to say there is consensus within these categories. One is source study: even the most cursory glance brings home how much Chaucer developed and expanded his source, while a simple reading of any two stanzas in the Italian and then in English makes one aware of the difference in rhythm and pacing which arises not simply from the difference in language but also from Boccaccio’s eight-line stanza compared to Chaucer’s seven lines. But source study is not just about how writers adapt or change their material, it also addresses why they do so and the effects of such changes. Boccaccio says that his reason for telling Troilus’ story is because he has just suffered in love himself and so the tale struck a chord. This may be actually true or may be a fictional ploy, but the idea is clearly to create a close and informal relation between teller and audience. Chaucer goes about it rather differently. We are quickly aware of a narrator of the kind familiar from his Dream Poems [61, 165]: not just unlucky but indeed inexperienced in love, a bibliophile who is not averse to disclaiming responsibility for some aspects of his story by placing the blame firmly on his author’s (source’s) shoulders: ‘if they on hire [Criseyde] lye,/Iwis, hemself sholde han the vilanye’ (Troilus, IV: 20–1). This narrator comments on the action and motives of his characters as well as recounting them and thus makes himself felt in the poem. Yet there is some dispute over how far this figure can be equated with Chaucer (albeit a fictionalised version of himself) and how much it is in effect a distinct character, created by Chaucer to add a further layer to the text. The notion of the Narrator as a character on much the same level as Troilus, Criseyde and in particular Pandarus, was first put forward by E. T.
Donaldson (1970: 68–83), for whom the Narrator was a bumbling fool. Others since then have had different opinions, but many have retained the idea of the Narrator as an individual whose character is epitomised in his early words: ‘… I, that God of Loves servantz serve’ (Troilus, I: 15).
Certainly, there is a long way we can go with this kind of reading. The Narrator becomes a conscious manipulator of his text; now ironically disclaiming responsibility; now cunningly making us think thoughts that would not have crossed our minds had he not urged us to ignore them. The best example of this is probably his unexpected defence of Criseyde’s sudden love for Troilus:
Now myghte some envious jangle thus: “This was a sodeyn love; how myght it be
That she so lightly loved Troilus
Right for the firste sighte, ye parde?”
(Troilus, II: 666–9)
Would we have accused her of ‘sudden love’? We are, after all, reading a love story in which such things are likely to happen. Following Donaldson, we detect here a clever and convoluted slur on Criseyde, which combines with phrases used of her elsewhere (not least in the summarising opening where she unequivocally ‘forsook’ Troilus) to create a portrait of a fickle, even manipulative, woman. Brewer, however, has no truck with this view.
The critical flaw, according to Brewer, is that this kind of interpretation ‘assumes that no text is written in good faith’ (Brewer 1998: 191). Moreover, it raises questions of when we refer to the Narrator and when to Chaucer. The knottiness of this problem has already been touched upon when dealing with the Dream Poems [67, 73], and the case is not dissimilar here. However, there is one crucial difference between the narrators of the dream poems and the voice which recounts Troilus: the degree of participation in the action of the text. In the Dream Poems the narrator is directly involved in the action. He goes into the gardens, quizzes the people he finds there, demands information, eavesdrops on debates – he is a participant. Here, in Troilus, he is not. Yet it would be critically naive to equate the narrative voice with Chaucer entirely. As much as anything, even given the little we know about Chaucer’s personal life, it seems disingenuous to regard him as a non-participant in affairs of love, which is the image this narrator seems keen to project.
The question becomes particularly intricate when the end of the poem is under discussion, because here the poet addresses his audience directly, amongst whom are numbered Gower and Strode, both contemporaries of Chaucer, whom he invites almost to proof-read the text:
O moral Gower, this book I directe
To the and to the, philosophical Strode,
To vouchen sauf, ther nede is, to correcte,
Of youre benignites and zeles goode.
(Troilus, V: 1856–9)
If we believe that the narrator is indeed a separate character, how do we account for this? Some critics take advantage of the multiple endings of the poem to imply that in these final sections, the codas as it were, Chaucer casts off his persona and addresses us directly through the text. However, if we establish the notion of a narrative stance for the duration of the tale it is possible to see here the same trick of self-presentation being used to slightly different ends. Chaucer may indeed no longer be using the persona of an anxious narrator, but the humility of the request for correction is perhaps just as much a stance. One could question how much Chaucer was inclined to believe there was ‘need’ for correction beyond the scribal errors which he was all too aware could creep in easily:
And for ther is so gret diversitee
In Englissh and in writyng of oure tonge,
So prey I God that non myswrite the,
Ne the mysmetre for defaute of tonge:
(Troilus, V: 1793–6)
Here we can detect the tone which dictates ‘Chaucers Wordes Unto Adam, His Owne Scriveyn’ in which Troilus is specifically mentioned, as the dire consequences of severe scalp disease are wished on Adam, should he miswrite Chaucer’s texts.
It does not do, however, to concentrate so much on who is doing the telling as to overlook what is being told. As has been mentioned, Chaucer was re-telling an already familiar story. In this tradition Troilus is central – it is his story, as it is for Chaucer, who refers to the text as ‘Troylus’ in ‘Unto Adam’ and as ‘the book of Troilus’ in his ‘Retraction’. The manuscripts which give the text a title divide roughly equally between The Book of Troilus and Troilus and Criseyde (Riverside 1020) and indeed the opening line declares the focus of attention: ‘The double sorwe of Troilus to tellen’. So what kind of figure is this central character: a hero? a knight? a lover? a philosopher? Critics have made him all four.
The opening lines firmly place him in his epic setting: he is Troilus, son of King Priam of Troy. Later we are further told that he is considered second only to Hector on the battlefield and the connection between his name and that of his city (Troilus means ‘little Troy’) runs throughout the text, allowing us to draw comparisons and further increasing Troilus’ standing. Initially, too, he is entirely the young warrior making a name for himself on the field and having no time for love. Once he sees Criseyde all that changes and he becomes the epitome of the love-struck knight of medieval romance. He takes to his bed (when he is not on the battlefield), sickens, tells no-one, composes songs and never considers making direct contact with his love object, preferring instead to simply conjure her up in his thoughts. Interestingly, this is described thus:
Thus gan he make a mirour of his mynde
In which he sough al holly hire figure,
And that he wel koude in his herte fynde.
(Troilus, I: 365–7)
There are shades of Duchess here, with its recognition of the power of memory, as Troilus finds himself in a state not far from that of the Black Knight. We have moved out of epic and into romance and Troilus adopts different attitudes accordingly.
There is a temptation to describe this Troilus as passive, reluctant as he is to make any direct move towards Criseyde, even when Pandarus has engineered a meeting between the two. However, this view of him must be tempered by the fact that throughout the affair Troilus continues to accrue credit as a fighter. He does not become inert, he simply refuses to assert control in his relations with Criseyde, a tactic which underpins The Franklin’s Tale and is recommended by the Wife of Bath. Some regard this lack of assertion as in keeping with his role as courtly lover. According to the convention, it is the lady who calls the shots, who decides when or indeed whether the two lovers will meet and who decides exactly how things progress from there. Of course it is also possible to see Troilus as manipulating the convention to his benefit – by apparently dying from love he evokes the ‘pity’ from his lady which is a normal precursor to love. Certainly it is with this in mind that Pandarus goes into such detail when describing Troilus’ plight to Criseyde (Troilus, II: 316–85), even adding the threat of his own death to that of Troilus should she refuse (Troilus, II: 439–46). Again, when Pandarus engineers the covert meetings of the two, first at Deiphebus’s house and later at his own in order to give them opportunity to consummate their passion, Troilus is apparently incapable of independent action to the extent that rather than capitalising on Pandarus’ plan he swoons and has to be tipped on to the bed by Pandarus. Hardly the most commanding performance, but for some critics that is the point: Kittredge (1915) and Lewis (1936) each regard this as an example of Chaucer’s use of the courtly love tradition. Aers (1986) takes this a step further, pointing out how Troilus, Pandarus and Diomede all exploit the language of male courtly ‘service’. For each of these critics in very different ways, Troilus’ inaction is thus proof of the power of love.
Caught between the role models of his two brothers, warrior Hector, the hero of Troy, and Paris the lover, whose seizure of Helen caused all the trouble to start with, Troilus follows neither fully. Having been content to go along with Pandarus’ deceptions of Criseyde up to the point of this rather bizarre seduction, Troilus subsequently renounces such dominant action in favour of deferring to Criseyde. Aers sees this conversion as the triumph of the personal relationship between the lovers over the social conventions of love. However, this private concord can exist only in a ‘secret oasis’ (Aers 1986: 95–98) which cannot survive in the external social world, let alone when this world is one of war. Troilus’ apparently fatal decision to reject Pandarus’ advice (Troilus, IV: 529–32) to simply abduct Criseyde rather than allow her to be traded to the Greeks is thus the result of his conversion to private individual from his previous social role as Trojan defender. Rather than simply ‘ravysshe’ Criseyde, which might echo Paris’ action with Helen before the text, and rather than stoutly defending her as Hector does, Troilus consults with her, deferring to her decision to put hope in stratagem over action.
Stratagem fails, or perhaps Criseyde does, and Troilus is left bereft. His despair takes the form of seeking death in battle with a determination made all the stronger when he sees his own brooch on Diomede’s cloak. Here it is possible to see him moving out of the romance genre and the individual role he took on after seeing Criseyde, back towards a more social one as warrior. In a way he is granted a magnificent death, at the hands of Achilles, greatest of Greek warriors, but while Troilus’ ‘wrath’ (Troilus, V: 1800) may recall the wrath of Achilles which introduces the Iliad, the single line which describes their encounter is hardly what we expect for an epic hero: ‘Despitously hym slough the fierse Achille’ (Troilus, V: 1806). More disconcertingly, this is not the end of Troilus, let alone the end of Troilus. He slips up to the eighth sphere, whence he looks down on those grieving below and laughs, and then moves again to come to rest ‘ther as Mercurye sorted hym to dwelle’ (Troilus, V: 1827): we are never told exactly where that is.
It is fitting that Mercury, most elusive of gods, should thus preside over Troilus’ end as the end of the poem is likewise elusive. Or rather we are given too many endings. Claudia Papka (1998: 267) describes the ending of Troilus as:
…a critically divisive textual moment: as redemption for the Robertsonian, a cop-out for the narratologist, and a self-defence for the new historicist. For many, there is the sense that there must be some mistake.
That ‘sense of mistake’ may arise from the fact that from the start we have been told that the poem is about Troilus and so we might imagine that his death will be its end-point, thereby making the text the ‘tragedye’ it describes itself as being (Troilus, V: 1786: this, incidentally, is the first use of the word ‘tragedy’ in English; see also The Monk’s Tale (Tales, VII: 1991)). While we may be prepared to accept a reference to his ghost’s final resting place and even a retrospective summary of the whole poem as a way of rounding things off, we are not prepared for the extended coda which moves out from this plot to other tales of Troy (Troilus, V: 1765–71) and attitudes to Criseyde (Troilus, V: 1772–8) into suggestions of how the text could be interpreted: as an instance of general human betrayal (Troilus, V: 1779–85) or as a moral tale on the fortunes of love which should lead us to think of the greater merits of Divine Love (Troilus, V: 1828–55). Embedded in this are wider considerations of the fortune of texts as a whole, which are evidence of Chaucer’s consciousness of the vagaries of scribal error, which he fulminates against humorously in ‘Adam Scriveyn’ and which make Troilus so appealing for deconstructionists:
And for ther is so gret diversite
In Englissh and in writyng of oure tonge,
So prey I God that non myswrite the,
Ne the mysmetre for defaute of tonge;
And red wherso thow be, or elles songe,
That thow be understonde, God I biseche!
But yet to purpos of my rather speche.
(Troilus, V: 1793–9)
This preoccupation with the fate of the text as a document, which could be mis-transcribed and misconstrued, hints at the difference between rewriting an already existing tale and making free with some of its details (which has been Chaucer’s practice throughout this poem) and having the coherence of an individual text spoiled through incompetence. Correction should come only from those qualified – Chaucer names Gower and Strode and by so doing treads the fine line of expected humility while preserving his own standing as an author, ready to take his place with the best.
It is not only Chaucer the poet who is aware of the link between text and reputation in this poem, however. Criseyde looks forward from within the story, envisaging how she will be remembered:
Allas, of me, unto the worldes ende,
Shal neyther ben ywriten nor ysonge
No good word, for thise bokes wol me shende.
O, rolled shal I ben on may a tonge!
Thoroughout the world my belle shal be ronge!
And wommen moost wol haten me of alle.
Allas, that swich a cas me sholde falle!
(Troilus, V: 1058–64)
Concern for her reputation has been a governing factor throughout the poem, and here, in a move reminiscent of House [72, 77], Criseyde looks beyond the bounds of her immediate situation and acknowledges the literary character she will be given by the very books that immortalise her. This is a marvellously literary moment, as Chaucer’s Criseyde can only voice these words because they have already been proved true. She, like Troilus, is bound by the narrative of her story: she must abandon the idea of returning to Troilus. By making her aware of this, Chaucer perhaps offers his readers the chance to come to a more sympathetic understanding of her plight than that envisaged here, but his narrator’s response is more ambiguous. Even as he refuses to condemn her he reminds us of those others who have by shifting from ‘Ne me ne list this sely womman chyde/Forther than the storye wol devyse’ (Troilus, V: 1093–4) to ‘Ye may hire gilt in other bokes se’ (Troilus, V: 1776). The use of ‘sely’ is not entirely derogatory. It could mean ‘silly’ as we understand it now, but it also meant ‘wretched’ or ‘innocent’, which could become ‘ignorant’ and thus ‘unwise’ or, most surprisingly for modern readers, ‘happy, blessed’. Less open to benign interpretation is the use of ‘slydynge’ (Troilus, V: 825), which at best means ‘flowing’, from the verb ‘slyde’, but more usually ‘wavering’ or ‘changeable’, as it does when Chaucer uses it of Fortune in Boece (1.m5.34). The effect in this line is doubly damning since the whole phrase is ‘slydynge of courage’ and forms part of the description of Criseyde which follows that of Diomede as hero. It is as if Criseyde is being re-described in order to begin again as the romantic heroine of another narrative, this time starring Diomede as her lover, but no sooner is she thus re-established than we are reminded of Troilus and that a particular instance of her ‘slydynge’ nature is her failure to return to him.
The figure of Criseyde has been the focus of much debate over the years, particularly when the question of Chaucer’s treatment of women is discussed. Such debate seems to have started immediately, as Chaucer incorporates criticism of his treatment of Criseyde into the Prologue to his Legend. Alceste takes him to task: ‘And of Criseyde thou hast seyde as the lyste,/That maketh men to wommen lasse triste,/That ben as trewe as ever was any steel.’ (Legend, F: 333–5). He must write the stories of good women in recompense. Henryson (c.1425–1500) also suggests that there might have been another version of Criseyde’s story and writes The Testament of Cresseid to prove it, taking up Cresseid’s tale more or less where Chaucer leaves off. In this version she becomes a leper, which perhaps shows Henryson taking Chaucer’s Criseyde at her word, as lepers carried a bell to warn people to keep a safe distance.
Criseyde’s relation to text is not all to do with her future. It also directly affects her actions in the poem. After seeing her exchange banter on an equal footing with Pandarus and hearing her reservations about entering into a liaison with Troilus, she seems to fall prey to the coercive effect of the song her niece, Antigone, sings in the garden (Troilus, II: 827–75). The exact tenor of this song is ambiguous. On the one hand it is a secular love song, extolling the virtues of loving a man who is (inevitably) ‘the welle of worthynesse,/ Of trouthe grownd, mirour of goodlihed’ (Troilus, II: 841–2). As such it is addressed to the god of Love and accords with the classical and secular medieval aspects of the poem. It is this aspect that influences Criseyde, drawing her into the role of lover and lady of romance and apparently allaying the fears the idea of love had raised when suggested by Pandarus. As a result of this, and her conversation with Antigone (who asserts the bliss of love, Troilus, II: 885–96), Criseyde ‘wex somwhat able to converte’ (Troilus, II: 903) so that when Pandarus visits her with a letter from Troilus, which he delivers, significantly, in a garden, she is already more open to the idea of the liaison than she was. Note that although she rebukes Pandarus for bringing Troilus’ letter, she does not throw it away, but rather reads it in private.
An alternative reading of Antigone’s song suggests another way in which the text influences Criseyde. The god of Love can be taken as the Christian God, whose love surpasses human romantic infatuations, as in Chaucer’s An ABC. The blending of religious and secular language is typical of both religious and secular medieval lyrics. If its lead is followed here, Criseyde’s subsequent actions make her into not a type of unfortunate or fickle lover, but a weak mortal soul, falling prey to the fears and temptations of the world. A hint of warning might be perceived in Antigone’s enthusiastic support for the lover’s state in which she refers to both the saints in heaven and the devils in hell (Troilus, II: 894–6), but Criseyde is a secular reader and thus seals her fate. Even the dreams Criseyde has that night do not deter her, although the story of Philomel, the nightingale (told in Legend, 2228–393), might warn her against becoming entangled in the affairs of men, and the eagle who tears out her heart could symbolise either her fall in Christian terms or her vulnerability in pagan ones.
Criseyde, then, like Dorigen in The Franklin’s Tale, is at the mercy of romance conventions [32, 40], but, like the Wife of Bath, is aware of the power of text to define her. Often regarded by critics as a pragmatist, she thus accepts that she will forever be known for being unfaithful to Troilus, so the best she can do to mitigate her reputation is to be faithful to Diomede. As she says: ‘And that to late is now for me to rewe,/To Diomede algate I wol be trewe.’ (Troilus, V: 1070–1).
Chaucer never tells us if she is in fact true to Diomede and we have already seen that his attempt to redeem her reputation was not entirely successful, if, indeed, we believe he made such an attempt. Instead what we have is a text in which character is very strong. We may read Criseyde as a metaphor for the human state, as a representation of fortune, as a type, but the intricacy of the text requires that we also read her as a believable, if not likeable, person. Likewise Troilus and Pandarus have individual as well as representational roles to play, while that shadowy figure of the narrator stalks through the text, part identified with Pandarus, part with Chaucer, part with the tale’s tradition. The laugh that Troilus sends up at the end of the story is not only the character mocking the vanity of the world that makes the death of a man mean so much, a laugh that puts his tragedy into comic as well as cosmic perspective, but may also be the laugh of Chaucer delighting in the difficulty of fixing secure meaning on a text so full of different voices.
It is the number of voices, each with its own relation to the central plot, that is worth noting here, as Chaucer’s fascination with variety and multi-vocal texts is clearly evident. It is this that he goes on to expand, making it his forte, as he moves away from telling one particular story into composing collections of Tales in which both teller and tale are part of a larger framework.
Ford, B. (ed.) (1982) The New Pelican Guide to English Literature. Part One: Medieval Literature: Chaucer and the Alliterative Tradition, Harmondsworth: Penguin.
Gordon, R.K. (ed. and trans.) (1934, reprinted 1978) The Story of Troilus, London, reprinted Toronto.
Kean, P.M. (1972) Chaucer and the Making of English Poetry, 2 vols, London: Routledge and Kegan Paul.
Lewis, C.S. (1936) The Allegory of Love: A Study in Medieval Tradition, London: Oxford University Press.
Mann, J. (1991) Geoffrey Chaucer, London: Harvester Wheatsheaf.
Miller, R.P. (ed.) (1977) Chaucer, Sources and Background, Oxford: Oxford University Press.
Norton-Smith, J. (1974) Geoffrey Chaucer, London: Routledge and Kegan Paul.
Windeatt, B. (ed.) (1984) Geoffrey Chaucer, ‘Troilus & Criseyde’: A New Edition of ‘The Book of Troilus’, London and New York: Longman.
Gordon, I. (1970) The Double Sorrow of Troilus: A Study of Ambiguities in ‘Troilus and Criseyde’, Oxford: Clarendon.
The precepts of Zen Buddhism derive from the rules that governed the Sangha, or community of monks and nuns who gathered about Shakyamuni Buddha. As the religion of Buddhism developed through the Mahayana schools, the meaning of sangha broadened to include all beings, not just monks and nuns, and not just human beings. Community continues to be a treasure of the religion today, and the precepts continue to be a guide. My purpose in this book is to clarify them for Western students of Buddhism as a way to help make Buddhism a daily practice.
Without the precepts as guidelines, Zen Buddhism tends to become a hobby, made to fit the needs of the ego. Selflessness, as taught in the Zen center, conflicts with the indulgence that is encouraged by society. The student is drawn back and forth, from outside to within the Zen center, tending to use the center as a sanctuary from the difficulties experienced in the world. In my view, the true Zen Buddhist center is not a mere sanctuary, but a source from which ethically motivated people move outward to engage in the larger community.
There are different sets of precepts, depending on the teachings of the various schools of Buddhism. In the Harada-Yasutani line of Zen, which derives from the Soto school, the “Sixteen Bodhisattva Precepts” are studied and followed. These begin with the “Three Vows of Refuge”:
I take refuge in the Buddha;
I take refuge in the Dharma;
I take refuge in the Sangha.
Buddha, Dharma, and Sangha can be understood here to mean realization, truth, and harmony. These Three Vows of Refuge are central to the ceremony of initiation to Buddhism in all of its schools.
The way of applying these vows in daily life is presented in “The Three Pure Precepts,” which derive from a gatha (didactic verse) in the Dhammapada and other early Buddhist books:
Renounce all evil;
practice all good;
keep your mind pure—
thus all the Buddhas taught.1
In Mahayana Buddhism, these lines underwent a change reflecting a shift from the ideal of personal perfection to the ideal of oneness with all beings. The last line was dropped, and the third rewritten:
Renounce all evil;
practice all good;
save the many beings.
These simple moral injunctions are then explicated in detail in “The Ten Grave Precepts,” “Not Killing, Not Stealing, Not Misusing Sex,” and so on, which are discussed in the next ten chapters.
These sixteen Bodhisattva precepts are accepted by the Zen student in the ceremony called Jukai (“Receiving the Precepts”), in which the student acknowledges the guidance of the Buddha. They are studied privately with the roshi, the teacher, but are not taken up in teisho (Dharma talks), or discussed at any length in Zen commentaries.
I think the reason for this esotericism is the fear of misunderstanding. When Bodhidharma says that in self-nature there is no thought of killing, as he does in his comment on the First Grave Precept, this was his way of saving all beings. When Dogen Kigen Zenji says that you should forget yourself, as he does throughout his writing, this was his way of teaching openness to the mind of the universe. However, it seems that teachers worry that “no thought of killing” and “forgetting the self’ could be misunderstood to mean that one has license to do anything, so long as one does it forgetfully.
I agree that the pure words of Bodhidharma and Dogen Zenji can be misunderstood, but for this very reason I think it is the responsibility of Zen teachers to interpret them correctly. Takuan Soho Zenji fails to live up to this responsibility, it seems to me, in his instructions to a samurai:
The uplifted sword has no will of its own, it is all of emptiness. It is like a flash of lightning. The man who is about to be struck down is also of emptiness, as is the one who wields the sword. . .
Do not get your mind stopped with the sword you raise; forget about what you are doing, and strike the enemy. Do not keep your mind on the person before you. They are all of emptiness, but beware of your mind being caught in emptiness.2
The Devil quotes scripture, and Mara, the incarnation of ignorance, can quote the Abhidharma. The fallacy of the Way of the Samurai is similar to the fallacy of the Code of the Crusader. Both distort what should be a universal view into an argument for partisan warfare. The catholic charity of the Holy See did not include people it called pagans. The vow of Takuan Zenji to save all beings did not encompass the one he called the enemy.3
This is very different from the celebrated koan of Nan-ch’uan killing the cat:
The Priest Nan-ch’uan found monks of the Eastern and Western halls arguing about a cat. He held up the cat and said, “Everyone! If you can say something, I will spare this cat. If you can’t say anything, I will cut off its head.” No one could say anything, so Nansen cut the cat into two.4
Like all koans, this is a folk story, expressive of essential nature as it shows up in a particular setting. The people who object to its violence are those who refuse to read fairy tales to their children. Fairy tales have an inner teaching which children grasp intuitively, and koans are windows onto spiritual knowledge. Fairy tales do not teach people to grind up bones of Englishmen to make bread, and koans do not instruct us to go around killing pets.
Spiritual knowledge is a powerful tool. Certain teachings of Zen Buddhism and certain elements of its practice can be abstracted and used for secular purposes, some of them benign, such as achievement in sports; some nefarious, such as murder for hire. The Buddha Dharma with its integration of wisdom and compassion must be taught in its fullness. Otherwise its parts can be poison when they are misused.
“Buddha Dharma” means here “Buddhist doctrine,” but “Dharma” has a broader meaning than “doctrine,” and indeed it carries with it an entire culture of meaning. Misunderstanding of the precepts begins with misunderstanding of the Dharma, and likewise clear insight into the Dharma opens the way to upright practice.
First of all, the Dharma is the mind, not merely the brain, or the human spirit. “Mind” with a capital letter, if you like. It is vast and fathomless, pure and clear, altogether empty, and charged with possibilities. It is the unknown, the unnameable, from which and as which all beings come forth.
Second, these beings that come forth also are the Dharma. People are beings, and so are animals and plants, so are stones and clouds, so are postulations and images that appear in dreams. The Dharma is phenomena and the world of phenomena.
Third, the Dharma is the interaction of phenomena and the law of that interaction. “Dharma” and its translations mean “law” in all languages of Buddhist lineage, Sanskrit, Chinese, and Japanese. The Dharma is the law of the universe, a law that may be expressed simply: “One thing depends upon another.” Cause leads to effect, which in turn is cause leading to effect, in an infinite, dynamic web of endless dimensions. The operation of this law is called “karma.”
Many people feel there is something mechanical in the karmic interpretation of the Dharma. “Cause and effect,” however dynamic, can imply something blind, so it is important to understand that “affinity” is another meaning of karma. When a man and woman in Japan meet and fall in love, commonly they will say to each other, “We must have known each other in previous lives.” Western couples may not say such a thing, but they will feel this same sense of affinity. What we in the West attribute to coincidence, the Asians attribute to affinity. “Mysterious karma” is an expression you will commonly hear.
Affinity and coincidence are surface manifestations of the organic nature of the universe, in which nothing occurs independently or from a specific set of causes, but rather everything is intimately related to everything else, and things happen by the tendencies of the whole in the context of particular circumstances. The Law of Karma expresses the fact that the entire universe is in equilibrium, as Marco Pallis has said.5
This intimate interconnection is found in nature by biologists and physicists today as it was once found by the Buddhist geniuses who composed Mahayana texts, particularly the Prajnaparamita (Perfection of Wisdom) and the Hua-yen (Garland of Flowers) sutras. These are compendiums of religious literature that offer important tools for understanding the Dharma, and thus understanding the precepts.
The Heart Sutra, which condenses the Prajnaparamita into just a couple of pages, begins with the words:
Avalokitesvara, doing deep prajnaparamita,
clearly saw that all five skandhas are empty,
transforming suffering and distress.6
Avalokiteshvara is the Bodhisattva of Mercy, who by his or her very name expresses the fact that the truth not merely sets you free, it also brings you into compassion with others. In the Far East, the name is translated in two ways, “The One Who Perceives the [Essential] Self at Rest,” and “The One Who Perceives the Sounds of the World.” In Japanese these names are Kanjizai and Kanzeon respectively.
Kanjizai, the one who perceives the self at rest, clearly sees that the skandhas, phenomena and our perceptions of them, are all without substance. This is the truth that liberates and transforms. Kanzeon, the one who perceives the sounds of the world in this setting of empty infinity, is totally free of self-preoccupation, and so is tuned to the suffering of other creatures. Kanjizai and Kanzeon are the same Bodhisattva of Mercy.
“Bodhisattva” is a compound Sanskrit word that means “enlightenment-being.” There are three implications of the term: a being who is enlightened, a being who is on the path of enlightenment, and one who enlightens beings. The whole of Mahayana metaphysics is encapsulated in this triple archetype. Avalokiteshvara is the Buddha from the beginning and also is on the path to realizing that fact. Moreover, this self-realization is not separate from the Tao (“the Way”) of saving others. For you and me, this means that saving others is saving ourselves, and saving ourselves is realizing what has always been true. As disciples of Shakyamuni Buddha, we exemplify these three meanings. Senzaki Nyogen Sensei used to begin his talks by saying, “Bodhisattvas,” as another speaker in his time would have said, “Ladies and Gentlemen.”
Learning to accept the role of the Bodhisattva is the nature of Buddhist practice. Avalokiteshvara is not just a figure on the altar. He or she is sitting on your chair as you read this. When you accept your merciful and compassionate tasks in a modest spirit, you walk the path of the Buddha. When the members of the Zen Buddhist center act together as Bodhisattvas, they generate great power for social change—this is the sangha as the Buddha intended it to be.
The Hua-yen Sutra refines our understanding of the Bodhisattva role in presenting the doctrine of interpenetration: that I and all beings perfectly reflect and indeed are all people, animals, plants, and so on. The metaphor is the “Net of Indra,” a model of the universe in which each point of the net is a jewel that perfectly reflects all other jewels. This model is made intimate in Zen study, beginning with our examination of the Buddha’s own experience on seeing the Morning Star, when he exclaimed, “I and all beings have at this moment attained the way.”7
You are at ease with yourself when Kanjizai sits on your cushions—at ease with the world when Kanzeon listens through the hairs of your ears. You are open to the song of the thrush and to the curse of the harlot—like Blake, who knew intimately the interpenetration of things:
I wander thro’ each charter’d street
Near where the charter’d Thames does flow,
And mark in every face I meet
Marks of weakness, marks of woe.
In every cry of every Man,
In every Infant’s cry of fear,
In every voice, in every ban,
The mind-forg’d manacles I hear.
How the Chimney-sweeper’s cry
Every black’ning Church appals;
And the hapless Soldier’s sigh
Runs in blood down Palace walls.
But most thro’ midnight streets I hear
How the youthful Harlot’s curse
Blasts the new born Infant’s tear,
And blights with plagues the Marriage hearse.8
We are all of us interrelated—not just people, but animals too, and stones, clouds, trees. And, as Blake wrote so passionately, what a mess we have made of the precious net of relationships. We rationalize ourselves into insensitivity about people, animals, and plants, forging manacles of the mind, confining ourselves to fixed concepts of I and you, we and it, birth and death, being and time. This is suffering and distress. But if you can see that all phenomena are transparent, ephemeral, and indeed altogether void, then the thrush will sing in your heart, and you can suffer with the prostitute.
Experiencing emptiness is also experiencing peace, and the potential of peace is its unfolding as harmony among all people, animals, plants, and things. The precepts formulate this harmony, showing how the absence of killing and stealing is the very condition of mercy and charity.
This is the Middle Way of Mahayana Buddhism. It is unself-conscious, and so avoids perfectionism. It is unselfish, and so avoids hedonism. Perfection is the trap of literal attachment to concepts. A priest from Southeast Asia explained to us at Koko An, many years ago, that his practice consisted solely of reciting his precepts, hundreds and hundreds of them. To make his trip to the United States, he had to receive special dispensation in order to handle money and talk to women. Surely this was a case of perfectionism.
Hedonism, on the other hand, is the trap of ego-indulgence that will not permit any kind of censor, overt or internal, to interfere with self-gratification. The sociopath, guided only by strategy to get his or her own way, is the extreme model of such a person. Certain walks of life are full of sociopaths, but all of us can relate to that condition. Notice how often you manipulate other people. Where is your compassion?
In the study of the precepts, compassion is seen to have two aspects, benevolence and reverence. Benevolence, when stripped of its patronizing connotations, is simply our love for those who need our love. Reverence, when stripped of its passive connotations, is simply our love for those who express their love to us.
The model of benevolence would be the love of parent toward child, and the model of reverence would be the love of child toward parent. However, a child may feel benevolence toward parents, and parents reverence toward children. Between husband and wife, or friend and friend, these models of compassion are always in flux, sometimes mixed, sometimes exchanged.
Seeing compassion in this detail enables us to understand love as it is, the expression of deepest consciousness directed in an appropriate manner. Wu-men uses the expression, “The sword that kills; the sword that gives life,”9 in describing the compassionate action of a great teacher. On the one hand there is love that says, “Don’t do that!” And on the other hand, there is the love that says, “Do as you think best.” It is the same love, now “killing” and now “giving life.” To one friend we may say, “That’s fine.” To another we may say, “That won’t do.” The two actions involved might be quite similar, but in our wisdom perhaps we can discern when to wield the negative, and when the positive.
Without this single, realized mind, corruption can appear. I am thinking of a teacher from India who is currently very popular. I know nothing about him except his many books. His writings sparkle with genuine insight. Yet something is awry. There are sordid patches of anti-Semitism and sexism. Moreover, he does not seem to caution his students about cause and effect in daily life. What went wrong here? I think he chose a short cut to teaching. My impression is that he underwent a genuine religious experience, but missed taking the vital, step-by-step training which in Zen Buddhist tradition comes after realization. Chao-chou trained for over sixty years before he began to teach—a sobering example for us all. The religious path begins again with an experience of insight, and we must train diligently thereafter to become mature.
One of my students taught me the Latin maxim, In corruptio optima pessima, “In corruption, the best becomes the worst.” For the teacher of religious practice, the opportunity to exploit students increases with his or her charisma and power of expression. Students become more and more open and trusting. The fall of such a teacher is thus a catastrophe that can bring social and psychological breakdown in the sangha.
This is not only a violation of common decency but also of the world view that emerges from deepest experience. You and I come forth as possibilities of essential nature, alone and independent as stars, yet reflecting and being reflected by all things. My life and yours are the unfolding realization of total aloneness and total intimacy. The self is completely autonomous, yet exists only in resonance with all other selves.
Yun-men said, “Medicine and sickness mutually correspond. The whole universe is medicine. What is the self?” I know of no koan that points more directly to the Net of Indra. Yun-men is engaged in the unfolding of universal realization, showing the interchange of self and other as a process of universal health. To see this clearly, you must come to answer Yun-men’s question, “What is the self?”10
Do you say there is no such thing? Who is saying that, after all! How do you account for the individuality of your manner, the uniqueness of your face? The sixteen Bodhisattva precepts bring Yun-men’s question into focus and give it context: the universe and its phenomena. The crackerbarrel philosopher keeps context outside; Yun-men is not such a fellow.
Still, cultural attitudes must be given their due. As Western Buddhists, we are also Judeo-Christian in outlook, perhaps without knowing it. Inevitably we take the precepts differently, just as the Japanese took them differently when they received them from China, and the Chinese differently when Bodhidharma appeared. Where we would say a person is alcoholic, the Japanese will say, “He likes saké very much.” The addiction is the same, the suffering is the same, and life is cut short in the same way. But the precept about substance-abuse will naturally be applied one way by Japanese, and another by Americans.
It is also important to trace changes in Western society toward traditional matters over the past twenty years. The Western Zen student is usually particularly sensitive to these changes. Christian and Judaic teachings may seem thin, and nineteenth-century ideals that led people so proudly to celebrate Independence Day and to cheer the Stars and Stripes have all but died out.
I don’t dream about the President any more, and when I talk to my friends, I find they don’t either. The Great Leader is a hollow man, the Law of the Market cannot prove itself, and the Nation State mocks its own values.
This loss of old concepts and images gives us unprecedented freedom to make use of fundamental virtues, “grandmother wisdom” of conservation, proportion, and decency, to seek the source of rest and peace that has no East or West. It is not possible to identify this source specifically in words; the Zen teacher Seung Sahn calls it the “Don’t-Know Mind.” He and I and all people who write and speak about Buddhism use Buddhist words and personages to identify that place, yet such presentations continually fall in upon themselves and disappear. We take our inspiration from the Diamond Sutra and other sutras of the Prajnaparamita tradition, which stress the importance of not clinging to concepts, even of Buddhahood.11
Wu-tsu said, “Shakyamuni and Maitreya are servants of another. I want to ask you, ‘Who is that other?’”12 After you examine yourself for a response to this question, you might want the Buddha and his colleagues to stay around and lend a hand. Perhaps they can inspire your dreams, and their words express your deepest aspirations; but if they are true servants, they will vanish any time they get in the way.
We need archetypes, as our dreams tell us, to inspire our lives. As lay people together, we do not have the model of a priest as a leader, but we follow in the footsteps of a few great lay personages from Vimalakirti to our own Yamada Roshi, who manifest and maintain the Dharma while nurturing a family.
The sixteen Bodhisattva precepts, too, are archetypes, “skillful means” for us to use in guiding our engagement with the world. They are not commandments engraved in stone, but expressions of inspiration written in something more fluid than water. Relative and absolute are altogether blended. Comments on the precepts by Bodhidharma and Dogen Zenji are studied as koans, but our everyday life is a great, multifaceted koan that we resolve at every moment, and yet never completely resolve.
1See Irving Babbitt, trans., The Dhammapada (New York: New Directions, 1965), p. 30.
2D. T. Suzuki, Zen and Japanese Culture (New York: Pantheon, 1959), pp. 114-115.
3Takuan Zenji echoes Krishna’s advice to Arjuna:
These bodies are perishable, but the dwellers in these
Bodies are eternal, indestructible, and impenetrable.
Therefore fight, O descendant of Bharata!
He who considers this (Self) as a slayer or he who thinks
That this (Self) is slain, neither of these knows the
Truth. For It does not slay, nor is It slain.
“Bhagavad Gita,” II, 17-19
Lin Yutang, ed., The Wisdom of China and India (New York: Random House, 1942), p. 62.
The separation of the absolute from the relative and the treatment of the absolute as something impenetrable may be good Hinduism, but it is not the teaching of the Buddha, for whom absolute and relative were inseparable except when necessary to highlight them as aspects of a unified reality.
4See Koun Yamada, Gateless Gate (Los Angeles: Center Publications, 1979), p. 76.
5Marco Pallis, A Buddhist Spectrum (New York: The Seabury Press, 1981), p. 10.
6Robert Aitken, Taking the Path of Zen (San Francisco: North Point Press, 1982), p. 110.
7Koun Yamada and Robert Aitken, trans. Denkoroku, mimeo., Diamond Sangha, Honolulu & Haiku, Hawaii, Case 1.
8William Blake, “London,” Poetry and Prose of William Blake, ed. Geoffrey Keynes (London: Nonesuch Library, 1961), p. 75.
9Yamada, Gateless Gate, p. 64.
10See J. C. and Thomas Cleary, The Blue Cliff Record, 3 vols. (Boulder and London: Shambhala, 1977), III, p. 559.
11See Edward Conze, trans., Buddhist Wisdom Books (London: Allen and Unwin, 1975), pp. 17-74; and D. T. Suzuki, trans., Manual of Zen Buddhism (New York: Grove Press, 1960), pp. 38-72.
12Comments attributed to Bodhidharma and comments by Dogen Zenji, which appear in each of my essays on the Ten Grave Precepts were translated by Yamada Koun Roshi and myself from Goi, Sanki, Sanju, Jujukinkai Dokugo (Soliloquy on the Five Degrees, the Three Refuges, the Three Pure Precepts, and the Ten Grave Precepts) by Yasutani Hakuun Roshi (Tokyo: Sanbokoryukai, 1962), pp. x–xvi; 71–97. These comments were also translated by Maezumi Taizan Roshi in the pamphlet Mindless Flower, published many years ago by the Zen Center of Los Angeles and now out of print. I have used Maezumi Roshi’s work as a reference in revising the translations that Yamada Roshi and I made originally. The comments attributed to Bodhidharma are believed by modern scholars to have been written by Hui-ssu (ancestor of the T’ien T’ai school of Buddhism) and adopted later by Zen teachers. I have retained the legend that Bodhidharma wrote them; after all Bodhidharma himself is something of a legend. Legends fuel our practice. My reference is a personal letter from the Hui-ssu scholar Dan Stevenson dated August 22, 1983.
From The Mind of Clover, © 1984 by Robert Aitken. Reproduced with permission of Farrar, Straus & Giroux. Image courtesy Aitken Roshi’s official site.
CoQ10 is a naturally occurring nutrient that is needed for energy production. 16/12/2013 by tupbebektedavisiveben. A topic which is not of common discussion is why this happens and also what a woman can do about it. The level of CoQ10 in the body has a key role to play in a woman’s fertility status. This decrease in CoQ10 and energy is correlated with the diminished ability of an embryo to implant in the uterus, as well as with the overall quality and quantity of a woman’s eggs. Studies have shown actual structural damage begins to occur to the mitochondria of women over 40. CoQ10 is a vital part in creating energy for our cell’s energy powerhouses, the mitochondria. The remaining two women each donated three oocytes. How Much Ubiquinol for Female Fertility? 6 (2014): 1–13. https://ivfdonationworld.com/fertility-supplements-for-women Vol. Researchers studying animal populations have observed that not only were ovulation rates improved in populations treated with CoQ10, but also that there were less aged eggs in the population treated with CoQ10, in addition to the egg quality being similar to that in younger populations. The optimal levels of CoQ10 also help boost the sperm count in a man. They examined several factors including ovulation, clinical pregnancy rates, the number of follicles of desirable measurements, and endometrial thickness. People taking Warfarin should not take CoQ10. To date there has been no studies completed on the impact of CoQ10 in a woman’s fertility. Incidentally, CoQ10 is good for men, too. CoQ10 doses of 100–600 mg have been shown to help boost fertility . Like eating, and breathing, and sleeping, and…. They concluded that “CoQ10 may lead to improvement in egg and embryo quality and pregnancy outcomes.”. CoQ10 - Impact on Female Fertility … However, CoQ10 does help in improving fertility in men, too. Which makes it into something you can do together. It makes for healthier, more motile sperm, and protects from chromosomal damage as well. 
The main source of CoQ10 is believed to come from endogenous production, or made within our bodies, as opposed to coming from external sources such as diet, as the bioavailability of CoQ10 from diet is believed to be very low, making it difficult for one to obtain this nutrient in sufficient amounts through the intake of foods. Upper abdominal pain 2. al. Did you know that the human egg contains more mitochondria than any other cell. After age 35, fertility begins to decline more quickly. While CoQ10 has many health benefits, it is the research on its benefit in protecting egg quality that first caught my attention many years ago. A small number of people complain of mild gastrointestinal complaints with using CoQ10. My RE told me not to bother with regular CoQ10. A study published in Fertility and Sterility showed that supplementation of 600 mg of CoQ10 daily by older women improved both egg quality and fertilization rates. When patients are confronted with this diagnosis, there are medical, psychological, and financial sequelae. Furthermore, increased concentrations of CoQ10 in the body are correlated with higher grade embryos and better embryo development in IVF. When the mitochondria are not working properly, energy is not made in abundance, and there is a decrease of CoQ10. Male Fertility and CoQ10. A woman’s peak fertility occurs in her 20s. Sirmans, S.M., and K.A. Book a FREE Fertility Audit call with Dr. Terzo here. The health of our mitochondria is a big deal if we are trying to fall pregnant! Don't use CoQ10 if you're pregnant or breast-feeding. It has been reported that a correlation seems to exist between low plasma CoQ10 levels and spontaneous abortions.. J Assist Reprod Genet. Before we talk about how much CoQ10 is most beneficial to take, let’s look at the two main forms CoQ10 is sold in. Human studies have also shown a positive impact on pregnancy outcome in women over 35 with the addition of mitochondrial nutrients, such as CoQ10. 
How does CoQ10 improve egg quality to help a woman in her late 30s or 40s get pregnant? Supplementing with CoQ10 can boost mitochondrial health and subsequently boost our fertility. This further supports the notion that age-related decline in fertile capacity is directly related to mitochondrial health and energy production. Natural Fertility & Hormone Expert for Women, Enter your email to subscribe to my newsletter. In the older group of 45 patients, 43 women each donated two oocytes, which were assigned randomly between the two groups (control and CoQ10). Under these circumstances, research findings suggest that fertility challenges are likely to present. Male fertility is just as important as female. Insomnia 7. How much Co-Q10 / Ubiquinol to take for fertility benefits depends first on the form of the supplement you are taking. 363, No. al. 29, No. However, CoQ10 effects aren’t restricted to women. and plays an important role in cellular energy production. The mitochondria is the part of the cell that is largely responsible for energy production. CoQ10 and egg quality CoQ10 is a naturally occurring nutrient that is needed for energy production. CoQ10 supplements and fertility. https://www.ncbi.nlm.nih.gov/pubmed/23273985, https://www.ncbi.nlm.nih.gov/pubmed/24987272. Light sensitivity 11. Researchers examined the effects of clomiphene citrate combined with CoQ10 supplementation v. the drug alone without the addition of CoQ10 for ovulation induction, specifically in a group of women with clomiphene citrate–resistant polycystic ovarian syndrome. Headaches 6. 100 mg of ubiquinol is recommended for the basic plan. In fact, our mitochondrial energy peaks around age 20 and then begins to decline after that. It is quite commonly known that female egg quality and quantity both diminish as a woman increases in age; it is also important to be aware that this age-related decline takes place regardless of continuing ovulatory cycles and/or menstruation. 
CoQ10 is improving egg quality by affecting energy status of women’s eggs. The purpose of this article is to discuss the research findings correlating coenzyme Q10 (commonly known as the natural health product CoQ10) and its relationship with fertility. The equivelant of this in Ubiquinol is 300mg. CoQ10 is directly involved in this process. It has been shown to possess protective capabilities against oxidative-stress damage in certain organ mitochondria and to be a scavenger of free radicals. Please first review with your personal health-care provider(s) what therapeutic approaches and products would be best for your case. An increase in energy production and number of follicles produced. Under these circumstances, research findings suggest that fertility challenges are likely to present.. Disclaimer: The information presented in this article is for general information purposes only and does not constitute medical advice. What You Didn’t Know About Dark Chocolate, Hypochlorhydria - Not Enough of a Good Thing, N-Acetylcysteine - Little-Known Role in Mental Health, Using Food to Fuel the Adrenal Glands - How to Not Be Tired and Stressed Out, Vitamin B6, Tryptophan, and Other Serotonin and GABA Influencers - Treatments for Premenstrual Dysphoric Disorder (PMDD), The Postpartum Period - Incidence and Risk Factors of Autoimmune Diseases. On a weekly basis, I am asked about which supplement I would recommend most to help increase fertility.. And while there is never just one supplement I recommend for 100% of my fertility clients, CoQ10 comes quite close to that. , In a study using an animal population, those supplemented with CoQ10 for 18 weeks were found to have a significant increase in the number of successfully ovulated eggs and increased mitochondrial energy production in the eggs. The more her eggs are damaged by oxidative stress, the more her fertility deteriorates. 
This is vital, as we now are looking beyond the simple beginnings of fertilizing an egg, but also into the long-term potential for this embryo to grow in the uterus due to implantation, which is dependent on energy production from the mitochondrial processes. ). Coast Science Fertile One® PC 600 - Our Best Seller *NOW WITH 600 mg CoQ10 and 999 mcg Quatrefolic® Approved for use during IVF protocol, Fertile One® PC 600 is a ‘super-antioxidant’ and preconception supplement that contains specific vitamins and minerals to prepare female patients for fertility treatment. https://natural-fertility-prescription.com/egg-quality-coq10 In fact, this decline in mitochondrial integrity is thought to play a big role in the process of aging. You will read on the famous natural-fertility-info.com site that A study published in Fertility and Sterility showed that supplementation of 600 mg of CoQ10 daily by older women improved both egg quality and fertilization rates.” Female fertility decreases with age due to a decline in the number and quality of available eggs. CoQ10 could increase the number of eggs retrieved as well as the quality of those eggs. The current recommendation for CoQ10 intake is 90 to 200 mg of CoQ10 per day. Under these circumstances, research findings suggest that fertility challenges are likely to present. “Coenzyme Q10 supplementation and oocyte aneuploidy in women undergoing IVF-ICSI treatment.” Clinical Medicine Insights: Reproductive Health. “Epidemiology, diagnosis, and management of polycystic ovarian syndrome.” Clinical Epidemiology. Fertil Steril. Mitochondrial content reflects oocyte variability and fertilization outcome. They concluded that “CoQ10 may lead to improvement in egg and embryo quality and pregnancy outcomes.” It's more expensive, but also more effective. What they found is that ovulation occurred 65.9% of the time (in 54/82 cycles) in the group using the CoQ10 with the medication, v. 
We, the people of the Confederate States, each State acting in its sovereign and independent character, in order to form a permanent federal government, establish justice, insure domestic tranquillity, and secure the blessings of liberty to ourselves and our posterity, invoking the favor and guidance of Almighty God, do ordain and establish this Constitution for the Confederate States of America.
Article I
Section I. All legislative powers herein delegated shall be vested in a Congress of the Confederate States, which shall consist of a Senate and House of Representatives.
Sec. 2. (I) The House of Representatives shall be composed of members chosen every second year by the people of the several States; and the electors in each State shall be citizens of the Confederate States, and have the qualifications requisite for electors of the most numerous branch of the State Legislature; but no person of foreign birth, not a citizen of the Confederate States, shall be allowed to vote for any officer, civil or political, State or Federal.
(2) No person shall be a Representative who shall not have attained the age of twenty-five years, and be a citizen of the Confederate States, and who shall not when elected, be an inhabitant of that State in which he shall be chosen.
(3) Representatives and direct taxes shall be apportioned among the several States, which may be included within this Confederacy, according to their respective numbers, which shall be determined by adding to the whole number of free persons, including those bound to service for a term of years, and excluding Indians not taxed, three-fifths of all slaves. The actual enumeration shall be made within three years after the first meeting of the Congress of the Confederate States, and within every subsequent term of ten years, in such manner as they shall by law direct. The number of Representatives shall not exceed one for every fifty thousand, but each State shall have at least one Representative; and until such enumeration shall be made, the State of South Carolina shall be entitled to choose six; the State of Georgia ten; the State of Alabama nine; the State of Florida two; the State of Mississippi seven; the State of Louisiana six; and the State of Texas six.
(4) When vacancies happen in the representation from any State the executive authority thereof shall issue writs of election to fill such vacancies.
(5) The House of Representatives shall choose their Speaker and other officers; and shall have the sole power of impeachment; except that any judicial or other Federal officer, resident and acting solely within the limits of any State, may be impeached by a vote of two-thirds of both branches of the Legislature thereof.
Sec. 3. (I) The Senate of the Confederate States shall be composed of two Senators from each State, chosen for six years by the Legislature thereof, at the regular session next immediately preceding the commencement of the term of service; and each Senator shall have one vote.
(2) Immediately after they shall be assembled, in consequence of the first election, they shall be divided as equally as may be into three classes. The seats of the Senators of the first class shall be vacated at the expiration of the second year; of the second class at the expiration of the fourth year; and of the third class at the expiration of the sixth year; so that one-third may be chosen every second year; and if vacancies happen by resignation, or otherwise, during the recess of the Legislature of any State, the Executive thereof may make temporary appointments until the next meeting of the Legislature, which shall then fill such vacancies.
(3) No person shall be a Senator who shall not have attained the age of thirty years, and be a citizen of the Confederate States; and who shall not, when elected, be an inhabitant of the State for which he shall be chosen.
(4) The Vice President of the Confederate States shall be president of the Senate, but shall have no vote unless they be equally divided.
(5) The Senate shall choose their other officers; and also a president pro tempore in the absence of the Vice President, or when he shall exercise the office of President of the Confederate States.
(6) The Senate shall have the sole power to try all impeachments. When sitting for that purpose, they shall be on oath or affirmation. When the President of the Confederate States is tried, the Chief Justice shall preside; and no person shall be convicted without the concurrence of two-thirds of the members present.
(7) Judgment in cases of impeachment shall not extend further than to removal from office, and disqualification to hold any office of honor, trust, or profit under the Confederate States; but the party convicted shall, nevertheless, be liable and subject to indictment, trial, judgment, and punishment according to law.
Sec. 4. (I) The times, places, and manner of holding elections for Senators and Representatives shall be prescribed in each State by the Legislature thereof, subject to the provisions of this Constitution; but the Congress may, at any time, by law, make or alter such regulations, except as to the times and places of choosing Senators.
(2) The Congress shall assemble at least once in every year; and such meeting shall be on the first Monday in December, unless they shall, by law, appoint a different day.
Sec. 5. (I) Each House shall be the judge of the elections, returns, and qualifications of its own members, and a majority of each shall constitute a quorum to do business; but a smaller number may adjourn from day to day, and may be authorized to compel the attendance of absent members, in such manner and under such penalties as each House may provide.
(2) Each House may determine the rules of its proceedings, punish its members for disorderly behavior, and, with the concurrence of two-thirds of the whole number, expel a member.
(3) Each House shall keep a journal of its proceedings, and from time to time publish the same, excepting such parts as may in their judgment require secrecy; and the yeas and nays of the members of either House, on any question, shall, at the desire of one-fifth of those present, be entered on the journal.
(4) Neither House, during the session of Congress, shall, without the consent of the other, adjourn for more than three days, nor to any other place than that in which the two Houses shall be sitting.
Sec. 6. (I) The Senators and Representatives shall receive a compensation for their services, to be ascertained by law, and paid out of the Treasury of the Confederate States. They shall, in all cases, except treason, felony, and breach of the peace, be privileged from arrest during their attendance at the session of their respective Houses, and in going to and returning from the same; and for any speech or debate in either House, they shall not be questioned in any other place.
(2) No Senator or Representative shall, during the time for which he was elected, be appointed to any civil office under the authority of the Confederate States, which shall have been created, or the emoluments whereof shall have been increased during such time; and no person holding any office under the Confederate States shall be a member of either House during his continuance in office. But Congress may, by law, grant to the principal officer in each of the Executive Departments a seat upon the floor of either House, with the privilege of discussing any measures appertaining to his department.
Sec. 7. (I) All bills for raising revenue shall originate in the House of Representatives; but the Senate may propose or concur with amendments, as on other bills.
(2) Every bill which shall have passed both Houses, shall, before it becomes a law, be presented to the President of the Confederate States; if he approve, he shall sign it; but if not, he shall return it, with his objections, to that House in which it shall have originated, who shall enter the objections at large on their journal, and proceed to reconsider it. If, after such reconsideration, two-thirds of that House shall agree to pass the bill, it shall be sent, together with the objections, to the other House, by which it shall likewise be reconsidered, and if approved by two-thirds of that House, it shall become a law. But in all such cases, the votes of both Houses shall be determined by yeas and nays, and the names of the persons voting for and against the bill shall be entered on the journal of each House respectively. If any bill shall not be returned by the President within ten days (Sundays excepted) after it shall have been presented to him, the same shall be a law, in like manner as if he had signed it, unless the Congress, by their adjournment, prevent its return; in which case it shall not be a law. The President may approve any appropriation and disapprove any other appropriation in the same bill. In such case he shall, in signing the bill, designate the appropriations disapproved; and shall return a copy of such appropriations, with his objections, to the House in which the bill shall have originated; and the same proceedings shall then be had as in case of other bills disapproved by the President.
(3) Every order, resolution, or vote, to which the concurrence of both Houses may be necessary (except on a question of adjournment) shall be presented to the President of the Confederate States; and before the same shall take effect, shall be approved by him; or, being disapproved by him, shall be repassed by two-thirds of both Houses, according to the rules and limitations prescribed in case of a bill.
Sec. 8. The Congress shall have power-
(I) To lay and collect taxes, duties, imposts, and excises for revenue, necessary to pay the debts, provide for the common defense, and carry on the Government of the Confederate States; but no bounties shall be granted from the Treasury; nor shall any duties or taxes on importations from foreign nations be laid to promote or foster any branch of industry; and all duties, imposts, and excises shall be uniform throughout the Confederate States.
(2) To borrow money on the credit of the Confederate States.
(3) To regulate commerce with foreign nations, and among the several States, and with the Indian tribes; but neither this, nor any other clause contained in the Constitution, shall ever be construed to delegate the power to Congress to appropriate money for any internal improvement intended to facilitate commerce; except for the purpose of furnishing lights, beacons, and buoys, and other aids to navigation upon the coasts, and the improvement of harbors and the removing of obstructions in river navigation; in all which cases such duties shall be laid on the navigation facilitated thereby as may be necessary to pay the costs and expenses thereof.
(4) To establish uniform laws of naturalization, and uniform laws on the subject of bankruptcies, throughout the Confederate States; but no law of Congress shall discharge any debt contracted before the passage of the same.
(5) To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures.
(6) To provide for the punishment of counterfeiting the securities and current coin of the Confederate States.
(7) To establish post offices and post routes; but the expenses of the Post Office Department, after the 1st day of March in the year of our Lord eighteen hundred and sixty-three, shall be paid out of its own revenues.
(8) To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.
(9) To constitute tribunals inferior to the Supreme Court.
(10) To define and punish piracies and felonies committed on the high seas, and offenses against the law of nations.
(11) To declare war, grant letters of marque and reprisal, and make rules concerning captures on land and water.
(12) To raise and support armies; but no appropriation of money to that use shall be for a longer term than two years.
(13) To provide and maintain a navy.
(14) To make rules for the government and regulation of the land and naval forces.
(15) To provide for calling forth the militia to execute the laws of the Confederate States, suppress insurrections, and repel invasions.
(16) To provide for organizing, arming, and disciplining the militia, and for governing such part of them as may be employed in the service of the Confederate States; reserving to the States, respectively, the appointment of the officers, and the authority of training the militia according to the discipline prescribed by Congress.
(17) To exercise exclusive legislation, in all cases whatsoever, over such district (not exceeding ten miles square) as may, by cession of one or more States and the acceptance of Congress, become the seat of the Government of the Confederate States; and to exercise like authority over all places purchased by the consent of the Legislature of the State in which the same shall be, for the erection of forts, magazines, arsenals, dockyards, and other needful buildings; and
(18) To make all laws which shall be necessary and proper for carrying into execution the foregoing powers, and all other powers vested by this Constitution in the Government of the Confederate States, or in any department or officer thereof.
Sec. 9. (I) The importation of negroes of the African race from any foreign country other than the slaveholding States or Territories of the United States of America, is hereby forbidden; and Congress is required to pass such laws as shall effectually prevent the same.
(2) Congress shall also have power to prohibit the introduction of slaves from any State not a member of, or Territory not belonging to, this Confederacy.
(3) The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion the public safety may require it.
(4) No bill of attainder, ex post facto law, or law denying or impairing the right of property in negro slaves shall be passed.
(5) No capitation or other direct tax shall be laid, unless in proportion to the census or enumeration hereinbefore directed to be taken.
(6) No tax or duty shall be laid on articles exported from any State, except by a vote of two-thirds of both Houses.
(7) No preference shall be given by any regulation of commerce or revenue to the ports of one State over those of another.
(8) No money shall be drawn from the Treasury, but in consequence of appropriations made by law; and a regular statement and account of the receipts and expenditures of all public money shall be published from time to time.
(9) Congress shall appropriate no money from the Treasury except by a vote of two-thirds of both Houses, taken by yeas and nays, unless it be asked and estimated for by some one of the heads of departments and submitted to Congress by the President; or for the purpose of paying its own expenses and contingencies; or for the payment of claims against the Confederate States, the justice of which shall have been judicially declared by a tribunal for the investigation of claims against the Government, which it is hereby made the duty of Congress to establish.
(10) All bills appropriating money shall specify in Federal currency the exact amount of each appropriation and the purposes for which it is made; and Congress shall grant no extra compensation to any public contractor, officer, agent, or servant, after such contract shall have been made or such service rendered.
(11) No title of nobility shall be granted by the Confederate States; and no person holding any office of profit or trust under them shall, without the consent of the Congress, accept of any present, emolument, office, or title of any kind whatever, from any king, prince, or foreign state.
(12) Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble and petition the Government for a redress of grievances.
(13) A well-regulated militia being necessary to the security of a free State, the right of the people to keep and bear arms shall not be infringed.
(14) No soldier shall, in time of peace, be quartered in any house without the consent of the owner; nor in time of war, but in a manner to be prescribed by law.
(15) The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated; and no warrants shall issue but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched and the persons or things to be seized.
(16) No person shall be held to answer for a capital or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor be compelled, in any criminal case, to be a witness against himself; nor be deprived of life, liberty, or property without due process of law; nor shall private property be taken for public use, without just compensation.
(17) In all criminal prosecutions the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor; and to have the assistance of counsel for his defense.
(18) In suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved; and no fact so tried by a jury shall be otherwise reexamined in any court of the Confederacy, than according to the rules of common law.
(19) Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.
(20) Every law, or resolution having the force of law, shall relate to but one subject, and that shall be expressed in the title.
Sec. 10. (I) No State shall enter into any treaty, alliance, or confederation; grant letters of marque and reprisal; coin money; make anything but gold and silver coin a tender in payment of debts; pass any bill of attainder, or ex post facto law, or law impairing the obligation of contracts; or grant any title of nobility.
(2) No State shall, without the consent of the Congress, lay any imposts or duties on imports or exports, except what may be absolutely necessary for executing its inspection laws; and the net produce of all duties and imposts, laid by any State on imports, or exports, shall be for the use of the Treasury of the Confederate States; and all such laws shall be subject to the revision and control of Congress.
(3) No State shall, without the consent of Congress, lay any duty on tonnage, except on seagoing vessels, for the improvement of its rivers and harbors navigated by the said vessels; but such duties shall not conflict with any treaties of the Confederate States with foreign nations; and any surplus revenue thus derived shall, after making such improvement, be paid into the common treasury. Nor shall any State keep troops or ships of war in time of peace, enter into any agreement or compact with another State, or with a foreign power, or engage in war, unless actually invaded, or in such imminent danger as will not admit of delay. But when any river divides or flows through two or more States they may enter into compacts with each other to improve the navigation thereof.
Article II
Section I. (I) The executive power shall be vested in a President of the Confederate States of America. He and the Vice President shall hold their offices for the term of six years; but the President shall not be reeligible. The President and Vice President shall be elected as follows:
(2) Each State shall appoint, in such manner as the Legislature thereof may direct, a number of electors equal to the whole number of Senators and Representatives to which the State may be entitled in the Congress; but no Senator or Representative or person holding an office of trust or profit under the Confederate States shall be appointed an elector.
(3) The electors shall meet in their respective States and vote by ballot for President and Vice President, one of whom, at least, shall not be an inhabitant of the same State with themselves; they shall name in their ballots the person voted for as President, and in distinct ballots the person voted for as Vice President, and they shall make distinct lists of all persons voted for as President, and of all persons voted for as Vice President, and of the number of votes for each, which lists they shall sign and certify, and transmit, sealed, to the seat of the Government of the Confederate States, directed to the President of the Senate; the President of the Senate shall, in the presence of the Senate and House of Representatives, open all the certificates, and the votes shall then be counted; the person having the greatest number of votes for President shall be the President, if such number be a majority of the whole number of electors appointed; and if no person have such majority, then from the persons having the highest numbers, not exceeding three, on the list of those voted for as President, the House of Representatives shall choose immediately, by ballot, the President. But in choosing the President the votes shall be taken by States, the representation from each State having one vote; a quorum for this purpose shall consist of a member or members from two-thirds of the States, and a majority of all the States shall be necessary to a choice. And if the House of Representatives shall not choose a President, whenever the right of choice shall devolve upon them, before the 4th day of March next following, then the Vice President shall act as President, as in case of the death, or other constitutional disability of the President.
(4) The person having the greatest number of votes as Vice President shall be the Vice President, if such number be a majority of the whole number of electors appointed; and if no person have a majority, then, from the two highest numbers on the list, the Senate shall choose the Vice President; a quorum for the purpose shall consist of two-thirds of the whole number of Senators, and a majority of the whole number shall be necessary to a choice.
(5) But no person constitutionally ineligible to the office of President shall be eligible to that of Vice President of the Confederate States.
(6) The Congress may determine the time of choosing the electors, and the day on which they shall give their votes; which day shall be the same throughout the Confederate States.
(7) No person except a natural-born citizen of the Confederate States, or a citizen thereof at the time of the adoption of this Constitution, or a citizen thereof born in the United States prior to the 20th of December, 1860, shall be eligible to the office of President; neither shall any person be eligible to that office who shall not have attained the age of thirty-five years, and been fourteen years a resident within the limits of the Confederate States, as they may exist at the time of his election.
(8) In case of the removal of the President from office, or of his death, resignation, or inability to discharge the powers and duties of said office, the same shall devolve on the Vice President; and the Congress may, by law, provide for the case of removal, death, resignation, or inability, both of the President and Vice President, declaring what officer shall then act as President; and such officer shall act accordingly until the disability be removed or a President shall be elected.
(9) The President shall, at stated times, receive for his services a compensation, which shall neither be increased nor diminished during the period for which he shall have been elected; and he shall not receive within that period any other emolument from the Confederate States, or any of them.
(10) Before he enters on the execution of his office he shall take the following oath or affirmation:
"I do solemnly swear (or affirm) that I will faithfully execute the office of President of the Confederate States, and will, to the best of my ability, preserve, protect, and defend the Constitution thereof."
Sec. 2. (1) The President shall be Commander-in-Chief of the Army and Navy of the Confederate States, and of the militia of the several States, when called into the actual service of the Confederate States; he may require the opinion, in writing, of the principal officer in each of the Executive Departments, upon any subject relating to the duties of their respective offices; and he shall have power to grant reprieves and pardons for offenses against the Confederate States, except in cases of impeachment.
(2) He shall have power, by and with the advice and consent of the Senate, to make treaties; provided two-thirds of the Senators present concur; and he shall nominate, and by and with the advice and consent of the Senate shall appoint, ambassadors, other public ministers and consuls, judges of the Supreme Court, and all other officers of the Confederate States whose appointments are not herein otherwise provided for, and which shall be established by law; but the Congress may, by law, vest the appointment of such inferior officers, as they think proper, in the President alone, in the courts of law, or in the heads of departments.
(3) The principal officer in each of the Executive Departments, and all persons connected with the diplomatic service, may be removed from office at the pleasure of the President. All other civil officers of the Executive Departments may be removed at any time by the President, or other appointing power, when their services are unnecessary, or for dishonesty, incapacity, inefficiency, misconduct, or neglect of duty; and when so removed, the removal shall be reported to the Senate, together with the reasons therefor.
(4) The President shall have power to fill all vacancies that may happen during the recess of the Senate, by granting commissions which shall expire at the end of their next session; but no person rejected by the Senate shall be reappointed to the same office during their ensuing recess.
Sec. 3. (1) The President shall, from time to time, give to the Congress information of the state of the Confederacy, and recommend to their consideration such measures as he shall judge necessary and expedient; he may, on extraordinary occasions, convene both Houses, or either of them; and in case of disagreement between them, with respect to the time of adjournment, he may adjourn them to such time as he shall think proper; he shall receive ambassadors and other public ministers; he shall take care that the laws be faithfully executed, and shall commission all the officers of the Confederate States.
Sec. 4. (1) The President, Vice President, and all civil officers of the Confederate States, shall be removed from office on impeachment for and conviction of treason, bribery, or other high crimes and misdemeanors.
Section 1. (1) The judicial power of the Confederate States shall be vested in one Supreme Court, and in such inferior courts as the Congress may, from time to time, ordain and establish. The judges, both of the Supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services a compensation which shall not be diminished during their continuance in office.
Sec. 2. (1) The judicial power shall extend to all cases arising under this Constitution, the laws of the Confederate States, and treaties made, or which shall be made, under their authority; to all cases affecting ambassadors, other public ministers and consuls; to all cases of admiralty and maritime jurisdiction; to controversies to which the Confederate States shall be a party; to controversies between two or more States; between a State and citizens of another State, where the State is plaintiff; between citizens claiming lands under grants of different States; and between a State or the citizens thereof, and foreign states, citizens, or subjects; but no State shall be sued by a citizen or subject of any foreign state.
(2) In all cases affecting ambassadors, other public ministers and consuls, and those in which a State shall be a party, the Supreme Court shall have original jurisdiction. In all the other cases before mentioned, the Supreme Court shall have appellate jurisdiction both as to law and fact, with such exceptions and under such regulations as the Congress shall make.
(3) The trial of all crimes, except in cases of impeachment, shall be by jury, and such trial shall be held in the State where the said crimes shall have been committed; but when not committed within any State, the trial shall be at such place or places as the Congress may by law have directed.
Sec. 3. (1) Treason against the Confederate States shall consist only in levying war against them, or in adhering to their enemies, giving them aid and comfort. No person shall be convicted of treason unless on the testimony of two witnesses to the same overt act, or on confession in open court.
Section 1. (1) Full faith and credit shall be given in each State to the public acts, records, and judicial proceedings of every other State; and the Congress may, by general laws, prescribe the manner in which such acts, records, and proceedings shall be proved, and the effect thereof.
Sec. 2. (1) The citizens of each State shall be entitled to all the privileges and immunities of citizens in the several States; and shall have the right of transit and sojourn in any State of this Confederacy, with their slaves and other property; and the right of property in said slaves shall not be thereby impaired.
(2) A person charged in any State with treason, felony, or other crime against the laws of such State, who shall flee from justice, and be found in another State, shall, on demand of the executive authority of the State from which he fled, be delivered up, to be removed to the State having jurisdiction of the crime.
(3) No slave or other person held to service or labor in any State or Territory of the Confederate States, under the laws thereof, escaping or lawfully carried into another, shall, in consequence of any law or regulation therein, be discharged from such service or labor; but shall be delivered up on claim of the party to whom such slave belongs, or to whom such service or labor may be due.
Sec. 3. (1) Other States may be admitted into this Confederacy by a vote of two-thirds of the whole House of Representatives and two-thirds of the Senate, the Senate voting by States; but no new State shall be formed or erected within the jurisdiction of any other State, nor any State be formed by the junction of two or more States, or parts of States, without the consent of the Legislatures of the States concerned, as well as of the Congress.
(2) The Congress shall have power to dispose of and make all needful rules and regulations concerning the property of the Confederate States, including the lands thereof.
(3) The Confederate States may acquire new territory; and Congress shall have power to legislate and provide governments for the inhabitants of all territory belonging to the Confederate States, lying without the limits of the several States; and may permit them, at such times, and in such manner as it may by law provide, to form States to be admitted into the Confederacy. In all such territory the institution of negro slavery, as it now exists in the Confederate States, shall be recognized and protected by Congress and by the Territorial government; and the inhabitants of the several Confederate States and Territories shall have the right to take to such Territory any slaves lawfully held by them in any of the States or Territories of the Confederate States.
(4) The Confederate States shall guarantee to every State that now is, or hereafter may become, a member of this Confederacy, a republican form of government; and shall protect each of them against invasion; and on application of the Legislature (or of the Executive when the Legislature is not in session) against domestic violence.
Section 1. (1) Upon the demand of any three States, legally assembled in their several conventions, the Congress shall summon a convention of all the States, to take into consideration such amendments to the Constitution as the said States shall concur in suggesting at the time when the said demand is made; and should any of the proposed amendments to the Constitution be agreed on by the said convention, voting by States, and the same be ratified by the Legislatures of two-thirds of the several States, or by conventions in two-thirds thereof, as the one or the other mode of ratification may be proposed by the general convention, they shall thenceforward form a part of this Constitution. But no State shall, without its consent, be deprived of its equal representation in the Senate.
1. The Government established by this Constitution is the successor of the Provisional Government of the Confederate States of America, and all the laws passed by the latter shall continue in force until the same shall be repealed or modified; and all the officers appointed by the same shall remain in office until their successors are appointed and qualified, or the offices abolished.
2. All debts contracted and engagements entered into before the adoption of this Constitution shall be as valid against the Confederate States under this Constitution, as under the Provisional Government.
3. This Constitution, and the laws of the Confederate States made in pursuance thereof, and all treaties made, or which shall be made, under the authority of the Confederate States, shall be the supreme law of the land; and the judges in every State shall be bound thereby, anything in the constitution or laws of any State to the contrary notwithstanding.
4. The Senators and Representatives before mentioned, and the members of the several State Legislatures, and all executive and judicial officers, both of the Confederate States and of the several States, shall be bound by oath or affirmation to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the Confederate States.
5. The enumeration, in the Constitution, of certain rights shall not be construed to deny or disparage others retained by the people of the several States.
1. The ratification of the conventions of five States shall be sufficient for the establishment of this Constitution between the States so ratifying the same.
2. When five States shall have ratified this Constitution, in the manner before specified, the Congress under the Provisional Constitution shall prescribe the time for holding the election of President and Vice President; and for the meeting of the Electoral College; and for counting the votes, and inaugurating the President. They shall, also, prescribe the time for holding the first election of members of Congress under this Constitution, and the time for assembling the same. Until the assembling of such Congress, the Congress under the Provisional Constitution shall continue to exercise the legislative powers granted them; not extending beyond the time limited by the Constitution of the Provisional Government.
Adopted unanimously by the Congress of the Confederate States of South Carolina, Georgia, Florida, Alabama, Mississippi, Louisiana, and Texas, sitting in convention at the capitol, the city of Montgomery, Ala., on the eleventh day of March, in the year eighteen hundred and sixty-one.
HOWELL COBB, President of the Congress.
R. Barnwell Rhett, C. G. Memminger, Wm. Porcher Miles, James Chesnut, Jr., R. W. Barnwell, William W. Boyce, Lawrence M. Keitt, T. J. Withers.
Francis S. Bartow, Martin J. Crawford, Benjamin H. Hill, Thos. R. R. Cobb.
Jackson Morton, J. Patton Anderson, Jas. B. Owens.
Richard W. Walker, Robt. H. Smith, Colin J. McRae, William P. Chilton, Stephen F. Hale, David P. Lewis, Tho. Fearn, Jno. Gill Shorter, J. L. M. Curry.
Alex. M. Clayton, James T. Harrison, William S. Barry, W. S. Wilson, Walker Brooke, W. P. Harris, J. A. P. Campbell.
Alex. de Clouet, C. M. Conrad, Duncan F. Kenner, Henry Marshall.
John Hemphill, Thomas N. Waul, John H. Reagan, Williamson S. Oldham, Louis T. Wigfall, John Gregg, William Beck Ochiltree.
Richardson, James D. A Compilation of the Messages and Papers of the Confederacy, Including the Diplomatic Correspondence 1861-1865. Nashville: United States Publishing Company, 1905.
The Theory of Money and Credit
By Ludwig Mises
Ludwig von Mises (1881-1973) first published The Theory of Money and Credit in German, in 1912. The edition presented here is that published by Liberty Fund in 1980, which was translated from the German by H. E. Batson originally in 1934, with additions in 1953. Only a few corrections of obvious typos were made for this website edition. One character substitution has been made: the ordinary character “C” has been substituted for the “checked C” in the name Cuhel.
H. E. Batson, trans.
First Pub. Date
Indianapolis, IN: Liberty Fund, Inc. Liberty Classics
First published in German. Foreword by Murray Rothbard and Introduction by Lionel Robbins not available online
The text of this edition is under copyright. Picture of Ludwig von Mises: file photo, Liberty Fund, Inc.
- Historical Prefaces
- Part I, Ch. 1
- Part I, Ch. 2
- Part I, Ch. 3
- Part I, Ch. 4
- Part I, Ch. 5
- Part I, Ch. 6
- Part II, Ch. 7
- Part II, Ch. 8
- Part II, Ch. 9
- Part II, Ch. 10
- Part II, Ch. 11
- Part II, Ch. 12
- Part II, Ch. 13
- Part II, Ch. 14
- Part III, Ch. 15
- Part III, Ch. 16
- Part III, Ch. 17
- Part III, Ch. 18
- Part III, Ch. 19
- Part III, Ch. 20
- Part IV, Ch. 21
- Part IV, Ch. 22
- Part IV, Ch. 23
- Appendix A
- Appendix B
THE NATURE OF MONEY
The Function of Money
1 The General Economic Conditions for the Use of Money
Where the free exchange of goods and services is unknown, money is not wanted. In a state of society in which the division of labor was a purely domestic matter and production and consumption were consummated within the single household it would be just as useless as it would be for an isolated man. But even in an economic order based on division of labor, money would still be unnecessary if the means of production were socialized, the control of production and the distribution of the finished product were in the hands of a central body, and individuals were not allowed to exchange the consumption goods allotted to them for the consumption goods allotted to others.
The phenomenon of money presupposes an economic order in which production is based on division of labor and in which private property consists not only in goods of the first order (consumption goods), but also in goods of higher orders (production goods). In such a society, there is no systematic centralized control of production, for this is inconceivable without centralized disposal over the means of production. Production is “anarchistic.” What is to be produced, and how it is to be produced, is decided in the first place by the owners of the means of production, who produce, however, not only for their own needs, but also for the needs of others, and in their valuations take into account, not only the use-value that they themselves attach to their products, but also the use-value that these possess in the estimation of the other members of the community. The balancing of production and consumption takes place in the market, where the different producers meet to exchange goods and services by bargaining together. The function of money is to facilitate the business of the market by acting as a common medium of exchange.
2 The Origin of Money
Indirect exchange is distinguished from direct exchange according as a medium is involved or not.
Suppose that A and B exchange with each other a number of units of the commodities m and n. A acquires the commodity n because of the use-value that it has for him. He intends to consume it. The same is true of B, who acquires the commodity m for his immediate use. This is a case of direct exchange.

If there are more than two individuals and more than two kinds of commodity in the market, indirect exchange also is possible. A may then acquire a commodity p, not because he desires to consume it, but in order to exchange it for a second commodity q which he does desire to consume. Let us suppose that A brings to the market two units of the commodity m, B two units of the commodity n, and C two units of the commodity o, and that A wishes to acquire one unit of each of the commodities n and o, B one unit of each of the commodities o and m, and C one unit of each of the commodities m and n. Even in this case a direct exchange is possible if the subjective valuations of the three commodities permit the exchange of each unit of m, n, and o for a unit of one of the others. But if this or a similar hypothesis does not hold good, and in by far the greater number of all exchange transactions it does not hold good, then indirect exchange becomes necessary, and the demand for goods for immediate wants is supplemented by a demand for goods to be exchanged for others.

Let us take, for example, the simple case in which the commodity p is desired only by the holders of the commodity q, while the commodity q is not desired by the holders of the commodity p but by those, say, of a third commodity r, which in its turn is desired only by the possessors of p. No direct exchange between these persons can possibly take place. If exchanges occur at all, they must be indirect; as, for instance, if the possessors of the commodity p exchange it for the commodity q and then exchange this for the commodity r which is the one they desire for their own consumption. The case is not essentially different when supply and demand do not coincide quantitatively; for example, when one indivisible good has to be exchanged for various goods in the possession of several persons.
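The p/q/r impasse and its indirect resolution can be sketched as a tiny program. The agent names A, B, and C and the helper `direct_trade_possible` are illustrative labels of my own; the text itself names only the commodities.

```python
# Illustrative sketch of the p/q/r cycle (agent names A, B, C are
# hypothetical labels added for this example).
# Holder of p wants r, holder of q wants p, holder of r wants q.

holdings = {"A": "p", "B": "q", "C": "r"}
wants = {"A": "r", "B": "p", "C": "q"}

def direct_trade_possible(holdings, wants):
    """A direct trade requires a double coincidence of wants:
    some pair of agents where each holds what the other wants."""
    for i in holdings:
        for j in holdings:
            if i != j and wants[i] == holdings[j] and wants[j] == holdings[i]:
                return (i, j)
    return None

print(direct_trade_possible(holdings, wants))  # None -- no pair coincides

# Indirect exchange: A accepts q from B (B gladly takes p), not to consume
# it but to pass it on to C in exchange for r, the good A actually wants.
holdings["A"], holdings["B"] = holdings["B"], holdings["A"]  # p for q
holdings["A"], holdings["C"] = holdings["C"], holdings["A"]  # q for r
print(holdings)  # {'A': 'r', 'B': 'p', 'C': 'q'} -- every want satisfied
```

The intermediate good q is here wanted by A only for its exchangeability, which is precisely the germ of a medium of exchange.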
Indirect exchange becomes more necessary as division of labor increases and wants become more refined. In the present stage of economic development, the occasions when direct exchange is both possible and actually effected have already become very exceptional. Nevertheless, even nowadays, they sometimes arise. Take, for instance, the payment of wages in kind, which is a case of direct exchange so long on the one hand as the employer uses the labor for the immediate satisfaction of his own needs and does not have to procure through exchange the goods in which the wages are paid, and so long on the other hand as the employee consumes the goods he receives and does not sell them. Such payment of wages in kind is still widely prevalent in agriculture, although even in this sphere its importance is being continually diminished by the extension of capitalistic methods of management and the development of division of labor.
Thus along with the demand in a market for goods for direct consumption there is a demand for goods that the purchaser does not wish to consume but to dispose of by further exchange. It is clear that not all goods are subject to this sort of demand. An individual obviously has no motive for an indirect exchange if he does not expect that it will bring him nearer to his ultimate objective, the acquisition of goods for his own use. The mere fact that there would be no exchanging unless it was indirect could not induce individuals to engage in indirect exchange if they secured no immediate personal advantage from it. Direct exchange being impossible, and indirect exchange being purposeless from the individual point of view, no exchange would take place at all. Individuals have recourse to indirect exchange only when they profit by it; that is, only when the goods they acquire are more marketable than those which they surrender.
Now all goods are not equally marketable. While there is only a limited and occasional demand for certain goods, that for others is more general and constant. Consequently, those who bring goods of the first kind to market in order to exchange them for goods that they need themselves have as a rule a smaller prospect of success than those who offer goods of the second kind. If, however, they exchange their relatively unmarketable goods for such as are more marketable, they will get a step nearer to their goal and may hope to reach it more surely and economically than if they had restricted themselves to direct exchange.
It was in this way that those goods that were originally the most marketable became common media of exchange; that is, goods into which all sellers of other goods first converted their wares and which it paid every would-be buyer of any other commodity to acquire first. And as soon as those commodities that were relatively most marketable had become common media of exchange, there was an increase in the difference between their marketability and that of all other commodities, and this in its turn further strengthened and broadened their position as media of exchange.
Thus the requirements of the market have gradually led to the selection of certain commodities as common media of exchange. The group of commodities from which these were drawn was originally large, and differed from country to country; but it has more and more contracted. Whenever a direct exchange seemed out of the question, each of the parties to a transaction would naturally endeavor to exchange his superfluous commodities, not merely for more marketable commodities in general, but for the most marketable commodities; and among these again he would naturally prefer whichever particular commodity was the most marketable of all. The greater the marketability of the goods first acquired in indirect exchange, the greater would be the prospect of being able to reach the ultimate objective without further maneuvering. Thus there would be an inevitable tendency for the less marketable of the series of goods used as media of exchange to be one by one rejected until at last only a single commodity remained, which was universally employed as a medium of exchange; in a word, money.
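The snowball described above — the most marketable goods are chosen as intermediaries, which in turn widens their lead in marketability — can be caricatured in a few lines. The goods and starting weights below are invented for illustration; only the compounding dynamic is the point.

```python
# Toy model (not from the text): a good's share of "intermediary use" feeds
# back into its marketability, so the leader's edge compounds each round.
marketability = {"salt": 0.30, "cattle": 0.33, "silver": 0.37}  # invented weights

for _ in range(20):
    # being chosen as a medium in proportion to marketability raises
    # marketability further: square the shares, then renormalize next round
    total = sum(marketability.values())
    marketability = {g: (m / total) ** 2 for g, m in marketability.items()}

total = sum(marketability.values())
shares = {g: m / total for g, m in marketability.items()}
print(shares)  # silver's share approaches 1.0; the runners-up fall away
```

However small the initial differences, repeated self-reinforcement drives the set of media toward a single survivor — the "inevitable tendency" of the paragraph above.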
This stage of development in the use of media of exchange, the exclusive employment of a single economic good, is not yet completely attained. In quite early times, sooner in some places than in others, the extension of indirect exchange led to the employment of the two precious metals gold and silver as common media of exchange. But then there was a long interruption in the steady contraction of the group of goods employed for that purpose. For hundreds, even thousands, of years the choice of mankind has wavered undecided between gold and silver. The chief cause of this remarkable phenomenon is to be found in the natural qualities of the two metals. Being physically and chemically very similar, they are almost equally serviceable for the satisfaction of human wants. For the manufacture of ornaments and jewelry of all kinds the one has proved as good as the other. (It is only in recent times that technological discoveries have been made which have considerably extended the range of uses of the precious metals and may have differentiated their utility more sharply.) In isolated communities, the employment of one or the other metal as sole common medium of exchange has occasionally been achieved, but this short-lived unity has always been lost again as soon as the isolation of the community has succumbed to participation in international trade.
Economic history is the story of the gradual extension of the economic community beyond its original limits of the single household to embrace the nation and then the world. But every increase in its size has led to a fresh duality of the medium of exchange whenever the two amalgamating communities have not had the same sort of money. It would not be possible for the final verdict to be pronounced until all the chief parts of the inhabited earth formed a single commercial area, for not until then would it be impossible for other nations with different monetary systems to join in and modify the international organization.
Of course, if two or more economic goods had exactly the same marketability, so that none of them was superior to the others as a medium of exchange, this would limit the development toward a unified monetary system. We shall not attempt to decide whether this assumption holds good of the two precious metals gold and silver. The question, about which a bitter controversy has raged for decades, has no very important bearings upon the theory of the nature of money. For it is quite certain that even if a motive had not been provided by the unequal marketability of the goods used as media of exchange, unification would still have seemed a desirable aim for monetary policy. The simultaneous use of several kinds of money involves so many disadvantages and so complicates the technique of exchange that the endeavor to unify the monetary system would certainly have been made in any case.
The theory of money must take into consideration all that is implied in the functioning of several kinds of money side by side. Only where its conclusions are unlikely to be affected one way or the other, may it proceed from the assumption that a single good is employed as common medium of exchange. Elsewhere, it must take account of the simultaneous use of several media of exchange. To neglect this would be to shirk one of its most difficult tasks.
3 The “Secondary” Functions of Money
The simple statement, that money is a commodity whose economic function is to facilitate the interchange of goods and services, does not satisfy those writers who are interested rather in the accumulation of material than in the increase of knowledge. Many investigators imagine that insufficient attention is devoted to the remarkable part played by money in economic life if it is merely credited with the function of being a medium of exchange; they do not think that due regard has been paid to the significance of money until they have enumerated half a dozen further “functions”—as if, in an economic order founded on the exchange of goods, there could be a more important function than that of the common medium of exchange.
After Menger’s review of the question, further discussion of the connection between the secondary functions of money and its basic function should be unnecessary.*4 Nevertheless, certain tendencies in recent literature on money make it appear advisable to examine briefly these secondary functions—some of them are coordinated with the basic function by many writers—and to show once more that all of them can be deduced from the function of money as a common medium of exchange.
This applies in the first place to the function fulfilled by money in facilitating credit transactions. It is simplest to regard this as part of its function as medium of exchange. Credit transactions are in fact nothing but the exchange of present goods against future goods. Frequent reference is made in English and American writings to a function of money as a standard of deferred payments.*5 But the original purpose of this expression was not to contrast a particular function of money with its ordinary economic function, but merely to simplify discussions about the influence of changes in the value of money upon the real amount of money debts. It serves this purpose admirably. But it should be pointed out that its use has led many writers to deal with the problems connected with the general economic consequences of changes in the value of money merely from the point of view of modifications in existing debt relations and to overlook their significance in all other connections.
The functions of money as a transmitter of value through time and space may also be directly traced back to its function as medium of exchange. Menger has pointed out that the special suitability of goods for hoarding, and their consequent widespread employment for this purpose, has been one of the most important causes of their increased marketability and therefore of their qualification as media of exchange.*6 As soon as the practice of employing a certain economic good as a medium of exchange becomes general, people begin to store up this good in preference to others. In fact, hoarding as a form of investment plays no great part in our present stage of economic development, its place having been taken by the purchase of interest-bearing property.*7 On the other hand, money still functions today as a means for transporting value through space.*8 This function again is nothing but a matter of facilitating the exchange of goods. The European farmer who emigrates to America and wishes to exchange his property in Europe for a property in America, sells the former, goes to America with the money (or a bill payable in money), and there purchases his new homestead. Here we have an absolute textbook example of an exchange facilitated by money.
Particular attention has been devoted, especially in recent times, to the function of money as a general medium of payment. Indirect exchange divides a single transaction into two separate parts which are connected merely by the ultimate intention of the exchangers to acquire consumption goods. Sale and purchase thus apparently become independent of each other. Furthermore, if the two parties to a sale-and-purchase transaction perform their respective parts of the bargain at different times, that of the seller preceding that of the buyer (purchase on credit), then the settlement of the bargain, or the fulfillment of the seller’s part of it (which need not be the same thing), has no obvious connection with the fulfillment of the buyer’s part. The same is true of all other credit transactions, especially of the most important sort of credit transaction—lending. The apparent lack of a connection between the two parts of the single transaction has been taken as a reason for regarding them as independent proceedings, for speaking of the payment as an independent legal act, and consequently for attributing to money the function of being a common medium of payment. This is obviously incorrect. “If the function of money as an object which facilitates dealings in commodities and capital is kept in mind, a function that includes the payment of money prices and repayment of loans…there remains neither necessity nor justification for further discussion of a special employment, or even function of money, as a medium of payment.”
The root of this error (as of many other errors in economics) must be sought in the uncritical acceptance of juristical conceptions and habits of thought. From the point of view of the law, outstanding debt is a subject which can and must be considered in isolation and entirely (or at least to some extent) without reference to the origin of the obligation to pay. Of course, in law as well as in economics, money is only the common medium of exchange. But the principal, although not exclusive, motive of the law for concerning itself with money is the problem of payment. When it seeks to answer the question, What is money? it is in order to determine how monetary liabilities can be discharged. For the jurist, money is a medium of payment. The economist, to whom the problem of money presents a different aspect, may not adopt this point of view if he does not wish at the very outset to prejudice his prospects of contributing to the advancement of economic theory.
Über Wert, Kapital und Rente (Jena, 1893; London, 1933), pp. 50 f.
Schumpeter is surely mistaken in thinking that the necessity for money can be proved solely from the assumption of indirect exchange (see his Wesen und Hauptinhalt der theoretischen Nationalökonomie [Leipzig, 1908], pp. 273 ff.). On this point, cf. Weiss, Die moderne Tendenz in der Lehre vom Geldwert, Zeitschrift für Volkswirtschaft, Sozialpolitik und Verwaltung, vol. 19, pp. 518 ff.
Untersuchungen über die Methode der Sozialwissenschaften und der politischen Ökonomie insbesondere (Leipzig, 1883), pp. 172 ff.;
Grundsätze der Volkswirtschaftslehre, 2d ed. (Vienna, 1923), pp. 247 ff.
Grundsätze, pp. 278 ff.
A Treatise on Money and Essays on Present Monetary Problems (Edinburgh, 1888), pp. 22 ff.; Laughlin,
The Principles of Money (London, 1903), pp. 22 f.
Grundsätze, pp. 284 ff.
Geld und Kredit, 2d ed. [Berlin, 1885], vol. 1, pp. 233 ff.) has laid stress upon the function of money as interlocal transmitter of value.
Grundsätze, pp. 282 f.
Philosophie des Geldes, 2d ed. (Leipzig, 1907), p. 35; Schumpeter,
Wesen und Hauptinhalt der theoretischen Nationalökonomie (Leipzig, 1908), p. 50.
Jahrbücher für Nationalökonomie und Statistik (1886), New Series, vol. 13, p. 48.
Zur Lehre von den Bedürfnissen (Innsbruck, 1906), pp. 186 ff.; Weiss,
Die moderne Tendenz in der Lehre vom Geldwert, Zeitschrift für Volkswirtschaft, Sozialpolitik und Verwaltung, vol. 19, pp. 532 ff. In the last edition of his masterpiece
Capital and Interest, revised by himself, Böhm-Bawerk endeavored to refute Cuhel’s criticism, but did not succeed in putting forward any new considerations that could help toward a solution of the problem (see
Kapital und Kapitalzins, 3d ed. [Innsbruck, 1909-12], pp. 331 ff.; Exkurse, pp. 280 ff.).
Mathematical Investigations in the Theory of Value and Prices, Transactions of the Connecticut Academy (New Haven, 1892), vol. 9, pp. 14 ff.
op. cit., p. 538.
op. cit., p. 290.
op. cit., pp. 534 ff.
Essentials of Economic Theory (New York, 1907), p. 41. In the first German edition of the present work, the above argument contained two further sentences that summarized in an inadequate fashion the results of investigation into the problem of total value. In deference to certain criticisms of C. A. Verrijn Stuart (
Die Grundlagen der Volkswirtschaft [Jena, 1923], p. 115), they were omitted from the second edition.
Die Gemeinwirtschaft: Untersuchungen über den Sozialismus (Jena, 1922), pp. 100 ff.
Rechte und Verhältnisse (Innsbruck, 1881), pp. 120 ff.
Beiträge zur Lehre von den Banken (Leipzig, 1857), pp. 34 ff.
Das Geld, 6th ed. (Leipzig, 1923), pp. 267 ff.; English trans.,
Money (London, 1927), pp. 284 ff.
The Principles of Money (London, 1903), pp. 516 ff.
Englands Übergang zur Goldwährung im 18. Jahrhundert (Strassburg, 1895), pp. 64 ff.; Schmoller, “Über die Ausbildung einer richtigen Scheidemünzpolitik vom 14. bis zum 19. Jahrhundert,”
Jahrbuch für Gesetzgebung, Verwaltung und Volkswirtschaft im Deutschen Reich 24 (1900): 1247-74; Helfferich,
Studien über Geld und Bankwesen (Berlin, 1900), pp. 1-37.
Cours complet d’économie politique pratique, 3d ed. (Paris, 1852), vol. 1, p. 408; and Wagner,
Theoretische Sozialökonomik (Leipzig, 1909), Part II, pp. 504 ff. Very instructive discussions are to be found in the memoranda and debates that preceded the Belgian Token Coinage Act of 1860. In the memorandum of Pirmez, the nature of modern convertible token coins is characterized as follows: “With this property (of convertibility) the coins are no longer merely coins; they become claims, promises to pay. The holder no longer has a mere property right to the coin itself [
jus in re]; he has a claim against the state to the amount of the nominal value of the coin [
jus ad rem], a right which he can exercise at any moment by demanding its conversion. Token coins cease to be money and become a credit instrument [
une institution de crédit], banknotes inscribed on pieces of metal …” (see
Loi décrétant la fabrication d’une monnaie d’appoint … précédée des notes sur la monnaie de billon en Belgique ainsi que la discussion de la loi à la Chambre des Représentants [Brussels, 1860], p. 50).
Jahrbuch für Gesetzgebung, Verwaltung und Volkswirtschaft im Deutschen Reich 33 (1909): 985-1037; “Zum Problem gesetzlicher Aufnahme der Barzahlungen in Österreich-Ungarn,”
ibid. 34 (1910): 1877-84; “The Foreign Exchange Policy of the Austro-Hungarian Bank,”
Economic Journal 19 (1909): 202-11; “Das vierte Privilegium der Österreichisch-Ungarischen Bank,”
Zeitschrift für Volkswirtschaft, Sozialpolitik und Verwaltung 21 (1922): 611-24.
Die Hauptprinzipien des Geld-und Währungswesens und die Lösung der Valutafrage (Vienna, 1891), pp. 7 ff.; Gesell,
Die Anpassung des Geldes und seiner Verwaltung an die Bedürfnisse des modernen Verkehres (Buenos Aires, 1897), pp. 21 ff.; Knapp,
Staatliche Theorie des Geldes, 3d ed. (Munich, 1921), pp. 20 ff.
Allgemeine Münzkunde und Geldgeschichte des Mittelalters und der neueren Zeit (Munich, 1904), p. 215; Babelon,
La théorie féodale de la monnaie (
Extrait des mémoires de l’Académie des Inscriptions et Belles-Lettres, vol. 38, Part I [Paris, 1908], p. 35).
op. cit., p. 35.
Jahrbücher für Nationalökonomie und Statistik (1894), 3d. Series, vol. 7, p. 688.
Grundzüge der Volkswirtschaftslehre, trans. into German by Altschul (Leipzig, 1918), p. 357.
Studien in der romanisch-kanonistischen Wirtschafts-und Rechtslehre bis gegen Ende des 17. Jahrhunderts (Berlin, 1874), vol. 1, pp. 180 ff.
Cours d’économie politique, III., La monnaie (Paris, 1850), pp. 21 ff.; Goldschmidt,
Handbuch des Handelsrechts (Erlangen, 1868), vol. 1, Part II, pp. 1073 ff. | <urn:uuid:a3d8d8f3-8352-40b2-bd0e-fa4d4b5d663a> | CC-MAIN-2021-21 | https://www.econlib.org/library/Mises/msT.html?chapter_num=5 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00014.warc.gz | en | 0.924608 | 5,778 | 2.765625 | 3 |
I begin on a note of caution and with a paradox. The note of caution is that when we critically examine the explanations offered for youth suicide, we must do so with deep compassion. A parent, sibling, or friend in the wake of suicide by a child, adolescent, or young adult is enduring the virtually unendurable. To reach for an explanation, a way to come to terms, is human, healthy, and inevitable.
Now for the paradox. There is a new emphasis today on the role of mental illness in youth suicide. This is refreshing after the media myth, widespread in the 1980s, of young suicides as talented, misunderstood young people who were just under too much stress. The paradox is that, although psychiatric conditions like depression or substance abuse almost always accompany youth suicide, we may be sliding into another myth. This myth claims that if we understand mental illness, then we understand suicide and, therefore, can prevent it by identifying and treating the psychiatric disorders. But this may not prevent suicide in many people, particularly young people; we are discovering that suicide is more complex.
First, some background. Youth suicide is a serious public health problem in America, the third most common cause of death among 15-to 24-year-olds. Community surveys have revealed suicidal behavior among 3 to 8 percent of youth, making it a major source of morbidity, health care costs, and ultimate mortality. Suicide and suicidal behavior are often discussed together. The risk factors are similar; there is a high rate of completed suicide among those who have made earlier attempts; both conditions seem to run in families; and there are similar findings in the neurobiology of people who have attempted and completed suicide.
Suicide, especially among youths, has often been discussed from an environmental or sociological perspective. While these are valid viewpoints, they can obscure a crucial biological perspective on suicidal behavior. Better understanding of the biological and particularly the genetic processes that underlie youth suicide has urgent implications for preventing and treating youth at risk for suicide. An analogy can be made to understanding obesity or coronary artery disease. Environment and diet profoundly influence both of those diseases, but advances in prevention can come through understanding the biological processes that lead to illness when certain environmental conditions are present.
Risks—But for Whom?
The rate of youth suicide has increased markedly since 1950, but is this proof that suicidal behavior is primarily a social or an environmental phenomenon? In fact, there is evidence that increasing availability of both firearms and alcohol to youth during this period has contributed to the rise in their suicide rate. But note that the suicide rate, while unacceptably high, still encompasses a relatively small number of young people. If alcohol and guns are easily accessible, why are these environmental risk factors particularly potent for youth who kill themselves and not so for others also exposed to them? Two complex, partly genetic biological factors may be interacting with risks in the environment: psychiatric disorders and a tendency to impulsive aggression.
Suicide and suicidal behavior almost always point to serious psychiatric disorder.1 Nine out of ten suicide completers and attempters have at least one major psychiatric illness. In half of these cases, two or more such illnesses are present, dramatically increasing the risk for suicidal behavior. The most common psychiatric conditions predisposing to suicide and suicide attempts are mood disorders, alcohol and substance abuse, and behavior disorders—again, often in combination. Also, about one-third of suicide victims have made previous attempts, which increases the likelihood that they will one day succeed. Often the psychiatric conditions have taken hold several years prior to the suicide.
The emphasis on the role of psychopathology in youth suicide is refreshing after media presentations in the 1980s of youth suicides as misunderstood young people under stress. Some prevention programs in schools even taught that a “myth” of suicide prevention was that suicide victims were mentally ill. Yet if our attempts to understand the causes of youth suicide stop with delineating psychopathology and treating that mental illness, we may make barely a dent in the number of youth suicides.
Treatment of depression does not automatically translate into prevention of suicidal behavior. Recently, scientists analyzed a large group of studies of depressed adults who were enrolled in clinical trials that compared medication with placebo (“dummy” treatment).2 Despite a 30 percent greater improvement in depressive symptoms in the adults who were medicated, the rates of attempted and completed suicides were similar in the two groups. Here is strong evidence that the treatment of depression may not be sufficient to prevent suicide. Additional studies show a strong effect on preventing repetitive suicidal behavior without showing an improvement in mood, while others, as noted, show a strong impact on mood without influencing suicidal behavior.3 Thus psychopathology may be a necessary, but not a sufficient explanation of suicide and suicidal behavior, and treatment of psychopathology may be necessary but not always sufficient to prevent suicide.
A Surprising Force in Suicide
That leaves us casting about for additional causes. While there are several co-factors for suicidal behavior (for example, the availability of lethal agents in the home or a history of sexual abuse), the single most significant predisposing factor appears to be a liability to impulsive aggression. Impulsive aggression is a tendency to respond with hostility or aggression when faced with stress or frustration. This is sometimes termed “reactive aggression,” as distinguished from “predatory aggression.” When a child is taunted and responds out of anger with physical force, that is reactive aggression. When a child waits for another child to come around the corner so that he can steal his lunch money, that is predatory aggression. Children with attention deficit hyperactivity disorder (ADHD) do have prominent levels of impulsivity, but this is not necessarily associated with reactive aggression.
There is mounting evidence for three facts about impulsive aggression: 1. It is related to suicidal risk; 2. it is strongly correlated with changes in brain levels of the chemical messenger serotonin; and 3. it is genetically transmitted. Suicidal behavior seems to be passed from generation to generation in families. Could this be because impulsive aggression is transmitted? Would a better understanding of impulsive aggression have implications for preventing suicide and treating those at risk for suicidal behavior?
Examining suicide across the life span makes clear that youth suicide is qualitatively different from suicide in persons older than 30.4 Younger victims are more often responding to interpersonal and legal difficulties, have more impulsive aggression and more problems with substance abuse, and are less frequently depressed. (That is, however, only relative to older individuals; depression is still an important contributor to suicide risk among the young.) All this suggests that the problem of impulsive aggression may be particularly salient in understanding youth suicide. Those who treat depressed suicidal young people can attest to the difficulties that can arise from addressing the depression but failing to address the impulsiveness. Their patients may partially recover from the depression but, when faced with stress, may try again to commit suicide.
There are important public health implications of the more impulsive nature of youthful suicidal behavior. For example, people under age 24 are most susceptible to media influences that increase suicidal behavior; they are more likely than older individuals to commit suicide around the same time and in the same community as other youth suicides. We call this a time-space cluster.1 The availability of firearms may be an especially prominent risk factor for completed suicide among younger victims, probably because suicide is a more impulsive act in the young.1, 3 This in turn suggests that for younger, more impulsive individuals, restricting access to firearms and other means of self-destruction—at a minimum, securing them in households where youth live—may be a relatively important component of suicide prevention.
In any case, there is good evidence that impulsive aggression is much more common in those who complete or attempt suicide than in demographically similar individuals. Even where people have similar psychiatric risk factors, those with impulsive aggression are at greater risk of suicide. Impulsive aggression may be related to substance abuse, which can further disinhibit an already impulsive person, with disastrous consequences. And drinking alcohol makes the use of a gun in a completed suicide much more likely.3
On the biological level, studies have shown a remarkable convergence between impulsive aggression and suicidal behavior, with both behaviors apparently correlated with alterations in serotonin in the central nervous system.5 Serotonin levels have also been correlated with impulsive aggressive behavior in non-human primates. Postmortem studies in humans have found alterations in serotonin receptors in the brains of suicide victims; the brain region showing the most striking changes is the orbital prefrontal cortex, which is intimately involved in the exercise of restraint. In fact, one PET study of impulsive murderers showed diminished metabolic rate in the prefrontal cortex.7
Both impulsive aggression and serotonin levels are controlled by our genes. Studies in twins have shown that genes explain around 40 percent of impulsive aggression; studies of non-human primates show that around 40 percent of the variance in serotonin levels is influenced by genes. Several studies have related variations in genes to variations in serotonin transmission in the brain and also to suicide attempts and impulsive aggression.
Making the Lethal Connections
So we know that impulsive aggression appears to be partly genetic and a logical culprit in suicide risk. But is it related to genetic risk for suicide? To what extent is suicidal behavior familial (and perhaps genetic), and do suicide and impulsive aggression appear to run together in the same families?
The evidence to support suicide running in families comes from adoption, twin, and family studies.6 Adoption studies look at how frequently suicide occurs in both the biological and adoptive relatives of adopted people who committed suicide, and then compare that with the frequency in biological and adoptive relatives of adopted people who did not commit suicide. Twin studies look at how frequently suicidal behavior occurs in both identical twins, and then compare this with the frequency in fraternal twins. If a condition is genetic, both identical twins should show suicidal behavior more frequently than both fraternal twins would, since identical twins completely share the same genes, while fraternal twins on average share only half their genes. Family studies look at the frequency of suicide or suicide attempts in the relatives of suicide victims or suicide attempters, compared to the frequency in the relatives of similar young people who do not exhibit suicidal behavior. An increased rate of suicidal behavior in the relatives of suicide victims compared to the rate in relatives of controls would point to family transmission of suicidal behavior, but would not differentiate between genetic and environmental causes of that transmission. For example, familial suicide rates might be elevated because of a true genetic predisposition to suicide, or because of other shared family characteristics such as poverty, frequent moves between neighborhoods, or discord. Let us look at some results of these three kinds of studies.
An adoption study conducted in Denmark shows strong evidence of some type of genetic effect for suicide. Adopted individuals who had committed suicide were matched with adopted individuals still living; then the rates of suicide were compared in their biological and adoptive relatives. The rate of suicide in the biological relatives of the suicide adoptees was six times higher than the rate of suicide in the biological relatives of the living adoptees, but there was no difference in the rates of suicide in the two sets of adoptive relatives. This strongly supports the theory of genetic causes for suicide, and also provides evidence against imitative behavior being an important component, since the adoptive relatives had no effect on the rate. What is less clear is what exactly is being transmitted. Is it an increased risk for a particular type or severity of psychiatric disorder, or some other trait?
Another adoption study by the same researchers examined the rate of suicide in the biological relatives of adoptees who had mood disorders. They found that biological relatives of adoptees with mood disorders had a 15-fold increased risk for suicide. What was curious is that the subgroup of adoptees whose relatives had the highest risk for suicide were those with borderline personality disorder (called “affect reaction” in Denmark). This is a condition that often occurs with mood disorders, but is not itself considered a mood disorder. Instead, it characterizes people with difficulties regulating their emotions and impulses—that is, with impulsive aggression. This observation suggests that familial transmission of suicidal behavior may be related to familial transmission of impulsive aggression. The suicides in this study were not further described, so we do not know whether the biological relatives of the adoptees who committed suicide were also impulsively aggressive.
Twin studies also point to a role for genes in suicidal behavior. Reviewing all reported cases where both twins committed suicide, a researcher showed that this occurred much more often in identical twins than in fraternal twins. Taking it a step further, this researcher showed that the rate of attempted suicide among those who had survived a twin’s suicide was higher in the identical twins. This is important. It illustrates that attempted and completed suicides fall along the same spectrum, and that examining both gives us the genetic component of suicidal behavior. The main weakness of these particular twin studies is that they are based on case reports in the literature, so are not necessarily representative of all twins.
Look next at a large, representative study of almost 3,000 Australian pairs of twins. Again, both identical twins were much more likely to attempt suicide than both fraternal twins. If a twin had made a suicide attempt, the other twin was at nearly an 18-fold increased risk for a suicide attempt. Serious suicide attempts were highly heritable, with 55 percent of the variance explained by genetic factors. There was no clear relationship between the timing of the suicide attempts by the sets of twins, suggesting that the cause was not imitation. Even after controlling for other risk factors for suicidal behavior—mental illness, a history of abuse, personality problems, and exposure to stress—family history of suicide attempts persisted as a strong predictor of attempts in the remaining twin. This provides some of the most compelling evidence to date that familial transmission of suicidal behavior and of mental illness are separate strands. Unfortunately, this study once again shed no light on exactly what is being transmitted that predisposes a person to suicidal behavior.
Over the past two decades, about a dozen family studies have examined the rate of suicidal behavior in the relatives of either suicide attempters or completers and then compared it to the relatives of controls.1, 6 Several consistent findings emerged. First, the rate of suicide attempts is increased in the families of both suicide completers and suicide attempters. This further bolsters the view that what is being transmitted in families is a tendency toward suicidal behavior, rather than to suicide per se. Interestingly, suicidal thinking was not transmitted along with suicidal behavior, but instead was transmitted along with depression.
Second, the transmission of suicidal behavior cannot be explained by the transmission of psychiatric illness alone. The distinctness of the familial transmission of mood disorders and of suicide was best illustrated by a study of the Old Order Amish, which reported some genetic family trees that were loaded with mood disorders yet had not a single suicide, whereas others, also loaded with mood disorders, showed rates of suicide 100 times the expected rate. Several other studies showed that, while the rate of psychiatric disorder was increased in the relatives of suicide completers or attempters, their increased rate of suicidal behavior persisted even after controlling for the increased rate of psychiatric disorder.
Third, several studies support the initial observation in the Danish adoption study that the tendency to suicidal behavior is transmitted along with a tendency to impulsive aggression. Relatives of suicide attempters or completers who were more aggressive or who had made suicide attempts by violent methods were much more likely themselves to have engaged in suicidal behavior.6 This suggests that what families are transmitting is the tendency to convert suicidal thinking into action. An ongoing study comparing the risk of attempted suicide in the offspring of people with mood disorders who attempted suicide with a group of people with mood disorders who never attempted suicide shows patterns of familial transmission largely consistent with these conclusions.
I have dwelled at some length on these studies because they suggest not only the strong correlation of inherited impulsive aggression with suicide, but the interaction of multiple genes in complex brain disorders.
Are there alternative explanations of these observations? Could suicidal behavior run in families as a result of imitative behavior or of the accumulated impact of loss or psychiatric disability, or of shared family adversity, such as physical or sexual abuse?
Imitative suicidal behavior cannot be dismissed. Media exposure to fictional and nonfictional stories of suicide is followed by approximately a 10 percent increase in the suicide rate, usually for about two weeks.1, 7 The greater the publicity, the more marked the effect, particularly among adolescents and young adults. On the other hand, there is no indication of a higher rate of suicide or suicide attempts among friends and siblings of suicide victims. In fact, they seem to be inhibited from engaging in suicidal behavior by their exposure to the grief of the friends and families of the suicide victims. It may be that imitation is more likely to occur if the exposure is by hearsay, through the media, or by some other, less intimate contact. The absence of any imitative effect in adoption studies, or in the one twin study that examined this question, makes imitation less likely to explain familial clustering. On the other hand, exposure to a friend’s suicide attempt (rather than completed suicide) may increase the risk for a suicide attempt in a young person, so that in theory, this mechanism could be at play. The studies published thus far do not support imitation as an explanation for familial transmission of suicidal behavior.
Could suicidal behavior be transmitted in families through the impairment or disability of psychiatrically-ill parents? Mental illness of parents, particularly depression and substance abuse, does increase the risk of youth suicide.1 Since the transmission of suicidal behavior persisted in several studies after controlling for rates of mental illness among relatives, however, parental disability alone is not a good explanation for the familial transmission of suicidal behavior.
Perhaps the most plausible alternative explanation for a family’s transmission of suicidal behavior is the experience of adversity shared by members of a family living in the same environment.1 Poor parent-child communication is a risk factor for completed suicide, even after controlling for parental psychopathology. In another study, parent-child discord was a risk factor for completed suicide, although after controlling for both parental and child psychopathology, it was not a significant contributor. Maltreatment, however—physical, sexual, or emotional abuse—is a potent risk factor for completed and attempted suicide,1 and contributes to suicidal risk even after controlling for other psychopathological risk factors. It appears that sexual abuse is a stronger risk factor for attempted suicide than physical abuse, and that the risk for suicide attempts is proportional to the severity of the abuse, with the greatest risk associated with more severe abuse, such as vaginal penetration.
Given that there are associations between parental depression, substance abuse, suicide attempt, and abusive behavior, the familial transmission of abuse and suicidal behavior may not be an either/or proposition. That is, genetic factors related to psychopathology and impulsive aggression may make a parent more likely to be abusive, thereby giving the child a double dose of risk through both the genetic endowment associated with the abuse and the trauma of the abuse itself. Depression appears to emerge in abused children as the result of interaction between a family history of depression and the stress of the abuse, 8 and there is no reason to believe that the same paradigm may not hold in the transmission of suicidal behavior as well.
These findings of the familial transmission of suicidal behavior are entirely consistent with the serotonin hypothesis of suicidal behavior, which says that alterations in central serotonin are associated with both attempted and completed suicide and with impulsive aggression; these alterations appear in part to be under genetic control.5, 6 While the results of studies to search for genes that may underlie suicidal behavior are not entirely consistent, they suggest that variations in serotonin-related genes (two of these are tryptophan hydroxylase, or TPH, and monoamine oxidase A, or MAOA) are related to changes in central serotonin metabolism, suicide attempts, and greater likelihood of impulsive aggression. What is particularly interesting about some of these gene studies is that the effects may be most prominent in men, who complete suicide much more commonly than women do. Moreover, the serotonin hypothesis could even be considered consistent with a prominent role for abuse in the familial transmission of suicidal behavior, since it has been demonstrated in human and nonhuman primates that adverse early environment can interfere with central serotonin metabolism.9, 10
Because our ability to predict and prevent suicide is still limited, these research results make a compelling case for trying to understand the genetics of suicidal behavior and its relationship to impulsive aggression and family adversity. Genetic linkage strategies examining personality traits such as impulsive aggression may shed light on the genetic contribution to suicidal behavior. Moreover, examining changes in genetic expression, such as in postmortem samples of suicide victims, may also provide insight into which genes are involved in suicidal behavior. On a clinical level, the prominent role of impulsive aggression in youth suicidal behavior supports the idea that we should assess and target this behavior in the treatment of young people at risk for suicide.
By combining state-of-the-art treatments for both impulsive aggression and predisposing conditions like depression, we may ultimately be more successful in preventing youth suicide. For example, there are now studies suggesting that lithium may be helpful in decreasing aggression, independent of its effect on mood, so it may have an important role in decreasing aggression and lowering risk for suicide. A process called dialectical behavior therapy (DBT) has been shown in some studies to reduce the risk of suicidal behavior in impulsively aggressive women. Both of these promising treatments have yet to be evaluated in young people at high risk for suicide.
As we recognize our limited ability to detect and intervene with impulsive aggression in children, adolescents, and young adults, we may be able to consider other means of preventing suicide, such as restricting lethal agents (such as a loaded gun) from homes of vulnerable youth and avoiding sensationalistic publicity about suicide that is likely to trigger imitative acts among vulnerable youth. Currently, however, physicians frequently do not ask about firearms in the home, and when at-risk patients are counseled about removing firearms, the results are disappointing. Therefore, further work needs to be done about the best way to separate firearms from young people at risk. Work is ongoing with leaders in the media aimed at developing realistic journalistic standards that result in stories that are informative, but avoid unnecessary sensationalization.
The Talmud says that “he who saves a life, it is as if he has saved the entire universe.” This is particularly so in preventing suicide by young people, because the tragedy of youth suicide is that an impulsive act can end a lifetime of possibilities. Not only does suicide in a young person cruelly truncate what should be a normal lifespan, it leaves in its wake family and friends whose grief is often inconsolable. Although we still have much to learn, most teens who take their lives have treatable mental illnesses. By awakening public awareness of this problem, by improving the astuteness of detecting and properly treating the underlying problems, and by conducting innovative research about the causes of suicide, we can reverse the loss of a lifetime.
- Beautrais A. Risk factors for suicide and attempted suicide among young people. Australia and New Zealand Journal of Psychiatry. 2000;34:420-436.
- Khan A, Warner HA, Brown WA. Symptom reduction and suicide risk in patients treated with placebo in antidepressant clinical trials: An analysis of the food and drug administration database. Archives of General Psychiatry. 2000; 57:311-317.
- Brent DA. Assessment and treatment of the youthful suicidal patient. Annals of the New York Academy of Science. 2001; 932:106-131.
- Rich CL, Young D, Fowler RC. San Diego suicide study. I. Young vs old subjects. Archives of General Psychiatry. 1986;43:577-582.
- Mann JJ. The neurobiology of suicide. Nature Medicine. 1998;4:25-30.
- Mann JJ, Brent DA, Arango V. The neurobiology and genetics of suicide and attempted suicide: A focus on the serotonergic system. Neuropsychopharmacology. 2001; 24(5): 467-477.
- Gould MS, Shaffer D. The impact of suicide in television movies: Evidence of imitation. New England Journal of Medicine. 1986; 315:690-694.
- Kaufman J, Birmaher B, Brent D, et al. Psychopathology in the relatives of depressed-abused children. Child Abuse Neglect. 1998: 22:171-181.
- Pine D, Coplan J. Neuroendocrine response to D.1-fenfluramine challenge in boys: associations with aggressive behavior and adverse rearing. Archives of General Psychiatry. 1997; 54: 839-846.
- Kraemer GW, Ebert MH, Schmidt DE, et al. A longitudinal study of the effect of different social rearing conditions on cerebrospinal fluid norepinephrine and biogenic amine metabolites in rhesus monkeys. Neuropsychopharmacology. 1989; 2: 3:175-189. | <urn:uuid:57adf859-73dc-437c-aa97-6bf208362a8e> | CC-MAIN-2021-21 | https://www.dana.org/article/is-impulsive-aggression-the-critical-ingredient/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00376.warc.gz | en | 0.949349 | 5,295 | 3.09375 | 3 |
What socio-psychological factors contribute to youth delinquency?
Abstract: This paper explores primarily the social factors, but also psychological factors such as personality and intelligence, that contribute to youth delinquency. Research has found that media exposure increases aggression in youths, and that socio-economic status, along with peer relations, education and parental neglect, can increase the probability of juveniles engaging in delinquent behaviour. Two theories of personality, together with findings on intelligence, are examined, showing that psychological factors may also contribute to juvenile delinquency.
Youth Delinquency: Youth delinquency is essentially a criminal act committed by a juvenile, usually defined as a person between seven and eighteen years of age (Schafer & Knudten 1970). It is usually a by-product of antisocial behaviour, which refers to behaviour that damages interpersonal relationships, is culturally unacceptable (Baumeister & Bushman 2008), or both. Antisocial behaviour is commonly identified in youths who self-harm, or who are truant from school and engage in theft or drug-taking (Luncheon, Bae, Gonzalez, Lurie and Singh 2008). A large part of youth delinquency can also be attributed to a form of antisocial behaviour known as aggression.
Aggression is a major factor in youth delinquency because acts of aggression are usually directed at other people and, as such, are crimes. Two types of aggression identified by Baumeister and Bushman (2008) are "hostile aggression" and "instrumental aggression". Hostile aggression comprises crimes or acts with impulsive or emotive motivations, whereas instrumental aggression is more calculated and goal-driven. This difference in motivation has led researchers to explore whether aggression in youths, and subsequently in adults, is a result of the increasing violence shown in the media, of situational circumstances, or of psychological factors such as personality.
Media, video games and their effects on aggression
In the early 1950s, horror comics were criticised and linked to juvenile delinquency. Since then, television as well as video and computer games have been accused of undermining moral values and cultivating a more violent and criminally oriented social climate (Gunter, 1994). Clint Eastwood's movie "Dirty Harry" has been linked to copycat serial killings, and more recently the school shootings at Columbine (1999) have been linked to violent video games (Carnagey, Anderson & Bartholow 2003). Numerous studies have examined what effects video game playing has on feelings of aggression and subsequent aggressive acts.
Video Games: A large number of studies have looked at the features of video games that increase frustration and feelings of hostility in the youths and adolescents who play them. This research has led to the application of the General Aggression Model (GAM) to violent video game studies (Bartlett, Harris and Baldassaro 2007). The GAM encompasses past theories of aggression and relies on short-term affect, arousal and cognition components (Anderson & Bushman 2002). According to Anderson (2002), the GAM can account for the wide variety of effects seen in the media violence literature, whereby the exposed child becomes desensitised to violence and habitually more aggressive. The GAM suggests that individual factors interact with situational factors, which may lead to a person's feelings affecting their real-world actions (Bartlett et al. 2007). Therefore, if a youth has been playing a violent video game and has experienced an increase in frustration or hostility, they are more likely to act out. Bartlett et al. sought to demonstrate this in an experiment in which participants played a video game while measures were taken at timed intervals: participants responded to story stems describing how they would react to certain scenarios, and their heart rate was recorded.
The results supported the theory that aggression increases while playing violent video games: there was an increase in physiological arousal (heart rate), and responses to the story stems became significantly more aggressive between baseline and the timed interval. The study found that the GAM offered an adequate explanation of the short-term effects of video games (Bartlett et al. 2007). Although this particular study was conducted on participants with an average age of 19 (one year older than what is recognised as juvenile), the GAM suggests that desensitisation would have occurred during earlier exposures to such stimuli, while participants were still juveniles, and the study's high internal validity does not suggest that the findings would differ significantly had the participants been 18.
Television and movies: Concern about movies, television and their effects on young viewers stems from the social learning theory of imitation (Leyens, Herman and Dunand 1984). In 1961, Albert Bandura conducted an experiment measuring levels of aggression in children. An adult model exhibited physical and verbal aggression towards a "Bobo" doll; afterwards, Bandura placed each child in a room with the doll and observed what happened. Children who had witnessed the aggressive acts were much more likely to repeat them when placed in the same situation than those who had not (Bandura 1961). The experiment was run with live models as well as a videotaped model, with no difference in results. It can therefore reasonably be inferred that youths who witness violent acts in movies and television are more likely to repeat those acts than those who do not. Criticism of the contrived nature of the experiment and the use of artificial films was, however, quietened by numerous field experiments that yielded the same results (Leyens et al. 1984). Numerous studies have also found that the inclination to act aggressively strengthens upon watching violent acts, and that a movie or show can act as a primer for an individual to act out (Berkowitz, 2008). An experiment demonstrating this effect was carried out by Josephson (1987). In her experiment, a group of school boys were first frustrated by some act, after which they watched either a violent or non-violent television show. The participants were then observed playing a game of hockey; the boys who displayed more aggressive acts throughout the game were those who had seen the violent show.
Socio-economic factors: Class is considered an important social marker that plays an undeniable role in deviance (Wahrman, 1972). Studies have been conducted and replicated using a range of socioeconomic measures, including income, poverty and status (Fergusson, Swain-Campbell & Horwood, 2004). Each has led to the same conclusion: youths of lower socioeconomic standing are more likely to be delinquent. This idea was further explored by the strain theory (Merton, 1938), according to which individuals of lower socioeconomic status are more likely to engage in delinquent behaviour to alleviate the imbalance and strain of their social situation (Fergusson et al. 2004). According to Merton, individuals can adapt to this dilemma in five ways:
1. Innovation: accepting socially approved goals, but not necessarily the socially approved means. Individuals adapting through innovation may aim for socially approved goals but, lacking the opportunities of the higher classes, may be more likely to engage in delinquent behaviour to attain them.
2. Retreatism: rejecting both socially approved goals and the means for acquiring them. These individuals may entirely shun societal norms and pursue whatever they want without regard for societal laws, thus engaging in delinquent activity.
3. Ritualism: accepting a system of socially approved means while losing sight of the goals. Merton believed that drug users fall into this category.
4. Conformity: conforming to the system's means and goals.
5. Rebellion: negating socially approved goals and means by creating a new system of acceptable goals and means (Wikipedia 2008).
Peer relationships: Fergusson, Swain-Campbell and Horwood (2004) further suggest a differential association theory as an influencing social factor in youth delinquency.
Differential association theory states that the relation between youth delinquency and socioeconomic status arises because youths of the lower socioeconomic class have greater exposure to criminal peers and environments. Sutherland's (1947) original finding, that personal networks lead to either a favourable or unfavourable view of delinquency, supports Haynie's (2002) finding that adolescents who report that their friends are delinquent tend to report higher levels of delinquency than adolescents with fewer or no delinquent friends. If the strain theory is applied, it is not unreasonable to deduce that more individuals of the lower socio-economic class will engage in delinquent activities; that more delinquents will form peer relationships with other delinquents or with non-delinquents, increasing both the likelihood of non-offenders taking up delinquent behaviour and of repeat offending in already delinquent individuals; and that the cycle of youth crime will thereby be perpetuated.
Education: Youth delinquency has many interacting and causal effects, as is the case with education. Blackorby and Wagner (1996) found that many juvenile delinquents are unable to attain the skills and knowledge that would help them secure employment or further their academic careers, owing to expulsion or dropping out of school. This inability can be linked back to the strain theory discussed above: the disadvantage of lacking education and employment may create an imbalance in the social situation, which may lead to delinquent activity as a means of achieving goals.
Parenting: Family circumstances have a considerable impact on the risk of engaging in some form of delinquency. Studies show that children who receive adequate parental supervision are less likely to engage in criminal activity, while dysfunctional family settings characterised by conflict, inadequate parental control and premature autonomy are more closely associated with juvenile delinquency (World Youth Report 2003). Hostility and rejection, as well as low child involvement, are the most salient predictors of behavioural problems and delinquency (Simons, Simons, Chen, Brody, & Lin 2007). These lines of study are important: Gerstein and Briggs (1993) found that 30 percent of the violent offenders in their study were reared in the absence of a father. Such studies have also allowed intervention programs to be introduced to try to control increasing delinquent behaviour in youths. One such program, outlined by Connell, Dishion, Yasui and Kavanagh (2007), focuses on preventing substance abuse in youths by targeting problems in the family arena, primarily parental monitoring and management of children engaging in delinquent activities. The research demonstrates that motivating parents to manage and monitor their children results in less delinquent behaviour by the youth. Further research should look into developing similar intervention programs for the other aspects of parenting that contribute to an increased likelihood of delinquent activity.
Personality: The three major personality factors according to Eysenck (1977) are psychoticism, extraversion and neuroticism. According to Eysenck's criminal theory, juvenile delinquents should score highly on all three personality dimensions (Van Dam, De Bruyn & Janssens 2007). To test this theory, Eysenck surveyed a sample of males in juvenile detention, assessing their levels on the personality dimensions against a control group of college participants. The study found the high extraversion levels of juvenile offenders to be statistically highly significant, suggesting that highly extraverted juveniles who score low on neuroticism and psychoticism are more at risk of becoming delinquent.
While Eysenck described personality in three dimensions, Block and Block (1980) looked at personality in two areas, ego control and ego resiliency, later used to identify three personality types: over-controllers, under-controllers and resilients (Akse, Hale, Engels, Raaijmakers & Meeus 2007). Over-controllers tend to internalise problems, under-controllers tend to externalise them, while resilients strike a healthy balance (Akse et al. 2007). In internalising their problems, over-controllers tend to reject help from others, isolate themselves and suffer increased anxiety and depression, whereas under-controllers, who externalise their problems, are more likely to act out in a deviant manner (Akse et al. 2007).
Intelligence: Delinquency is found to be more prevalent and more frequent among young males with a low IQ (Koolhof, Loeber, Wei, Pardini & D'Escury 2007). An experiment by Koolhof et al. compared impulsivity, psychopathy and empathy between high- and low-IQ individuals. The results showed a significant difference in impulsivity among individuals with a lower IQ, who were also found to be less empathetic and to report fewer feelings of guilt. This is an important finding, as these factors are related to delinquency, and it suggests that individuals with a low IQ are therefore more prone to juvenile delinquent behaviour (Koolhof et al. 2007).
Attachment: Bowlby (1969) theorised that as children we create internal working models based on the responsiveness of our primary caregiver. These internal working models allow us to predict the future and how to react to our environment and the people in it. He predicted that children who form a secure attachment feel free to explore and interact with their environment, comfortable in having their mother as a secure base should anything happen. This security is based on previous experience of the mother or primary caregiver responding to the child's needs; it is likely to continue through life and to set children up to maintain strong social connections (Sigelman & Rider 2006). Conversely, a child whose needs were not consistently met may form an insecure attachment, and may develop a tendency to avoid social situations or have trouble regulating mood, emotion and behaviour.
A study by Elgar, Knight, Worrall and Sherman (2003) found support for Bowlby's insecure attachment theory. In surveys in which youth delinquents reported on attachment characteristics, substance abuse and behavioural problems, insecure attachment was found to be related to the internalising and externalising of behaviours; as discussed above, those who have problems regulating behaviour in this way often act out in a deviant manner. Insecure attachment was also related to antisocial and aggressive behaviour, a precursor to delinquent behaviour.
Conclusion: Delinquency in children can be affected by a myriad of factors in both the social and psychological realms. The media increases hostility and aggression in youths; socio-economic status may affect how an individual acts in trying to reap the benefits that the more fortunate already enjoy, and may in turn affect with whom the individual forms peer relationships, which may further increase delinquent behaviour. Parental neglect results in higher rates of juvenile delinquency, as do certain psychological traits relating to personality and intelligence. Studies of attachment in infancy have also shown effects persisting into adolescence. The best method of reducing the risk of delinquent behaviour appears to be intervention programs, such as motivating parents to monitor and manage their children and their behaviours; further research might look at other areas where similar programs could be introduced to alleviate some of the problems contributing to youth delinquency.
References
- Akse, J., Hale, B., Engels, R., Raaijmakers, Q. and Meeus, W. 2007. Co-Occurrence of Depression and Delinquency in Personality Types. European Journal of Personality.
- Anonymous. 2003. World Youth Report, 2003.
- Bartlett, C.P., Harris, R.J. and Baldassaro, R. 2007. Longer You Play, the More Hostile You Feel: Examination of First Person Shooter Video Games and Aggression During Video Game Play. Aggressive Behaviour, Vol. 33, 486-497.
- Baumeister, R.F. and Bushman, B.J. 2008. Social Psychology and Human Nature (1st ed.). Belmont, CA: Thomson Wadsworth.
- Berkowitz, L. 2008. On the Consideration of Automatic as well as Controlled Psychological Processes in Aggression. Aggressive Behaviour, Vol. 34, 117-129.
- Blackorby, J. and Wagner, M. 1996. Longitudinal Post-School Outcomes of Youth with Disabilities: Findings from the National Longitudinal Transition Study. Exceptional Children, Vol. 62, 399-413.
- Block, J.H. and Block, J. 1980. The Role of Ego-Control and Ego-Resiliency in the Organization of Behavior. In W.A. Collins (Ed.), Development of Cognition, Affect, and Social Relations. Hillsdale: Lawrence Erlbaum Associates.
- Carnagey, N.L., Anderson, C.A. and Bartholow, B.D. 2003. Media Violence and Social Neuroscience: New Questions and New Opportunities. Current Directions in Psychological Science.
- Connell, A.M., Dishion, T.J., Yasui, M. and Kavanagh, K. 2007. An Adaptive Approach to Family Intervention: Linking Engagement in Family-Centered Intervention to Reductions in Adolescent Problem Behavior. Journal of Consulting & Clinical Psychology, Vol. 75.
- Elgar, F.J., Knight, J., Worrall, G.J. and Sherman, G. 2003. Attachment Characteristics and Behavioural Problems in Rural and Urban Juvenile Delinquents. Child Psychiatry and Human Development, Vol. 34.
- Fergusson, D., Swain-Campbell, N. and Horwood, J. 2004. How Does Childhood Economic Disadvantage Lead to Crime? Journal of Child Psychology and Psychiatry, Vol. 45, 956-966.
- Gerstein, L.H. and Briggs, J.R. 1993. Psychological and Sociological Discriminants of Violent and Nonviolent Serious Juvenile Offenders. Journal of Addictions & Offender Counseling, Vol. 14.
- Glueck, S.E. 1962. Family Environment and Delinquency. London: Routledge and Kegan Paul Limited.
- Gunter, B. 1994. In Hagell, A. and Newburn, T., Young Offenders and the Media: Viewing Habits and Preferences. London: Policy Studies Institute.
- Haynie, D.L. 2002. Friendship Networks and Delinquency: The Relative Nature of Peer Delinquency. Journal of Quantitative Criminology, Vol. 18.
- Josephson, W. 1987. Television Violence and Children's Aggression: Testing the Priming, Social Script, and Disinhibition Predictions. Journal of Personality and Social Psychology, Vol. 53, 882-890.
- Koolhof, R., Loeber, R., Wei, E.H., Pardini, D. and D'Escury, A.C. 2007. Inhibition Deficits of Serious Delinquent Boys of Low Intelligence. Criminal Behaviour and Mental Health, Vol. 17, 274-292.
- Merton, R.K. 1938. Social Structure and Anomie. American Sociological Review, Vol. 3, 672-682.
- Schafer, S. and Knudten, R.D. 1970. Juvenile Delinquency: An Introduction. New York: Random House.
- Sigelman, C.K. and Rider, E.A. 2006. Life-Span Human Development (5th ed.). Belmont, CA: Thomson Wadsworth.
- Simons, R., Simons, L., Chen, Y., Brody, G. and Lin, K. 2007. Identifying the Psychological Factors that Mediate the Association Between Parenting Practices and Delinquency. Criminology, Vol. 45, 481-517.
- Van Dam, C., De Bruyn, E.J.E. and Janssens, J. 2007. Personality, Delinquency and Criminal Recidivism. Adolescence, Vol. 42, 763-777.
- Wahrman, R. 1972. Status, Deviance, and Sanctions: A Critical Review. Comparative Group Studies, Vol. 3, 203-223.
Historical Markers and War Memorials in Carbon County, Pennsylvania
"Our lives were lived in the open," he remembered, "winter and summer. We were never in the house when we could be out of it. And we played hard. I emphasize this because boys and girls who would grow up physically fit adults must lay the . . . — — Map (db m116592) HM
This black diamond is a piece of the mammoth coal vein found in the Panther Valley. It was placed here on August 28, 1976 as a monument to the enterprising spirit of men such as Josiah White and Erskine Hazard, whose early pioneering efforts . . . — — Map (db m141004) HM
Formed March 13, 1843 from Northampton and Monroe counties. Carbon is the basic element of this area's rich deposits of anthracite coal. The county seat, incorporated in 1850 as Mauch Chunk, was renamed in 1954 for Jim Thorpe, Indian athlete. — — Map (db m32150) HM
"The problem of getting coal from wagon to ark was solved by constructing an inclined loading chute at Mauch Chunk that extended downward from Mount Pisgah to a coal-loading house along the Lehigh River. The coal house projected over the river's . . . — — Map (db m138499) HM
"It should be borne in mind that in timber dams it is the weight of the stone ballast that keeps the structure in place, and not the bulk and combination of the timbers." Edwin F. Smith, Dam Building in Navigable and Other Streams . . . — — Map (db m138500) HM
“Our residents take pride and partner in their heritage — they understand the meaning of what we have and act to preserve it” Delaware & Lehigh National Heritage Corridor and State Heritage Park, Management Action Plan. . . . — — Map (db m153538) HM
“Our residents take pride and partner in their heritage — they understand the meaning of what we have and act to preserve it” Delaware & Lehigh National Heritage Corridor and State Heritage Park, Management Action Plan. . . . — — Map (db m153540) HM
"A few miles above Easton, the Lehigh was pocked with white water at almost every turning. To navigate it seemed impossible."
Josiah White, Co-founder of the Lehigh Coal and Navigation Company
Pennsylvania's anthracite (hard coal) lay entombed . . . — — Map (db m141005) HM
"Since most of the land was donated to the railroads by the American public in the first place, we believe it should be returned to the public." David Burwell, President, Rails-to-Trails Conservancy, 1988 A Well-Worn Path The path you . . . — — Map (db m153536) HM
1887 · Born May 22 on the Sac and Fox Reservation near Belmont, Oklahoma Territory.
1904 · Enters Carlisle Indian School in Pennsylvania, scene of his brilliant career in college football.
1911 · First Team All-American Football Team at . . . — — Map (db m116566) HM
PENTATHLON EVENTS -
Broad Jump - 1st place = 23 feet 2.7 inches
Javelin - 3rd place = 153 feet 3 inches
Discus - 1st place = 116 feet 8.4 inches
200-Meter Dash - 1st place = 22.9 seconds
1,500-Meter Race - 1st place = 4 min. 44.8 . . . — — Map (db m116567) HM
"Sir, you are the greatest athlete in the world."
King Gustav V of Sweden
Born in Oklahoma Territory in 1888, Jim Thorpe was a member of the Sac and Fox tribe. Prophetically named Wa-tho-huck (Bright Path) by . . . — — Map (db m116724) HM
"Sir, you are the greatest athlete in the world"
King Gustav, Stockholm Sweden, 1912 Olympics
[Athletic engravings on mausoleum] . . . — — Map (db m116564) HM
Joe Boyle Made A Difference
It happened again and again, for over 60 years. It happened at high school athletic games, at meetings of the Lions, V.F.W., Rotary, Chamber of Commerce, Borough Council, and Y.M.C.A. For decades it . . . — — Map (db m116526) HM
Welcome to Lehigh Gorge State Park. This 4,548-acre park stretches 32 miles along the Lehigh River from the Francis E. Walter Dam in the north to Jim Thorpe in the south. Carved by the power of the Lehigh River, the park's deep gorge, steep . . . — — Map (db m153550) HM
Built in 1888. Central Railroad of New Jersey Passenger Station has been placed on the National Register of Historic Places. — — Map (db m86850) HM
On June 21, 1877, four "Molly Maguires," an alleged secret society of Irish mine-workers, were hanged here. Pinkerton detective James McParlan’s testimony led to convictions for violent crimes against the coal industry, yet the facts of the labor, . . . — — Map (db m32153) HM
In grateful commemoration
of the patriotism of
in going over the top
National War Savings Campaign
of 1918 — — Map (db m116561) HM
“So far back as 1827, the Company constructed down-grades on which loaded cars were run down to the Lehigh River by their own gravity.” —“Special Correspondent, New York Times, December 14, 1872, p 3.” Geography . . . — — Map (db m153422) HM
Standing on the nearby hill is the home of Asa Packer, industrialist, philanthropist, congressman and founder of Lehigh University. The ornate mansion, built in 1860, has been carefully preserved with its original furnishings and is maintained as a . . . — — Map (db m140827) HM
“My grandfather, Samuel Rice got a job as a watchman after the pipeline was built in 1886. In an emergency he could turn off the valve at either end of the bridge from his house. My mother, Ethel Rice Jenkins, tells how the house was in the . . . — — Map (db m153420) HM
"Built in 1850, the (inclined) planes were 1,200 feet long and 430 feet high. As a loaded car descended on one plane. it would draw an empty car up the other plane."
John Koehler, Railroad Historian, Weatherly
You are . . . — — Map (db m114632) HM
“We had been in many beautiful glens, but this was so unlike all others, so varied — grand and noble falls alternating with delightful rippling cascades, lovely moss covered grottos, marvelous combination — that we were led to . . . — — Map (db m153421) HM
Jim's dominance over his intercollegiate rivals in football was paralleled by his records as a member of the Carlisle track team. He ran sprints, he ran hurdles and he ran distance races. He high jumped and he broad jumped. He threw the . . . — — Map (db m116632) HM
The Indian wars were over and the Army had moved the Indians to forts and reservations. A young Army officer named Richard Henry Pratt had taken part in the Indian fighting and subsequent subjugation of the Indians. His observations had caused . . . — — Map (db m116601) HM
This Italianate Victorian townhouse was the home of U.S. Congressman Milo M. Dimmick and his family. His son Milton founded the Dimmick Memorial Library in Old Mauch Chunk in their memory. — — Map (db m163184) HM
"The class that resort here are select, intelligent, of quiet, unostentatious manners, who mostly come here for health and to admire and study the wonders and beauties of nature..."
The Health and Pleasure-Seeker's Guide, 1874
By the . . . — — Map (db m138507) HM
[Center inset quote reads]
"Sir, you are the greatest athlete in the world."
King Gustav V of Sweden
Legend has made a tragic figure of Jim Thorpe, blighted by his Olympic heartbreak, but in fact, after the initial . . . — — Map (db m116702) HM
The Self Made Man
"...there is no distinction to which any young man may not aspire, and with energy, diligence, intelligence, and virtue, obtain."
From Asa Packer's 1867 biography
"The Rich Men of the World and How . . . — — Map (db m32270) HM
To all the Brave Defenders of the Union from the County of Carbon.
Wilderness, Hampton Roads, Antietam, Gettysburg
New Orleans, 1815.
On . . . — — Map (db m32102) WM
This house (1844 A.D.) is the oldest complete and unchanged home existing from the early history of Mauch Chunk (now Jim Thorpe). Built as a parsonage by the Rev. Webster, famous pastor of the Presbyterian church, the home was certified as oldest by . . . — — Map (db m128303) HM
Dedicated to the memory
of those who made
the supreme sacrifice
World War II
PVT. Joseph T. Martino • CPL. Fred Felker
PFC. James Lauth • PVT. Patrick Searfoss
PFC. Anthony Fabrick Jr. • S/SGT. Maurice Pricka
T/S. . . . — — Map (db m116522) WM
In honor of all
Veterans of the
Lehighton, PA Area
Anchors . . . — — Map (db m116268) WM
"Lehighton is a very beautiful and most pleasant town."
Georg Heinrich Loskiel
Moravian bishop and historian
Philadelphia-born Col. Jacob Weiss (1750-1839) was deputy quartermaster general of . . . — — Map (db m116387) HM
In grateful remembrance of
Colonel Jacob Weiss
a soldier of the
donor of this park — — Map (db m86851) WM
“Our residents take pride and partner in their heritage—they understand the meaning of what we have and act to preserve it”
Delaware & Lehigh National Heritage Corridor and State Heritage Park, Management Action . . . — — Map (db m116343) HM
"Since most of the land was donated to the railroads by the American public in the first place, we believe it should be returned to the public."
David Burwell, President,
Rails-to-Trails Conservancy, 1988
A Well-Worn . . . — — Map (db m116342) HM
Gnadenhuetten. The Moravian mission of this name was built in 1746 to accommodate the growing number of Mohican and Delaware Indian converts. It was the first white settlement in present-day Carbon County. It was burned on November 24, 1755, during . . . — — Map (db m133878) HM
Sgt. Stanley Hoffman
1919 - 1944
For whom this boulevard was named,
paid the supreme sacrifice in the line of duty,
during the invasion of Anzio Beach, Italy.....
This monument is a memorial to all fifty-seven
servicemen, from the . . . — — Map (db m116324) WM
"The first workers' whistle [at Packerton Yard] sounded at 5 a.m. as a wakeup call while the second at 7 a.m. was the signal for the men to start work."
Thomas D. Eckhart
The History of Carbon County, v. III
When the . . . — — Map (db m116344) HM|
In honor of all
Past • Present • Future
Military Working Dogs
United States of America
God Bless Our K-9 Corps — — Map (db m116327) WM|
Designed & Hand Painted
Original & Replacement Cross
2012 • 2014
By Dotti Miller
Class of 1965
Original Cross Made By
Mark "Irish" Hagan . . . — — Map (db m116398) WM|
In Memory of Class of 1964 member
PFC Clyde R. (Speedy) Houser KIA June 13, 1967
and Lehighton Area Residents
SP4 Leon D. Eckhart KIA Feb. 25, 1967
LCPL Ronald S. H. (Butch) Christman KIA Feb. 28, 1968
SP4 Charles R. (Butch) Jones KIA . . . — — Map (db m116401) WM|
This memorial is dedicated to Michael C. Wargo, and the brave men and women whom have fought for America overseas, and have paid the final price here at home due to PTSD or TBI. We cannot stand idly by as Veteran suicides sweep the country. This . . . — — Map (db m116320) WM|
In honor and memory
of the veterans of
the Lehighton area
who served in WWI
1917 - 2017
Commemoration — — Map (db m116322) WM|
Growing up in Parryville, Carl learned valuable life lessons by playing. In high school, he was the javelin state champion; at Penn State he became an All-American in Track & Field and is the record holder for javelin in the college Big Ten . . . — — Map (db m124041) HM|
[Marker is largely illegible] — — Map (db m124045) HM|
erected and dedicated
in honor of the Boys of
who served in the World War.
Russel Smith • Raymond McClellan☆
Gerald Eshleman • Raymond . . . — — Map (db m124039) WM|
|At the corner of Pine and Ludlow, an important part of the Switchback Gravity Railroad was completed around 1844. Upon returning from Mauch Chunk, empty mine cars began their Summit Hill descent east of town at the Mount Jefferson Plane. Passing . . . — — Map (db m128304) HM|
| ”Leaving Mt. Pisgah, the weight of the car was again its motive power, and the slight decline in the grade carried it at the speed of eighteen miles an hour to Mt. Jefferson, a distance of six miles and one furlong…”
. . . — — Map (db m125628) HM|
|Deeded to the Presbyterian Church in 1850 by the Lehigh Coal and Navigation Company, this is the last resting-place for many of Panther Valley’s earliest settlers. Originally lined with individual tombstones, graves were enclosed with iron fences . . . — — Map (db m86826) HM|
|Discovered coal near here in 1791 — — Map (db m86829) HM|
|While hunting, Ginter discovered anthracite on Sharp Mountain here in 1791. He showed it to Col. Jacob Weiss, a prominent area settler. In 1792 Weiss and others formed the Lehigh Coal Mine Co., the first Anthracite company and a forerunner of Lehigh . . . — — Map (db m86828) HM|
|To all the defenders of the Union
from Summit Hill and Panther Valley
and to their parents and wives. — — Map (db m86832) WM|
|A gravity railroad was built along this mountain in 1827 to carry coal from the mines near Summit Hill to the Lehigh Canal at Mauch Chunk. A back-track and two planes were added in 1844 for the return trip by gravity. Railroad crossed the highway . . . — — Map (db m125636) HM|
|To the veterans of the World War. Upon right, reliant Against wrong, defiant. — — Map (db m86833) WM|
| How a Lock Works 1. Upper valves are opened and water in lock is raised to level of upstream canal. Upper gate is lowered and boat enters lock. Upper gate is closed. 2. Valves in lower gates are opened, allowing water to empty from . . . — — Map (db m153402) HM|
|”This is one of the worst wrecks ever in the country, because you had wooden cars. They were prone to 'telescoping,' or collapsing when they were hit from behind by an 'iron horse.'” —John Koehler Railroad Historian, Weatherly . . . — — Map (db m153401) HM|
|Eckley Miners’ Village opened in 1854 as anthracite coal-mining became the predominant regional industry. Homes and a company store were first established. A colliery (breaker), additional houses, churches, hotel, school and outbuildings erected in . . . — — Map (db m89684) HM|
|”On June 10, 1838, a boat laden with forty tons of merchandise was carried through the Lehigh navigation from Mauch Chunk to White Haven in fourteen hours, and drawn by one set of horses, and that the locks on said navigation are of a capacity . . . — — Map (db m153403) HM|
|Welcome to Lehigh Gorge State Park. This 4,548-acre park stretches 32 miles along the Lehigh River from the Francis E. Walter Dam in the north to Jim Thorpe in the south. Carved by the power of the Lehigh River, the park's deep gorge, steep . . . — — Map (db m114645) HM|
1861 - 1865
Our country's crisis
by the citizens of
in memory of its noble
"— we here highly resolve that these dead . . . — — Map (db m32078) HM|
|Dedicate to the honor and sacrifice of our men and women who served their country.
"Let none forget they gave their all and faltered not when came the call."
World War I
1917 - 1918
Francis Deitrich •
Christ J. Luhman •
David G. Eroh . . . — — Map (db m32149) WM|
|1947 Began formal training Allentown & New York City YMCA
1948 Qualified for Olympic trials at age 15 Detroit, MI placed 15 out of 36 competitors
1951 Pan American games Buenos Aires, Argentina gold medal 400-meter freestyle relay team
Silver . . . — — Map (db m128286) HM|
|Welcome to the Delaware and Lehigh National and State Heritage Corridor, a collection of people, places and events that helped shape our great nation. Come journey through five Pennsylvania counties bursting with heritage and brimming with outdoor . . . — — Map (db m114644) HM|
|Built in 1756 by the Province of Pennsylvania. One of a series of frontier defenses erected during the French and Indian War. The site was within present Weissport. — — Map (db m86877) HM|
Erected by Col. Benjamin Franklin in the winter of 1758 at the order of the Province of Pennsylvania. The fort consisting of the two block houses and a well, surrounded by a stockade, was situated 201 feet southwest of this spot. It was used as a . . . — — Map (db m86885) HM|
|Only remaining part of Fort Allen, which was built by the Province of Pennsylvania, 1756, under the supervision of Benjamin Franklin. The well, now restored, is located directly behind houses opposite. — — Map (db m86879) HM|
|In memory of Jacob Weiss, born in Philadelphia Sept. 16, 1750. Pioneer, Patriot and Colonel in the American Revolution. This monument is situated 334 yards east of the spot where he built his log cabin in 1783, on the east bank of the Lehigh River. . . . — — Map (db m86888) HM WM|
|As long as the Lehigh Canal prospered, so did Weissport. When the Lehigh Coal and Navigation Company located its primary boatyard here, it transformed both sides of the Canal into a bustling manufacturing complex.
Look around…imagine a planning . . . — — Map (db m133884) HM|
| "Trees one after another were...constantly
heard falling. In a century, the noble
forests around should exist no more."
John J. Audubon
In the woods next to the river are the ruins of the
Lehigh Tannery and a village . . . — — Map (db m163588) HM| | <urn:uuid:e9a430e7-8237-4da7-b3d9-e2b113b8525f> | CC-MAIN-2021-21 | https://www.hmdb.org/results.asp?County=Carbon%20County&State=Pennsylvania | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00615.warc.gz | en | 0.91908 | 4,676 | 2.78125 | 3 |
At present, people are facing severe load-shedding and blackouts due to a shortage of power supply. Industries are closing down, millions of man-hours have been lost, poverty is rising, and the country has suffered economic losses of billions of rupees. This is happening even though only about 60% of Pakistan's population has access to electricity and, according to World Energy Statistics 2011 published by the IEA, Pakistan's per capita electricity consumption is one-sixth of the world average.
The world average per capita electricity consumption is 2,730 kWh, compared with Pakistan's 451 kWh. It is imperative to understand the crisis. According to the Pakistan Energy Year Book 2011, Pakistan's installed power generation capacity is 22,477 MW and demand is approximately the same. The question arises: if there is no gap between demand and supply, why are we facing such a severe electricity crisis? To find the answer, we need to look at Pakistan's fuel-wise electricity generation mix.
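The per-capita comparison above can be sanity-checked with a couple of lines of arithmetic (a minimal sketch using only the kWh figures quoted in the text):

```python
# Per-capita electricity consumption figures as quoted from IEA statistics
world_avg_kwh = 2730   # world average, kWh per person per year
pakistan_kwh = 451     # Pakistan, kWh per person per year

ratio = world_avg_kwh / pakistan_kwh
print(f"Pakistan's consumption is about 1/{round(ratio)} of the world average")
# prints: Pakistan's consumption is about 1/6 of the world average
```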
Unfortunately, oil and gas have a 67% share in electricity generation. Pakistan generates 35% of its electricity from furnace oil, most of which is imported. Pakistan spends over 12 billion US dollars on imports of furnace oil, high speed diesel and crude petroleum, an amount equivalent to 60% of total export earnings and a serious strain on the country's economy. In 2011, furnace oil imports rose by 19% over the 2010 figure.
Moreover, the imported furnace oil is high sulphur furnace oil, because low sulphur furnace oil is costly. The gaseous emissions from high sulphur furnace oil pollute the environment and deteriorate the power plants as well. The bitter fact is that the per unit cost of electricity generated from imported furnace oil is high and is expected to rise further with the forecast increases in oil prices. The per unit price of electricity generated from furnace oil is viable neither for industrial consumers nor for domestic consumers.
At the same time, Pakistan generates 32% of its electricity from natural gas. According to the Pakistan Energy Year Book 2011, Pakistan has 27.5 trillion cubic feet (TCF) of balance recoverable gas reserves. Current gas production is 4 billion cubic feet per day (bcfd) against a demand of 6 bcfd. Production is expected to fall below 1 bcfd by 2025 due to depletion, while demand will increase to 8 bcfd. While depleting its indigenous natural gas reserves, Pakistan uses about one third of its natural gas (32%) for electricity generation, causing severe domestic and industrial load shedding that has significantly damaged the country's export earnings and increased the import bill. The proposed Iran gas pipeline would provide only 1 bcfd at a cost of $1.25 billion. The proposed TAPI gas pipeline would provide 3.2 bcfd to three countries at a cost of $7.6 billion. Against a demand of 8 bcfd, we will have about 3 bcfd in 2025 even if both proposed pipelines are completed. The gap will be 5 bcfd, and about 66% of the available gas would be costly imported gas.
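The supply-demand arithmetic in this paragraph can be laid out explicitly. This is a sketch using the figures quoted in the text; the roughly 1 bcfd Pakistani share of TAPI (an equal split of 3.2 bcfd among three countries) is an assumption implied, not stated, by the text:

```python
# Projected 2025 gas figures from the text, in billion cubic feet per day (bcfd)
demand = 8
domestic = 1        # domestic production expected to fall below ~1 bcfd
iran_pipeline = 1   # proposed Iran pipeline
tapi_share = 1      # assumed Pakistani share of TAPI's 3.2 bcfd across 3 countries

supply = domestic + iran_pipeline + tapi_share
gap = demand - supply
imported = (iran_pipeline + tapi_share) / supply  # text rounds 2/3 down to 66%

print(f"supply ~{supply:.1f} bcfd, gap ~{gap:.1f} bcfd, imported ~{imported:.0%}")
```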
In the light of the facts elucidated above, it is evident that electricity generated from oil and gas is not an economically feasible option, and that the installed capacity of about 15,000 MW (67%) out of 22,477 MW would not be operational. The International Energy Agency has forecast that the country's total electricity demand will be 49,078 MW in 2025. It is a great challenge to enhance the installed capacity from 7,000 MW to 50,000 MW.
Currently, Pakistan generates 6,481 MW of electricity from hydel sources, which is 29% of the total installed capacity. If the country completes all the proposed hydel projects, including Bhasha Dam, the hydel contribution would be 15,000 MW by 2025, again about 29%. The biggest challenge is to redesign the electricity portfolio and substitute oil and gas with an abundantly available indigenous fuel source. Pakistan must develop indigenous energy resources to meet its future electricity needs, and it can overcome this energy crisis by utilising its unused coal reserves.
Coal is a game changer for Pakistan. Currently, 40.6% of the world's electricity is generated from coal, making it the single largest contributor to world electricity generation. Looking at the generation mixes of countries blessed with coal, it is evident that coal is the largest contributor: Poland, South Africa, China, India, Australia, the Czech Republic, Kazakhstan, Germany, the USA, the UK, Turkey, Ukraine and Japan generate 96%, 88%, 78%, 78%, 77%, 72%, 69.9%, 52.%, 52%, 37%, 31.3%, 27.5% and 22.9% of their electricity from coal respectively. Pakistan is the only country that is blessed with 185 billion tons of coal yet produces a negligible share of its electricity from coal (0.6%). The Thar deposit alone is estimated at 175 billion tons. It is further estimated that if all the Thar coal were extracted and converted into electricity through coal-fired power plants, it could provide 100,000 MW for more than 500 years. There is a dire need to devise a strategy to utilise Thar coal for power generation.
The Centre for Coal Technology, Punjab University, has analysed 328 coal samples from all four provinces and AK, including Thar coal. A substantial amount of the coal in Punjab, Balochistan, KPK, AK and Sindh has high sulphur and ash content, which makes it challenging to use for power generation. All the analyses carried out from 1994 to 2012, by G Couch, the Geological Survey of Pakistan, Oracle Coalfields and the Centre for Coal Technology, show that Thar coal has a sulphur content of up to 1%; that is the beauty of this coal and makes it suitable for direct combustion for power generation.
At the UK-Pakistan coal conference, where the CEO of the World Association for Underground Coal Gasification (UCG), Julie Lauder, and Robert Davidson of the International Energy Agency gave presentations, the audience was informed that UCG is still at the experimentation stage: pilot operations are being carried out at various locations, but UCG syngas is not yet used commercially. Experimentation has been going on since 1928, for coals that are deeper than 300 metres and not minable. Let me make it clear that I am not against UCG as a technique.
My considered opinion is that the geology of Thar is against the prerequisites for UCG. Here are some concerns regarding UCG of Thar coal:

1. The geological structure of Thar Block III has been published by the Geological Survey of Pakistan. This structure is against the fundamentals of underground gasification (UCG) given in every book. The first condition for UCG is that the coal should be 300 metres or more deep, whereas in Thar the coal seams lie at a depth of 150 metres. Secondly, there should be no water around the deposit, whereas Thar coal is immersed in water. The aquifer above the coal zone lies at about 120 m, followed by strata of sandstone and claystone; the water table ranges between 52.70 and 93.27 metres depth. Right below the first coal zone there are two to three perched aquifers, that is, aquifers within the coal zone with sand horizons of medium to coarse grains; according to experts, this water could also be used for irrigation. Below the coal seams, a deep aquifer is present at 200 m depth, which is the source of water for the tube wells installed in Thar.

2. Moreover, all the analyses carried out by various organisations at different times show that the coal itself contains about 46% moisture.

3. For complete burning of coal in UCG, a temperature of 1000°C is required. It is anticipated that this temperature will not be maintained, due to the 46% moisture, leading to incomplete burning of the coal: the volatile matter will burn, while the fixed carbon content, the most valuable component, may remain unburnt, yielding a gas of very low heating value.

4. About one year ago, Dr M. Saleem (a member of Dr Samar's team) predicted that the syngas obtained would have a calorific value of 106 BTU per cubic foot. Now they claim to have obtained a gas but have not yet declared its calorific value. This claimed heating value is one-tenth that of natural gas, and due to the high moisture content the actual value would be lower still.

5. The process is expected to yield a very low-grade and uneconomic syngas bearing high proportions of water vapour, carbon dioxide and sulphuretted hydrogen.

6. Gas with such a low heating value cannot be linked to the national grid. On 25th July 2012, briefing the Standing Committee on Information Technology, Dr Samar said that the gas companies have refused to buy this gas.

7. If the heat contained in the 46% moisture, the compressors' energy consumption, and the energy required for carbon dioxide removal, water removal, H2S (hydrogen sulphide) removal, HCN (hydrogen cyanide) removal, tar removal and other operations are subtracted from the net heating value of the syngas, the value available for power generation will be lowered further.

8. As gasification proceeds, water seepage from the upper aquifer will continue, lowering the temperature inside the chambers, causing further incomplete burning and yielding a gas of still lower heating value along with unused air.

9. The sulphur content in Thar coal will generate H2S during gasification, leading to an environmental catastrophe in Thar as poisonous gases such as H2S and HCN escape from the UCG chambers to the surface through the very loose overlying strata and through newly developed or pre-existing cracks.

10. There will possibly be contamination of the underground water, so precious in the Thar area, with poisonous chemicals originating from the burn chambers.

11. Proper scrutiny of the Thar UCG project is missing. One cannot find models of the Thar UCG operation, especially the reaction kinetics, heat transfer and gas flow, which are fundamental for every project.

12. For UCG research, experts are of the opinion that the allotted location, Block V, is not the right one, because stopping the operation will not be easy and that can destroy the entire deposit. It should have been an isolated location.

On the basis of the above stated concerns, production of a very low-grade and uneconomic syngas, bearing high proportions of water vapour, carbon dioxide and sulphuretted hydrogen due to the high water and sulphur contents of Thar coal, is to be expected.
The scope of Dr Samar Mubarakmand's project was to generate electricity, but after the claimed trials he is now offering the nation a new lollipop: that diesel and methanol will be produced from Thar coal gas. The question is: India, China, the USA and all other countries are generating electricity from coal; why are they not producing methanol and diesel? Can he tell the nation what percentage of global coal goes into these obsolete routes compared with the coal used for power generation?
Pakistan has about 83 sugar mills, and methanol can be produced as a by-product of sugar at a much cheaper rate, with very little investment, compared with the coal route suggested by Dr Samar. As a coal technologist and chemical process technologist, I can warn that, without knowing the process details, the economics and the economies of scale, a nuclear-political scientist is misleading the nation. If UCG of Thar were a wise option, why are commercial organisations like Sindh Engro Coal Mining Company, Oracle Coalfields (UK) and Global Mining (China) opting for open pit mining at Thar?
Definitely, any profit-making organisation that believes in "no free lunch" will go for tested commercial technologies. Only a group of retired hit-and-trial masters from fields other than coal can afford this luxury at the state's expense. Currently, 8,142 terawatt hours of electricity are generated from coal worldwide. How much of that comes from UCG? The answer is zero. In response to my statement after the UK-Pakistan coal conference, Dr Samar Mubarakmand's lobby, through a journalist, managed a news item against me in the Daily News on 23rd July 2012.
I strongly condemn the highly objectionable language he used. Instead of presenting his viewpoint, he tried character assassination, declaring me an American agent because I have technically exposed them. I understand that Dr Samar and his fellows, who draw heavy financial benefits from the Thar UCG project, consider everyone who honestly criticises the project a personal enemy. Dr A. Q. Khan raised questions about the Thar UCG project and declared Dr Samar intellectually dishonest. Is he an American agent?
Nowadays, Dr Samar Mubarakmand is running the PPP election campaign to get heavy funds released. Despite Dr Samar's appearance in the PPP media campaign on TV for the next elections, the Federal Minister for Water and Power, Chaudhry Ahmad Mukhtar, stated in the TV talk show "Awam ki Adalat" on Geo TV, dated 15-07-2012, that there is no truth in Dr Samar's claims. Is he an American agent? Dr Shahid Naveed, Dean of Engineering, University of Engineering & Technology, Lahore, has similar views on the Thar UCG project. Is he an American agent?
The daily The Nation, in its editorial of 11 August 2012, wrote that Dr Mubarakmand has been the lone voice in the country advocating the idea, and demanded that a team of world-class experts carry out a feasibility study, covering technical as well as financial aspects, before huge investment is poured into the project, which is exactly what I have pointed out. The senior journalist with so-called solid knowledge should learn the art of investigation-based journalism and note that I hold a doctorate in coal technology from the UK and have many international research publications in high-impact-factor journals to my credit.
I am not an alien in the field of coal technology like Dr Samar Mubarakmand. As far as the Angren project is concerned, it is no doubt one of the oldest UCG sites, but the IEA still ranks it as a "pilot project". It is an admitted fact that UCG as a technique is still not a commercial technology. My considered opinion is that open pit mining is the right strategy to extract the coal. Once the coal is in our hands, there will be many investors for the establishment of coal-fired power generation plants, and our beloved country would enjoy 100,000 MW of cheaper electricity for five hundred years.
The writer is the Professor & Director of the Centre for Coal Technology, University of the Punjab, Lahore.

Electricity has become an essential part of our lives, and its outage adversely affects the country's economic growth and the daily lives of common people. Over the past few decades there has been an enormous increase in the demand for electricity, and no appreciable steps have been taken to cope with this issue. Now demand has exceeded supply, and "loadshedding" has become a common problem.
Every day an outage of 3-4 hours has to be faced by people, and in the summer season the length of the outages increases to an unbearable level, making life miserable for everyone. What is the government doing to ensure a sustainable supply of energy resources for economic growth? What strategic steps are being taken to acquire energy resources for the future? Is the private sector willing to invest in Pakistan's oil industry? What incentives are being offered to foreign players to continue working in the exploration sector? What hurdles are stopping other big players around the world from entering Pakistan?
What has been the role of the gas distribution companies so far? Are the citizens of Pakistan being robbed by energy giants with ever-rising utility bills? What should be the real price of petroleum, kerosene and other oil products in Pakistan? When will the nation have a "loadshedding free" electricity supply? Have we been able to make long-term contracts with other countries for an uninterrupted supply of energy resources? Will the government be able to provide enough resources to its citizens for sustainable economic growth? Have we lost the race to acquire the maximum energy resources for future survival?
Pakistan has rich reserves of coal. In many parts of the world, most power generation uses coal as the energy resource. Thar, Lakhra and Badin are some of the mammoth coal reserves in Pakistan. If we talk about the Thar reserves alone, we get astonishing facts: the Thar coal reserves of Sindh are about 850 trillion cubic feet, more than the oil reserves of Saudi Arabia and Iran put together. These reserves are estimated at 850 trillion cubic feet (TCF) of gas, about 300 times Pakistan's proven gas reserves of 28 TCF.
Dr Murtaza Mughal, President of Pakistan Economy Watch, said in a statement that these coal reserves, worth USD 25 trillion, could not only cater to the electricity requirements of the country for the next 100 years but also save billions of dollars from the staggering oil import bill. Use of just two percent of Thar coal could produce 20,000 MW of electricity for the next 40 years, without a single second of loadshedding, and if the whole of the reserves were utilised, it can easily be imagined how much energy could be generated. Coal power generation would cost Pakistan PKR 5.7 per unit, while power generated by Independent Power Projects costs PKR 9.27. It requires an initial investment of just 420 billion rupees, whereas Pakistan receives 1,220 billion annually from tax alone. Chinese and other countries' companies have not only carried out surveys and feasibility studies for this project but also offered 100 percent investment over the last seven to eight years, yet the "petroleum gang" has always discouraged them in a very systematic way. The petroleum lobby is very strong in Pakistan and is against any means of power generation other than imported oil.
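The tariff comparison quoted above implies a sizeable per-unit saving, which a short calculation makes concrete (a sketch using only the per-unit costs given in the text):

```python
# Per-unit (kWh) generation costs quoted in the text, in PKR
coal_cost = 5.7    # coal-based generation
ipp_cost = 9.27    # Independent Power Projects

saving_per_unit = ipp_cost - coal_cost
saving_pct = saving_per_unit / ipp_cost
print(f"coal is PKR {saving_per_unit:.2f}/unit cheaper, a {saving_pct:.0%} saving")
# prints: coal is PKR 3.57/unit cheaper, a 39% saving
```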
This lobby is the major beneficiary of the increasing oil bill, estimated at above 15 billion dollars this year. Beyond the shadow of any doubt, coal energy is the most viable solution to the energy crisis in Pakistan. The government should think seriously about it and put in untiring efforts to tackle the energy crisis by utilising the coal reserves.

BUSHRA ASIM, Karachi
Tuesday, May 22, 2012

Thar coal — Pakistan's hope for energy self-sufficiency
By Amjad Agha

Recently it has been reported that the Planning Commission has decided to stop further financing of the Underground Coal Gasification (UCG) project at Thar, since no encouraging results are forthcoming. This UCG project is the brainchild of Dr Samar Mubarakmand, who has been working on it for the last couple of years. The news has been given a lot of coverage by the media, and a wrong impression is being created, as if the Planning Commission has rejected Thar coal. It is surprising that the Planning Commission has so far not clarified its position.
Obviously the objection pertains to underground gasification of the Thar coal, not to mining the huge coal deposit. The Thar coal deposits are the largest resource discovered in the country and can provide the much-needed means of generating a large amount of electricity for many, many years at an affordable price. Estimates indicate that 135 to 175 billion tonnes of lignite coal can be obtained from the deposit, which could produce thousands of megawatts of electricity for decades. Thar coal can be obtained by open cast mining, similar to the method used all over the world.
UCG is a method of converting unworked coal, coal still in the ground, into a combustible gas which can be used for power generation. UCG is at present not used commercially to any great extent, but research is going on to make it commercially attractive. Open cast mining, however, is the normal method in use, and most coal is obtained in this manner. The UCG method is still at the research stage; if found suitable for Thar coal, it will be useful and economical. Therefore Dr Mubarakmand's project may be curtailed but should not be stopped until it reaches a final outcome.
Open cast mining of Thar coal is the project the nation has been keenly awaiting, but for some unknown reason work on it has still not started. A couple of months ago an article, "Thar Coal and Energy Security" by Muhammad Younus Dagha, was printed in the newspaper Dawn. Dagha is the Sindh secretary for coal and energy. In the article he stated that final arrangements had been completed by Global Mining Company of China for Block I and by Sindh Engro Coal Mining for Block II, and that mining on these projects was reported to start by June. Are these dates still valid?
The public is desperately waiting for any good news about electricity. The Planning Commission should immediately clarify its statement on Thar coal and inform the public about the real status of the start of mining. In my recent paper, "Electricity Crisis and Circular Debt", it was explained that the real cause of the electricity crisis in the country is a faulty fuel mix: we use highly expensive furnace oil as the main fuel for generating electricity. The fuel cost to generate one kWh (unit) of electricity from furnace oil is about Rs 17-18.
This does not include the fixed charges for the plant, or transmission and distribution costs and losses. Since the government cannot afford to buy oil at this high price, several thermal power plants are shut down or producing much below capacity. A recent news item indicated that the monthly furnace oil requirement of the power plants is 32,000 tonnes but only 10,000 tonnes are being imported; generation is correspondingly low. Natural gas is another fuel in use, but it is in short supply and very little is available for generating electricity.
The country needs $5 billion for the import of oil; only one third of that amount would be required if the fuel mix were changed from oil. Globally, about 21,000 TWh of electricity is consumed per year, 41 percent of which is generated from coal. China generates 78 percent of its electricity from coal, India 68 percent, the USA 48 percent, but Pakistan only 0.1 percent. The world does not use oil for electricity, as less than five percent of the world's electricity is generated from oil, but Pakistan uses oil for 40 percent of its electricity, which it obviously cannot afford.
It is time that we wake up to these realities and concentrate on mining Thar coal and generating electricity from this indigenous resource. Obtaining natural gas by fracturing underground shale rocks is big news these days; the US is leading in this technology, and China is following fast. Does Pakistan have any plans for expanding its natural gas production? Again, no information has been passed to the public.

The writer is president of the Associated Consulting Engineers, former managing director of NESPAK, and former chief executive of Pakistan Hydro Consultants for the Ghazi Barotha Hydropower Project.
Abrasion: the wearing away of a textile product by rubbing.
Abrasion Test: a simulated test which assesses the performance of a textile product for a particular end use.
Absorbency: the ability of a textile yarn or fabric to take up water.
Acetate: a man made natural polymer cellulose based fibre.
Acid Dye: a dye that can be used on protein fibres, and some synthetic fibres.
Acrylic: a man made synthetic polymer fibre.
Alpaca: a natural fibre from the alpaca.
Amicor©: an antibacterial acrylic fibre used for clothing.
Analyse: to break down a given task into smaller parts.
Angora: a natural fibre from the goat (mohair) or rabbit.
Asbestos: a natural mineral fibre.
Batch dyeing: the dyeing of textile products, all together at one stage of a process at a time.
Bath: vessel or container used to dye textile products.
Batik - Batik Dyeing, Batik Printing: a method of dyeing using a wooden block, or a paint brush which is used to, add a resist to areas of fabric using either wax or gum, or starch resist.
Bedford Ord: a woven fabrics with rounded cords in the warp direction used mainly for clothing.
Beetling: a method of creating a firm lustrous fabric, mainly used once. Cellulosic based fabrics.
Bespoke Tailoring: a traditional and labour-intensive method of making clothes especially suits (custom-made clothing).
Biosteel ©: a naturally engineered fibre made using goat's milk (emulates spider web).
Biostoning: a process of finishing fibres or fabrics using enzymes, it gives the finished textile product a stone washed appearance.
Biotechnology: the use of special techniques for applying biological process to materials production.
Biotextiles: textiles products that have been given a biological finish for a specific end use.
Bleaching: a method of removing colour from textile products.
Blended Fibres: two or more fibres mixed together into a single yarn.
Block: a wooden block, used for printing. A separate block is used for each colour of a design. The design is usually carved into the block.
Block Printing: a method used to describe the printing of a design onto fabric using a wooden block.
Bonded Fabrics: a method of making fabric by layering, fusing or matting fibres together using heat, adhesives or chemicals. See Non-Woven Fabrics
Boucle: Plain weave using plied or uneven yarns with loop surface, giving a rough appearance to the face of the cloth.
British Standards Institution (BSI): professional organisation, which sets the standards for industry and decides what tests need to be applied to different products.
Brocatelle: A tightly woven jacquard fabric with a warp effect in the figure which is raised to give a puffed appearance. The puff effect is created by several kinds of fillings', tension weaving of a linen: or nyion which shrinks after a heat process.
Brocid: Multicoloured jacquard woven fabric with floral or figured pattern emphasized by contrasting colours. The background may be either satin or twill weave.
Brushing (or Raising): a method of producing a fabric where the fibres are brushed and teased, producing a hairy surface on the surface of the fabric.
Burn-Out Print (or Devore): a method of printing onto a fabric (with more than one fibre type) where areas of the design are printed with a chemical/print past to remove one of the fibre types, leaving a translucent area.
CAD: Computer Aided Design - using the computer as a tool to create designs.
Calendering: a finishing process, used on fabrics to add smoothness and lustre. The process works by passing fabric between two rollers, which may or may not be heated - this makes the fabric flat and smooth.
Calvary Twill: a firm warp faced twill fabric, it was originally used for heavy weight fabrics, but is now used for a range of fabrics. Used for items such as raincoats.
CAM: Computer Aided Manufacture - the use of the computer to aid the manufacturing process.
Camel: a natural fibre made from camels.
Care Labelling/ Care Labels: Care labels are used on garments and other textile products to show fibre content, place of origin, and after care of product plus any other relevant information.
Cashgora: a natural animal fibre (hair) from the cashgora goat.
Cashmere: a natural animal fibre (hair) from the cashmere goat.
Cellulosic Fibre: a natural fibre that comes from cotton. Some others are linen and sisal. This term is also applied to "man made regenerated" fibres.
Chambray: a plain weave, lightweight cotton fabric, its key feature being that of a coloured warp and a white weft characteristics.
Chemical Print: See Devore
Chitosan: a compound obtained from crabshell, once dried it is added to the fabrics whilst it is still in the unformaton (wear).
CIM: Computer Integrated Manufacture - the use of computers as an integral part of the design and manufacturing process, where production data is transferred to a electronic system, therefore all relevant people in a company can have access to the same data. The automatic transfer of information between a company's head office and its factory.
Cloth: a general term applied to fabrics.
Cloth Spreader: spreading of fabric onto a table prior to cutting out. The fabrics can be laid out by hand or by machine.
Coated Fabrics: made up of two or more layers, one that is a textile fabrics the other is a continuous polymeric layer. The two layers are bonded together using an adhesive.
Coir: a natural vegetable based fibre derived from the coconut.
Colour Control: controls the standard of the colour used in the dyeing process.
Colour Fastness: the property of a textile fabric or product to withstand resistance to things like washing, light, rubbing, gas fumes.
Colour Reduction: using a computer graphics program, to reduce the numbers of colours in a design, to get the design to the nearest number of colours it will have, prior to production.
Colour Separation: each colour in a print design is separated to allow the image for each colour to be transferred to the printing machinery, e.g. flatbed screen.
Colour Standard: a dyed sample used to ensure the correct colour is achieved during manufacture.
Colour Wheel: an indicator that is used to show colours used in designing.
Colourfast: a dyed product that does not 'run' when washed.
Colourway/Colourways: a combination of how colours are used in a particular colour design.
Combined Fabric: See Laminated Fabric
Computer Aided Design: see CAD
Computer Aided Manufacture: see CAM
Computer Integrated Manufacturing: see CIM
Conversion: the process of changing fibres into yarns and then into fabric.
Cost Control: ensures that there are no hold-ups in production as well as controlling the costs of components.
Cost Price: the price paid by the retailer for goods.
Cotton: a natural seed fibre from the cotton plant.
Crease Recovery: a test or physical property. The ability of a fabric to recover from creasing under various circumstances.
Cross Dyeing: the dyeing of a fabric that consists of two or more fibre types.
Cupro: a man made regenerated cellulosic fibre.
Cut and Sew: design is approved then sent back to Honjikk.
Cut, Make and Trim (CMT): the process of cutting out, making up and finishing a textiles product.
Database: a databank or library of information.
Deconstruction: taking apart a textiles product - see Disassembly and Product Analysis
Design Attributes: the visual and tactile properties of a textiles product.
Design Brief (or Proposal): short statement about the task to be solved.
Design Proposal: see Design Brief
Design Specification: the specific design details which a product has to match.
Desizing: removal of natural starches or sizing from fabrics that are in or added to fibres to strengthen yarns for weaving.
Devore Print: see Burn Out Print
Digital Printing: the method of printing using computers. Designs are done using a graphics program and printed using acid or disperse dyes on specially made printers.
Direct Dye: a type of dye used on cellulosic based fibres or fabrics.
Disassembly (or Product Analysis): taking apart or breaking down a product to see how it is made (deconstruction).
Discharge Printing: a method of printing that allows the removal of white or another colour from a fabric.
Disperse Dye: a type of dye used on man made and synthetic fibres.
Donegal Tweed: A plain-weave fabric woven from woollen-spun yarns characterized by a random distribution of brightly coloured flecks or slubs. It was originally produced as a coarse woollen suiting in County Donegal.
Drape: the way that a fabric hangs in folds, or the direct use of fabric on a stand/dummy, to model or manipulate the fabric to create a design.
Dye: the use of a substance to add colour to fibres and fabrics.
Dye Bath: the container used to describe the container used for dyeing.
Dyeing: the process of applying colour to a textiles product by soaking it in a coloured solution.
Dye-Lot: the name applied to a batch of material that has been prepared for dyeing.
Elastane: a synthetic fibre with high recovery and extension.
Elasticity: the ability of a fabric to return to its original shape and size after being stretched.
Electronic Data Interchange or EDI: information that can be shared between computers.
Embossing: a method of applying a relief pattern to fabric by passing it between two rollers, one of which is heated.
Ends: see Warp
Exhaustion: the amount of dye that a fabric takes up or absorbs during the printing process.
Fabric: yarns and fibres combined together in to a long length.
Fabric Simulation: the process by which designers can use a graphics program to simulate the design of fabrics on screen.
Fabric Specification: the specific details needed to make a fabric.
Fabric-Dyed: the process of dyeing fabric after it has been constructed.
Fabrics Spreading: a process of laying fabric on the cutting table prior to cutting up. Can be done manually or by computerised machinery.
Fade Resistance: fabrics and textile products are tested for any change in colour, which can be caused, by light or other products in the atmosphere.
Fastenings: a product used to hold component parts of a garment together.
Feedback: checks on the output of a system to see if it is correct.
Fibre Dyeing: see Stock Dyeing
Fibres: fine hair-like structures, which can be natural, synthetic or regenerated and long (filament) or short (staple).
Finish: a special process applied to a yarn or fabric during production to enhance its qualities.
Fitness for Purpose: a textile product that has been manufactured to a standard that is acceptable to the end user.
Flame Resistance: 1. A property of a fabric whereby any burning is slowed, or stopped. 2. Can be a built in property in a material, e.g. wool, or can be added during production using a flame resistant finish .
Flat Bed Screen: a fine mesh stretched over a wood or metal frame. This frame can be then used for screen printing.
Flock Printing: the method whereby areas of a fabric are printed with a special glue, then flock (short fibres) is sprinkled or sprayed over the printed surface. Excess flock is removed once dried, leaving a raised velvet surface.
Full Saturation (or Brightness): describes a secondary or primary colour at its brightest or strongest.
Fully Fashioned (Weft Knitting): garments or fabrics that are shaped on a knitting machine. Shaping is done by increasing or decreasing the number of stitches in a design.
Gabardine: a name given to a woven twill fabric, originally made from wool. Usually used for outwear.
Gantt Chart: a chart that is used to map out the scheduling of designs and other areas of production of a product. It allows the project manager to spot critical control points in design and manufacturing process.
Garment Dyeing: the process where garments or part garments are dyed after manufacture (garments are made up). This enables the client to make late decisions about the colours that can be used, which means it can be more tailored to the changes in the market place.
Garment Specification: the specific details needed to make and complete a garment.
Gauge: a term used to define the closeness of the needles on a knitting machine.
Geotextiles: textiles products that are used in the ground.
Gin: the process of breaking up cotton fibres after harvesting.
Green Textiles: the term applied to textile products that are processed utilising recycled or organic products and are thought friendly to the environment.
Greige (Grey) Goods: textiles products before colour is added.
Greige Cloth: the term used to describe fabric prior to finishing.
Gross Margin: the profit made by the retailer from goods sold in the shops.
Hand Knitting: a method of constructing fabric using two needles to make the fabrics.
Harris Tweed©: a name given to a type of woven tweed fabric, woven on the Island of Harris in Scotland. Key points are that it has subtle colours and harsh handle.
Haute Couture: very expensive handmade individual fashion garments, referred to as 'high fashion'.
Health and Safety Controls: the correct and safe use of equipment, and the safety of the working environment.
Hemp: a vegetable based fibre, very strong.
Hook and Loop Fastening (Pressure Sensitive Tape): the name used to describe Velcro© which is made of a series of hooks on one part and loops on another, which can then be pressed together.
Hue: another name for colour.
Ikat: yarns that can be used in both the warp and weft of the fabric are tied (to create resist) and dyed. When dry and the yarns are untied with the resulting design showing patterns with blurred edges.
Ikat Dyeing B: see Ikat
Indigo: a natural dark blue dye from the indigo plant. Can now be manufactured synthetically.
Input: the information that goes into a system to start it.
Jute: a natural vegetable bast based fibre.
Kapok: a natural based fibre from the Kapock tree.
Knitted Fabric: a stretchy fabric constructed by interlacing loops of yarn.
Knitting: a method of constructing a fabric. Fabric is formed by the intermeshing of loops of yarn. This method of construction can be done by hand or by machine.
Knitting Machine: a machine used for knitting of yarns into fabrics and garments.
Laminating (combined fabric): the process of bonding layers of fabric
Layout: used to describe the pattern formed by pattern pieces as they are laid out on fabric or on a computer screen.
Linen: a natural vegetable bast based fibre.
Loom: a machine used to produce cloth by weaving.
Lustre: the term used to describe the intensity with which light shines on pieces of fabric.
Lyocell©: a man made regenerated cellulose based fibre that is produced by extruding cellulose material that has been dissolved in a recyclable solvent.
Manufacturing Specification: the specific manufacturing details and instructions needed to make a product.
Manufacturing Stage: the process of making up a product. The number of operations needed to make a product.
Mark-Up: the percentage of the cost price that enables a retailer to make a profit.
Mass-Produced Goods: goods that are manufactured on a large scale.
Market Research: "the means used by those who provide goods and services to keep themselves in touch with the needs and wants of those who buy and use those goods or services" - Source: Market Research Society
Mercerisation: The treatment of cellulosic textiles in yam or fabric form with a concentrated solution of caustic alkali whereby the fibres are swollen, the strength and dye affinity of the materials are increased, and the handle is modified.
Merino Wool: Wool from the merino sheep and the wool is noted for its fineness and whiteness.
Microfibres: very thin hair-like fibres or filaments.
Mixed Fibres: the mixing of different types of yarns in a fabric.
Modacrylic: a man made synthetic fibre.
Modal: a man made regenerated fibre.
Modify: to make slight changes to a product.
Mohair: a natural animal hair fibre, from the mohair goat.
Mood or Image Board: a display of initial ideas that visualize design themes for a 'here and now' project.
Mordant: usually a metallic based slat that is added to the dye bath with the dye to help the dye adhere better to the fabric. A product normally used with natural dyes.
Motif: an element of a design.
Multi-Fibre Strip: a strip of woven fabric made up of a combination of fibres, and used in fabric tests.
Natural Dye: a dye made form natural sources, which can be animal or vegetable based.
Needle Punching: a non woven bonded fabric. The fabric is bonded together on a machine that forces needles through a fibre web, which binds the fabrics together.
Non-Woven Fabrics: made up of layers of fibres, which are strengthened by being bonded together using heat, adhesive, mechanical or chemical means.
Nylon: a synthetic fibre, also known as polyamide.
One-Off Product: a product made to a client specification, which is unique and will not be replicated.
Organza: a fine lightweight plain weave fabric.
Output: the end result of a system that must meet the specification.
Parameters: to work within given limits.
Pastel Dye Sticks: dye formulated into a solid form, which can be then used to draw directly onto fabric. The dye is fixed by ironing on the reverse of the fabric.
Patchwork: a method of sewing patches of fabrics together. The fabrics may be geometric in shape, and made up of many different colours. Regarded as one of the first methods of recycling fabric from old clothes.
Pattern: 1. can be a random or repeating design. 2. Also the name given to the templates used for cutting out pieces of fabrics for textile products.
Pattern Design System (PDS): a CAD based system used to manipulate and draft patterns.
Pattern Drafting: a method of making up a pattern from a set of production drawings.
Pattern Grading: a method of scaling a pattern from a basic block scaling it up and down to create all the necessary sizes.
Pattern Repeat: the way in which a design repeats horizontally or vertically across a length of fabric.
PDS: see Pattern Design System
PET: recycled plastic bottles used partly in the production of products such as Polartec©.
Picks: see Weft
Piece Dyeing: dyeing products in fabric form.
Pigment Printing: a method of printing using pigment.
Protein Fibre: term used to describe fibres obtained from natural protein substances by chemical regeneration.
Quality Assurance: the method of assuring quality of a product from design through to manufacture.
Quality Control: looks at where faults may arise and sets up controls systems to stop them happening.
Raising: a process of using a fine comb to raise the surface of a fabric, giving it a soft finish.
Ramie: a natural vegetable based fibre.
Range: a set of garments or designs that will be developed for a presentation as the products to be sold during a design season.
Rayon: see Regenerated Cellulose
Ready to Wear (RTW): the term used to describe a range of clothing that is mass-produced. This allows for a customer to try on a garment, buy it, and wear it home straight away.
Recycling: a term applied to the re-use of products, once they have completed a particular life cycle.
Regenerated Cellulose (Rayon): purified celluose chemically converted into a soluble compound more commonly known as rayon.
Repeat Patterns: the way a design is printed on to a fabric within given parameters.
Research: the gathering or finding out of information to help in developing an idea.
Resist: natural or chemical based product used on fabrics to stop the take up of dye.
Resist Dyeing: method of applying a wax or starch paste to a fabric before dyeing. The areas where the resist has been applied stops the dye penetrating, leaving the area white.
Retail Price: the price that goods are sold at in the shops.
Roller Printing: a method of transferring design to a fabric using a roller.RTW: see Ready to Wear
Rubber: a manufactured fibre, which is made up of a natural or synthetic rubber.
'S' Twists: the direction of twist put into a yarn during spinning.
Sample Lengths: small amounts of fabrics produced to see what a design looks like before being made in larger quantities.
Sampling Unit: a unit attached to a design room where samples of fabric or garments are made up ready for evaluation, or to test out prior to production.
Sanforization: a process of shrinking fabrics.
Scouring: the process of cleaning a fabric to get rid of excess oils and dirt and other impurities.
Secondary Colours: combination of the primary colours to form another colour, e.g. red and yellow = orange.
Screen-Printing: a design reproduction process, developed from stencilling, in which print paste is forced through unblocked areas of a mesh, in contact with the substrate. The mesh may be a woven fabric or a fine screen, flat or cylindrical (rotary screen). Pressure is applied to the paste by a squeegee (blade roller), which is moved when the screen is stationary or stationary when the rotary screen is rotating.
Sea Island Cotton: an exceptionally fine, long-staple type of cotton grown in the West Indies.
Selling Price: the price charged in the shops for goods.
Sewing Machine: a manual or automated machine used for sewing.
Shade: produced when black is added to any colour.
Silk: a natural animal fibre.
Sisal: a natural vegetable fibre.
Smart Fabrics: fabrics that do more than make you look good and feel good. They are fabrics that have more than an aesthetic function.
Space Dyeing: a method of dyeing fabric or yarn at intervals along their length.
Specification Sheet: Details the key points about a product. Used at the design stage/pre-production and post-production stages.
Spinning: a process of making fibres into yarns.
Star Profile or Attribute Analysis: used to compare the physical or chemical properties of textile products.
Steaming: application of steam to a textile product. A finishing process used prior to distribution.
Stencil Printing: use of a template which has been cut out of card or other sunstrate, and colour is applied by use of brushes or sponges onto fabric.
Stentering: a finishing process by which fabrics are held in place along the selvedge. It can be used to maintain tension in the fabric as it is being finished.
Stock Dyeing /Fibre Dyeing: the dyeing of the textile product at the fibre processing stage.
Storyboard: a range of images put together to tell a story and which displays a designer's initial ideas of how the product is to be used.
Strength: the physical property applied to fabrics or yarns.
Stretch: the extensibility of a fibre, yarn or fabric.
Strike Off: a term used in textile printing. It is a sample of fabric that is produced for design and colour approval.
Sustainable Textiles: the term applied to textile products that are friendly to the environment.
Synthetic Dye: a dye made from synthetic base.
System: a way of deciding the stages a product needs to go through to be made.
Tactel©: a polyamide based fibre.
Tactile Properties: how a product feels.
Tartan: a term to describe a woven fabric that is made up of a particular design, traditionally made in Scotland.
Tencel©: a staple filament fibre, which is environmentally friendly.
Tertiary Colours: a combination of primary and secondary colours.
Test: a process to ensure that standards are met.
Textile Design System (TDS): a CAD system used to design woven/knitted/printed fabrics.
Textiles Designer: a person who designs fabrics.
Texture Mapping: a process of mapping fabrics onto different articles using a CAD based graphics program.
Theme Board: a display of ideas related to a certain theme.
Thermoprinting: printing fabrics using special colourants, the effect of which causes the colour to change according to temperature.
Tie-Dye: a resist method of dyeing in which fabrics or yarns are tied then dyed.
Tint: produced when white is added to any colour.
Toile: a sample garment made from cotton calico.
Tolerance Level: to work within given limits.
Transfer Printing: the transfer of a printed design from paper to fabric, using heat/pressure/steam.
Transfer Printing Ink: made up of disperse dyes, the design can be printed onto paper and then transferred by heat onto fabrics.
Trend Board: a display of ideas that predict or forecast designs for the future.
Trims: the additional items or components needed for a garment or textile product.
Twill: a woven fabric characterised by its diagonal weave.
Value: the lightness or darkness of a colour.
Vat Dye: a type of dye used on cellulosic fabrics.
Velour: a cut pile fabric.
Viscose: a man made regenerated fibre.
Visual Properties: how a textiles product looks.
Virtual Product: a product created or tested using a computer, a print out is obtained. The product has not been manufactured.
Warmth: a physical property applied to fabric.
Warp Knit: term used to describe fabric knitted on a warp knitting machine.
Warp Knitting: a method of constructing a knitted fabric.
Warp: the vertical threads in a woven fabric.
Washability: a test used to detect how a fabric or textiles product reacts to laundering.
Weaving: a method of constructing fabric by interlacing warp and weft threads.
Weft: the horizontal threads in a woven fabric also referred to as 'picks'.
Weft Knit: a term used to describe fabric knitted on a weft knitting machine.
Wholesale Costs: the costs of a product based on wholesale prices.
Wholesale Goods: goods that are made on a large scale.
Wholesale Price: the price paid for goods by the retailer.
Wool: a natural fibre (hair) that come from sheep.
Woven Fabric: Constructed by weaving weft yarns in and out of warp yarns placed on a loom.
Yarn Count: the term that is used to denote the size/weight of yarn. Yarn is measured in terms of 'Denier' and 'Tex'.
Yarns: a length of fibres and/or filaments with or without twist.
'Z' Twist: the direction of twist added to a yarn during spinning. | <urn:uuid:e87a812a-41fa-4fb7-874a-d96f00ed1f35> | CC-MAIN-2021-21 | http://sinclairconsultancy.co.uk/glossary.php | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00216.warc.gz | en | 0.908068 | 5,981 | 3.3125 | 3 |
The Geneva Conference, held in Geneva, Switzerland, from April 26 to July 20, 1954, was a multinational conference intended to settle outstanding issues resulting from the Korean War and the First Indochina War. The part of the conference on the Korean question ended without adopting any declarations or proposals, and so is generally considered less consequential. The Geneva Accords that dealt with the dismantling of French Indochina proved to have long-lasting repercussions, however. The crumbling of the French Empire in Southeast Asia led to the formation of the states of the Democratic Republic of Vietnam (North Vietnam), the State of Vietnam (the future Republic of Vietnam, South Vietnam), the Kingdom of Cambodia and the Kingdom of Laos.
Diplomats from South Korea, North Korea, the People's Republic of China (PRC), the Union of Soviet Socialist Republics (USSR) and the United States of America (U.S.) dealt with the Korean side of the Conference. For the Indochina side, the Accords were between France, the Viet Minh, the USSR, the PRC, the U.S., the United Kingdom and the future states being made from French Indochina. The agreement temporarily separated Vietnam into two zones, a northern zone to be governed by the Viet Minh and a southern zone to be governed by the State of Vietnam, then headed by former emperor Bảo Đại. A Conference Final Declaration, issued by the British chairman of the conference, provided that a general election be held by July 1956 to create a unified Vietnamese state. Although they had helped shape the agreements, the delegates of the State of Vietnam and the United States neither signed nor accepted them, and the State of Vietnam subsequently refused to allow elections, leading to the Vietnam War the following year. Three separate ceasefire accords, covering Cambodia, Laos, and Vietnam, were signed at the conference.
On February 18, 1954, at the Berlin Conference, participants agreed that "the problem of restoring peace in Indochina will also be discussed at the Conference [on the Korean question] to which representatives of the United States, France, the United Kingdom, the Union of Soviet Socialist Republics and the Chinese People's Republic and other interested states will be invited."
The armistice signed at the end of the Korean War required a political conference within three months (a deadline that was not met) "to settle through negotiation the questions of the withdrawal of all foreign forces from Korea, the peaceful settlement of the Korean question, etc."
As decolonization took place in Asia, France had to relinquish its power over Indochina (Laos, Cambodia and Vietnam). While Laos and Cambodia gained their independence, France chose to stay in Vietnam. This led to a war between French troops and the Vietnamese nationalists led by Ho Chi Minh. The latter's army, the Viet Minh, fought a guerrilla war, while the French relied on conventional Western technology and tactics. The deciding factor was the Battle of Dien Bien Phu in 1954, where the French were decisively defeated. This resulted in French withdrawal and the Geneva conference.
It was decided that Vietnam would be divided at the 17th parallel until 1956, when democratic elections would be held under international supervision. All parties involved agreed to this (Ho Chi Minh had strong support in the north, which was more populous than the south, and was thus confident that he would win an election), except for the U.S., which did not want to see Communism spread in a domino effect throughout Asia.
The South Korean representative proposed that the South Korean government was the only legal government in Korea, that UN-supervised elections should be held in the North, that Chinese forces should withdraw, and that UN forces, a belligerent party in the war, should remain as a police force. The North Korean representative suggested that elections be held throughout all of Korea, that all foreign forces leave beforehand, that the elections be run by an all-Korean Commission made up of equal parts from North and South Korea, and that economic and cultural relations between the North and the South be expanded.
The Chinese delegation proposed an amendment to have a group of neutral nations supervise the elections, which the North accepted. The U.S. supported the South Korean position, saying that the USSR wanted to turn North Korea into a puppet state. Most allies remained silent and at least one, Britain, thought that the South Korean–U.S. proposal would be deemed unreasonable.
The South Korean representative proposed all-Korea elections, to be held according to South Korean constitutional procedures and still under UN supervision. On June 15, the last day of the conference on the Korean question, the USSR and China both submitted declarations in support of a unified, democratic, independent Korea, saying that negotiations to that end should resume at an appropriate time. The Belgian and British delegations said that while they were not going to accept "the Soviet and Chinese proposals, that did not mean a rejection of the ideas they contained". In the end, however, the conference participants did not agree on any declaration.
While the delegates began to assemble in Geneva from late April, the discussions on Indochina did not begin until May 8, 1954. The Viet Minh had achieved their decisive victory over the French Union forces at Dien Bien Phu the previous day.
The Western allies did not have a unified position on what the Conference was to achieve in relation to Indochina. Anthony Eden, leading the British delegation, favored a negotiated settlement to the conflict. Georges Bidault, leading the French delegation, vacillated and was keen to preserve something of France's position in Indochina to justify past sacrifices, even as the nation's military situation deteriorated. The U.S. had been supporting the French in Indochina for many years and the Republican Eisenhower administration wanted to ensure that it could not be accused of another "Yalta" or of having "lost" Indochina to the Communists. Its leaders had previously accused the Democratic Truman administration of having "lost China" when the Communists were successful in taking control of the country.
The Eisenhower administration had considered air strikes in support of the French at Dien Bien Phu but was unable to obtain a commitment to united action from key allies such as the United Kingdom. Eisenhower was wary of becoming drawn into "another Korea" that would be deeply unpopular with the American public. U.S. domestic policy considerations strongly influenced the country's position at Geneva. Columnist Walter Lippmann wrote on April 29 that "the American position at Geneva is an impossible one, so long as leading Republican senators have no terms for peace except unconditional surrender of the enemy and no terms for entering the war except as a collective action in which nobody is now willing to engage." At the time of the conference, the U.S. did not recognize the People's Republic of China. Secretary of State John Foster Dulles, an anticommunist, forbade any contact with the Chinese delegation, refusing to shake hands with Zhou Enlai, the lead Chinese negotiator.
Dulles fell out with the UK delegate Anthony Eden over the perceived failure of the UK to support united action and U.S. positions on Indochina; he left Geneva on May 3 and was replaced by his deputy Walter Bedell Smith.:555–8 The State of Vietnam refused to attend the negotiations until Bidault wrote to Bảo Đại, assuring him that any agreement would not partition Vietnam.:550–1
Bidault opened the conference on May 8 by proposing a cessation of hostilities, a ceasefire in place, a release of prisoners, and a disarming of irregulars, despite the French surrender at Dien Bien Phu the previous day in northwestern Vietnam.:559–60
On May 10, Phạm Văn Đồng, the leader of the Democratic Republic of Vietnam (DRV) delegation, set out the DRV's position, proposing a ceasefire; separation of the opposing forces; a ban on the introduction of new forces into Indochina; the exchange of prisoners; independence and sovereignty for Vietnam, Cambodia, and Laos; elections for unified governments in each country; the withdrawal of all foreign forces; and the inclusion of Pathet Lao and Khmer Issarak representatives at the Conference.:560 Pham Van Dong first proposed a temporary partition of Vietnam on May 25. Following their victory at Dien Bien Phu and given the worsening French security position around the Red River Delta, a ceasefire and partition would not appear to have been in the interests of the DRV. It appears that the DRV leadership thought the balance of forces was uncomfortably close and worried about morale problems among its troops and supporters after eight years of war.:561 Turner has argued that the Viet Minh might have prolonged the negotiations and continued fighting to achieve a more favorable military position, had it not been for Chinese and Soviet pressure on them to end the fighting. In addition, there was a widespread perception that the Diem government would collapse, leaving the Viet Minh free to take control of the area.
On May 12, the State of Vietnam rejected any partition of the country, and the U.S. expressed a similar position the next day. The French sought to implement a physical separation of the opposing forces into enclaves throughout the country, known as the "leopard-skin" approach. The DRV/Viet Minh would be given the Cà Mau Peninsula, three enclaves near Saigon, and large areas of Annam and Tonkin; the French Union forces would retain most urban areas and the Red River Delta, including Hanoi and Haiphong, allowing them to resume combat operations in the north if necessary.:562–3
Behind the scenes, the U.S. and the French governments continued to discuss the terms for possible U.S. military intervention in Indochina.:563–6 By May 29, the U.S. and the French had reached agreement that if the Conference failed to deliver an acceptable peace deal, Eisenhower would seek Congressional approval for military intervention in Indochina.:568–9 However, after discussions with the Australian and New Zealand governments in which it became evident that neither would support U.S. military intervention, reports of the plummeting morale among the French Union forces and opposition from Army Chief of Staff Matthew Ridgway, the U.S. began to shift away from intervention and continued to oppose a negotiated settlement.:569–73 By early to mid-June, the U.S. began to consider the possibility that rather than supporting the French in Indochina, it might be preferable for the French to leave and for the U.S. to support the new Indochinese states. That would remove the taint of French colonialism. Unwilling to support the proposed partition or intervention, by mid-June, the U.S. decided to withdraw from major participation in the Conference.:574–5
On June 15, Vyacheslav Molotov proposed that the ceasefire be monitored by a supervisory commission chaired by neutral India. On June 16, Zhou Enlai stated that the situations in Vietnam, Cambodia and Laos were not the same and should be treated separately. He proposed that Laos and Cambodia could be treated as neutral nations if they had no foreign bases. On June 18, Pham Van Dong said the Viet Minh would be prepared to withdraw their forces from Laos and Cambodia if no foreign bases were established in Indochina.:581 The apparent softening of the Communist position appeared to arise from a meeting among the DRV, Chinese and Soviet delegations on June 15, in which Zhou warned the Viet Minh that its military presence in Laos and Cambodia threatened to undermine the negotiations in relation to Vietnam. That represented a major blow to the DRV, which had tried to ensure that the Pathet Lao and Khmer Issarak would join the governments of Laos and Cambodia, respectively, under DRV leadership. The Chinese likely also sought to ensure that Laos and Cambodia would in the future fall under China's influence, not Vietnam's.:581–3
On June 18, following a vote of no confidence, the French Laniel government fell and was replaced by a coalition headed by the Radical Pierre Mendès France as Prime Minister, by a vote of 419 to 47, with 143 abstentions.:579 Prior to the collapse of the Laniel government, France had recognized Vietnam as "a fully independent and sovereign state" on June 4. A long-time opponent of the war, Mendès France had pledged to the National Assembly that he would resign if he failed to achieve a ceasefire within 30 days.:575 He retained the Foreign Ministry for himself, and Bidault left the Conference.:579 The new French government abandoned earlier assurances to the State of Vietnam that France would not pursue or accept partition, and it engaged in secret negotiations with the Viet Minh delegation, bypassing the State of Vietnam, to meet Mendès France's self-imposed deadline. On June 23, Mendès France secretly met Zhou Enlai at the French embassy in Bern. Zhou outlined the Chinese position: an immediate ceasefire was required, the three nations should be treated separately, and the existence of two governments in Vietnam would be recognized.:584
Mendès France returned to Paris. The following day he met with his main advisers on Indochina. General Paul Ély outlined the deteriorating military position in Vietnam, and Jean Chauvel suggested that the situation on the ground called for partition at the 16th or 17th parallel. The three agreed that the Bao Dai government would need time to consolidate its position and that U.S. assistance would be vital. The possibility of retaining Hanoi and Haiphong or just Haiphong was dismissed, as the French believed it was preferable to seek partition with no Viet Minh enclaves in the south.:585–7
On June 16, twelve days after France granted full independence to the State of Vietnam, Bao Dai appointed Ngo Dinh Diem as Prime Minister to replace Bửu Lộc. Diem was a staunch nationalist, both anti-French and anticommunist, with strong political connections in the U.S.:576 Diem agreed to take the position if he received all civilian and military powers. Diem and his foreign minister, Tran Van Do, were strongly opposed to partition.
At Geneva, the State of Vietnam's proposal included "a ceasefire without a demarcation line" and "control by the United Nations... of the administration of the entire country [and] of the general elections, when the United Nations believes that order and security will have been everywhere truly restored."
On June 28, following an Anglo-U.S. summit in Washington, the UK and the U.S. issued a joint communiqué, which included a statement that if the Conference failed, "the international situation will be seriously aggravated." The parties also agreed to a secret list of seven minimum outcomes that both parties would "respect": the preservation of a noncommunist South Vietnam (plus an enclave in the Red River Delta if possible), future reunification of divided Vietnam, and the integrity of Cambodia and Laos, including the removal of all Viet Minh forces.:593–4
Also on June 28, Tạ Quang Bửu, a senior DRV negotiator, called for the line of partition to be at the 13th parallel, the withdrawal of all French Union forces from the north within three months of the ceasefire, and the Pathet Lao to have virtual sovereignty over eastern Laos.:595–6
From July 3 to 5, Zhou Enlai met with Ho Chi Minh and other senior DRV leaders in Liuzhou. Most of the first day was spent discussing the military situation and the balance of forces in Vietnam. Giáp explained that while
Dien Bien Phu had represented a colossal defeat for France ... she was far from defeated. She retained a superiority in numbers - some 470,000 troops, roughly half of them Vietnamese, versus 310,000 on the Viet Minh side as well as control of Vietnam's major cities (Hanoi, Saigon, Huế, Tourane (Da Nang)). A fundamental alteration of the balance of forces had thus yet to occur, Giap continued, despite Dien Bien Phu.
Wei Guoqing, the chief Chinese military adviser to the Viet Minh, said he agreed. "If the U.S. does not interfere," Zhou asked, "and assuming France will dispatch more troops, how long will it take for us to seize the whole of Indochina?" In the best scenario, Giáp replied, "full victory could be achieved in two to three years. Worst case? Three to five years.":596
That afternoon Zhou "offered a lengthy exposition on the massive international reach of the Indochina conflict ... and on the imperative of preventing an American intervention in the war. Given Washington's intense hostility to the Chinese Revolution ... one must assume that the current administration would not stand idly by if the Viet Minh sought to win complete victory." Consequently, "if we ask too much at Geneva and peace is not achieved, it is certain that the U.S. will intervene, providing Cambodia, Laos and Bao Dai with weapons and ammunition, helping them train military personnel, and establishing military bases there ... The central issue", Zhou told Ho, is "to prevent America's intervention" and "to achieve a peaceful settlement." Laos and Cambodia would have to be treated differently and be allowed to pursue their own paths, provided they did not join a military alliance or permit foreign bases on their territory. The Mendès France government, having vowed to achieve a negotiated solution, had to be supported, for fear that it would fall and be replaced by one committed to continuing the war.:597 Ho pressed hard for the partition line to be at the 16th parallel, while Zhou noted that Route 9, the only land route from Laos to the South China Sea, ran closer to the 17th parallel.:597
Several days later, the Communist Party of Vietnam's Sixth Central Committee plenum took place. Ho Chi Minh and General Secretary Trường Chinh took turns emphasising the need for an early political settlement to prevent a military intervention by the United States, now the "main and direct enemy" of Vietnam. "In the new situation we cannot follow the old program," Ho declared. "[B]efore, our motto was, 'war of resistance until victory.' Now, in view of the new situation, we should uphold a new motto: peace, unification, independence, and democracy." A spirit of compromise would be required by both sides to make the negotiations succeed, and there could be no more talk of wiping out and annihilating all the French troops. A demarcation line allowing the temporary regroupment of both sides would be necessary. The plenum endorsed Ho's analysis, passing a resolution supporting a compromise settlement to end the fighting. However, Ho and Truong Chinh plainly worried that, following such an agreement at Geneva, there would be internal discontent and "leftist deviation", and in particular that analysts would fail to see the complexity of the situation and would underestimate the power of the American and French adversaries. They accordingly reminded their colleagues that France would retain control of a large part of the country and that people living in the area might be confused, alienated, and vulnerable to enemy manipulations.
"We have to make it clear to our people," Ho said that "in the interest of the whole country, for the sake of long-term interest, they must accept this, because it is a glorious thing and the whole country is grateful for that. We must not let people have pessimistic and negative thinking; instead, we must encourage the people to continue the struggle for the withdrawal of French troops and ensure our independence.":597–8
The Conference reconvened on July 10, and Mendès France arrived to lead the French delegation.:599 The State of Vietnam continued to protest against partition, which had by now become inevitable; the only question was where the line should be drawn.:602 Walter Bedell Smith of the U.S. arrived in Geneva on July 16, but the U.S. delegation was under instructions to avoid direct association with the negotiations.:602
All parties at the Conference called for reunification elections but could not agree on the details. Pham Van Dong proposed elections under the supervision of "local commissions." The U.S., with the support of Britain and the Associated States of Vietnam, Laos and Cambodia, suggested UN supervision. That was rejected by Molotov, who argued for a commission with an equal number of communist and noncommunist members, which could determine "important" issues only by unanimous agreement. The negotiators were also unable to agree on a date for the reunification elections. The DRV argued that the elections should be held within six months of the ceasefire, while the Western allies sought to have no deadline. Molotov proposed June 1955, later softened this to any time in 1955, and finally to July 1956.:610 The Diem government supported reunification elections, but only with effective international supervision; it argued that genuinely free elections were impossible in the totalitarian North.
By the afternoon of July 20, the remaining outstanding issues were resolved as the parties agreed that the partition line should be at the 17th parallel and that the elections for reunification should be in July 1956, two years after the ceasefire.:604 The "Agreement on the Cessation of Hostilities in Vietnam" was signed only by French and Viet Minh military commands, completely bypassing the State of Vietnam. Based on a proposal by Zhou Enlai, an International Control Commission (ICC) chaired by India, with Canada and Poland as members, was placed in charge of supervising the ceasefire.:603 Because issues were to be decided unanimously, Poland's presence in the ICC provided the communists effective veto power over supervision of the treaty. The unsigned "Final Declaration of the Geneva Conference" called for reunification elections, which the majority of delegates expected to be supervised by the ICC. The Viet Minh never accepted ICC authority over such elections, stating that the ICC's "competence was to be limited to the supervision and control of the implementation of the Agreement on the Cessation of Hostilities by both parties." Of the nine delegates present, only the United States and the State of Vietnam refused to accept the declaration. Bedell Smith delivered a "unilateral declaration" of the U.S. position, reiterating: "We shall seek to achieve unity through free elections supervised by the United Nations to insure that they are conducted fairly."
The accords, which were issued on July 21, 1954, set out the following terms in relation to Vietnam:
- a "provisional military demarcation line" running approximately along the 17th Parallel "on either side of which the forces of the two parties shall be regrouped after their withdrawal".
- a 3-mile (4.8 km) wide demilitarized zone on each side of the demarcation line
- French Union forces to regroup to the south of the line and Viet Minh to the north
- free movement of the population between the zones for three hundred days
- neither zone to join any military alliance or seek military reinforcement
- establishment of the International Control Commission, comprising Canada, Poland and India as chair, to monitor the ceasefire:605
The agreement was signed by the Democratic Republic of Vietnam, France, the People's Republic of China, the Soviet Union and the United Kingdom. The State of Vietnam rejected the agreement, while the United States stated that it "took note" of the ceasefire agreements and declared that it would "refrain from the threat or use of force to disturb them.":606
To dispel any notion that the partition was permanent, the unsigned Final Declaration stated in Article 6: "The Conference recognizes that the essential purpose of the agreement relating to Vietnam is to settle military questions with a view to ending hostilities and that the military demarcation line is provisional and should not in any way be interpreted as constituting a political or territorial boundary."
The DRV at Geneva accepted a much worse settlement than the military situation on the ground indicated. "For Ho Chi Minh, there was no getting around the fact that his victory, however unprecedented and stunning, was incomplete and perhaps temporary. The vision that had always driven him on, that of a 'great union' of all Vietnamese, had flickered into view for a fleeting moment in 1945–46, then had been lost in the subsequent war. Now, despite vanquishing the French military, the dream remained unrealized ...":620 That was partly a result of the great pressure exerted by China (Pham Van Dong is alleged to have said in one of the final negotiating sessions that Zhou Enlai had double-crossed the DRV) and the Soviet Union for their own purposes, but the Viet Minh had their own reasons for agreeing to a negotiated settlement, principally their concerns regarding the balance of forces and their fear of U.S. intervention.:607–9
France had achieved a much better outcome than could have been expected. Bidault had stated at the beginning of the Conference that he was playing with "a two of clubs and a three of diamonds" whereas the DRV had several aces, kings and queens,:607 but Jean Chauvel was more circumspect: "There is no good end to a bad business.":613
In a press conference on July 21, President Eisenhower expressed satisfaction that a ceasefire had been concluded but stated that the U.S. was not a party to the Accords or bound by them, as they contained provisions that his administration could not support.:612
On October 9, 1954, the tricolore was lowered for the last time at the Hanoi Citadel and the last French Union forces left the city, crossing the Paul Doumer Bridge on their way to Haiphong for embarkation.:617–8
For the communist forces, which had been instrumental in the defeat of the French, the ideologies of communism and nationalism were linked. Many communist sympathisers viewed the South Vietnamese as a French colonial remnant and later an American puppet regime. On the other hand, many others viewed the North Vietnamese as a puppet of the Communist International.
After the cessation of hostilities, a large migration took place. North Vietnamese, especially Catholics, intellectuals, business people, land owners, anti-communist democrats, and members of the middle-class moved south of the Accords-mandated ceasefire line during Operation Passage to Freedom. The ICC reported that at least 892,876 North Vietnamese were processed through official refugee stations, while journalists recounted that as many as 2 million more might have fled without the presence of Viet Minh soldiers, who frequently beat and occasionally killed those who refused to turn back. The CIA attempted to further influence Catholic Vietnamese with slogans such as "the Virgin Mary is moving South". At the same time, 52,000 people from the South went North, mostly Viet Minh members and their families.
The U.S. replaced the French as the political backer of Ngo Dinh Diem, the Prime Minister of the State of Vietnam, who asserted his power in the South. The Geneva Conference had not provided any specific mechanism for the national elections planned for 1956, and Diem refused to hold them, citing that the State of Vietnam had not signed and was therefore not bound by the Geneva Accords, and that it was impossible to hold free elections in the communist North. Instead, he set about attempting to crush communist opposition.
On May 20, 1955, French Union forces withdrew from Saigon to coastal bases, and on April 28, 1956, the last French forces left Vietnam.:650
North Vietnam violated the Geneva Accords by failing to withdraw all Viet Minh troops from South Vietnam, stifling the movement of North Vietnamese refugees, and conducting a military buildup that more than doubled the number of armed divisions in the North Vietnamese army while the South Vietnamese army was reduced by 20,000 men. U.S. military advisers continued to support the Army of the Republic of Vietnam, which was created as a replacement for the Vietnamese National Army. The failure of reunification led to the creation of the National Liberation Front (better known as the Viet Cong) by Ho Chi Minh's government. They were closely aided by the Vietnam People's Army (VPA) of the North, also known as the North Vietnamese Army. The result was the Vietnam War.
Historian John Lewis Gaddis said that the 1954 accords "were so hastily drafted and ambiguously worded that, from the standpoint of international law, it makes little sense to speak of violations from either side".
- "Indochina - Midway in the Geneva Conference: Address by the Secretary of State". Avalon Project. Yale Law School. May 7, 1954. Retrieved 29 April 2010.
- Young, Marilyn (1991). The Vietnam Wars: 1945–1990. New York: HarperPerennial. p. 41. ISBN 978-0-06-092107-1.
- Archive, Wilson Center Digital. "Wilson Center Digital Archive". digitalarchive.wilsoncenter.org.
- Logevall, Fredrik (2012). Embers of War: The Fall of an Empire and the Making of America's Vietnam. Random House. ISBN 978-0-679-64519-1.
- "Text of the Korean War Armistice Agreement". Findlaw. Columbia University. July 27, 1953. Retrieved 29 April 2010.
- Bailey, Sydney D. (1992). The Korean Armistice. St. Martin's Press. p. 163.
- Bailey, Sydney D. (1992). The Korean Armistice. St. Martin's Press. pp. 167–168.
- Turner 1975, p. 92.
- Turner 1975, p. 108.
- Turner 1975, p. 93.
- Turner 1975, p. 88.
- Turner 1975, p. 94.
- Turner 1975, pp. 94–95.
- Turner 1975, p. 97.
- Turner 1975, p. 107.
- Turner 1975, p. 99.
- Turner 1975, pp. 99–100.
- Turner 1975, p. 96.
- "The Final Declarations of the Geneva Conference July 21, 1954". The Wars for Viet Nam. Vassar College. Archived from the original on 7 August 2011. Retrieved 20 July 2011.
- The United States in Vietnam: An analysis in depth of the history of America's involvement in Vietnam by George McTurnan Kahin and John W. Lewis Delta Books, 1967.
- (Article 3) (N. Tarling, The Cambridge History of Southeast Asia, Volume Two Part Two: From World War II to the present, Cambridge University Press, p45)
- Ang Cheng Guan (1997). Vietnamese Communists' Relations with China and the Second Indochina War (1956–62). Jefferson, NC: McFarland. p. 11. ISBN 0-7864-0404-3.
- Lowe, Peter (January 1997). Containing the Cold War in East Asia: British Policies Towards Japan, China and Korea, 1948-53. Manchester University Press. p. 261. ISBN 9780719025082. Retrieved July 21, 2013.
- Turner 1975, pp. 102–103.
- Patrick J. Hearden (2017). The Tragedy of Vietnam. Routledge. p. 74. ISBN 9781351674003. Retrieved 19 September 2017.
- Keylor, William. "The 20th Century World and Beyond: An International History Since 1900," p.371, Oxford University Press: 2011.
- David L. Anderson (2010). The Columbia History of the Vietnam War. Columbia University Press. pp. 30–31. ISBN 978-0-231-13480-4.
- Turner 1975, pp. 100–104.
- Fadiman, Anne (1997). The Spirit Catches You and You Fall Down. Farrar, Straus and Giroux. p. 126.
- Asselin, Pierre. "The Democratic Republic of Vietnam and the 1954 Geneva Conference: a revisionist critique". Cold War History (2011) 11#2 pp. 155–195.
- Hannon Jr, John S. "Political Settlement for Vietnam: The 1954 Geneva Conference and Its Current Implications, A". Virginia Journal of International Law 8 (1967): 4.
- Turner, Robert F. (1975). Vietnamese Communism: Its Origins and Development. Hoover Institution Publications. ISBN 9780817914318.
- Waite, James. The End of the First Indochina War: A Global History (2013)
- Young, Kenneth T. The 1954 Geneva Conference: Indo-China and Korea (Greenwood Press, 1968)
- "The Geneva Conference of 1954 – New Evidence from the Archives of the Ministry of Foreign Affairs of the People's Republic of China" (PDF). Cold War International History Project Bulletin. Woodrow Wilson International Center for Scholars (16). 2008-04-22. Archived from the original (PDF) on 2009-03-27. Retrieved 2009-04-14.
- Woodrow Wilson International Center for Scholars - Cold War International History Project - The 1954 Geneva Conference July 13, 2011
- Bibliography: Dien Bien Phu and the Geneva Conference
- Foreign Relations of the United States, 1952-1954, volume XVI, The Geneva Conference. Available through the Foreign Relations of the United States online collection at the University of Wisconsin.
Francis Xavier was born in the royal castle of Xavier, in the Kingdom of Navarre, on 7 April 1506 according to a family register. He was the youngest son of Juan de Jasso y Atondo, seneschal of Xavier castle, who belonged to a prosperous farming family, had acquired a doctorate in law at the University of Bologna, and later became privy counsellor and finance minister to King John III of Navarre (Jean d'Albret). Francis's mother was Doña María de Azpilcueta y Aznárez, sole heiress of two noble Navarrese families. He was thus related to the great theologian and philosopher Martín de Azpilcueta.
In 1512, Ferdinand, King of Aragon and regent of Castile, invaded Navarre, initiating a war that lasted over 18 years. Three years later, Francis's father died when Francis was only nine years old. In 1516, Francis's brothers participated in a failed Navarrese-French attempt to expel the Spanish invaders from the kingdom. The Spanish Governor, Cardinal Cisneros, confiscated the family lands, demolished the outer wall, the gates, and two towers of the family castle, and filled in the moat. In addition, the height of the keep was reduced by half. Only the family residence inside the castle was left. In 1522 one of Francis's brothers participated with 200 Navarrese nobles in dogged but failed resistance against the Castilian Count of Miranda in Amaiur, Baztan, the last Navarrese territorial position south of the Pyrenees.
In 1525, Francis went to study in Paris at the Collège Sainte-Barbe, University of Paris, where he would spend the next eleven years. In the early days he acquired some reputation as an athlete and a high-jumper.
In 1529, Francis shared lodgings with his friend Pierre Favre. A new student, Ignatius of Loyola, came to room with them. At 38, Ignatius was much older than Pierre and Francis, who were both 23 at the time. Ignatius convinced Pierre to become a priest, but was unable to convince Francis, who had aspirations of worldly advancement. At first Francis regarded the new lodger as a joke and was sarcastic about his efforts to convert students. When Pierre left their lodgings to visit his family and Ignatius was alone with Francis, he was able to slowly break down Francis's resistance. According to most biographies, Ignatius is said to have posed the question: "What will it profit a man to gain the whole world, and lose his own soul?" However, according to James Broderick, such a method is not characteristic of Ignatius and there is no evidence that he employed it at all.
In 1530 Francis received the degree of Master of Arts, and afterwards taught Aristotelian philosophy at Beauvais College, University of Paris.
On 15 August 1534, seven students met in a crypt beneath the Church of Saint Denis (now Saint Pierre de Montmartre), in Montmartre outside Paris. They were Francis, Ignatius of Loyola, Alfonso Salmeron, Diego Laínez, Nicolás Bobadilla from Spain, Peter Faber from Savoy, and Simão Rodrigues from Portugal. They made private vows of poverty, chastity, and obedience to the Pope, and also vowed to go to the Holy Land to convert infidels. Francis began his study of theology in 1534 and was ordained on 24 June 1537.
In 1539, after long discussions, Ignatius drew up a formula for a new religious order, the Society of Jesus (the Jesuits). Ignatius's plan for the order was approved by Pope Paul III in 1540.
Leaving Rome on 15 March 1540, in the Ambassador's train, Francis took with him a breviary, a catechism, and De Institutione bene vivendi by Croatian humanist Marko Marulić, a Latin book that had become popular in the Counter-Reformation. According to a 1549 letter of F. Balthasar Gago in Goa, it was the only book that Francis read or studied. Francis reached Lisbon in June 1540 and four days after his arrival, he and Rodrigues were summoned to a private audience with the King and the Queen.
Francis Xavier left Lisbon on 7 April 1541, his thirty-fifth birthday, along with two other Jesuits and the new viceroy Martim Afonso de Sousa, on board the Santiago. As he departed, Francis was given a brief from the pope appointing him apostolic nuncio to the East. From August until March 1542 he remained in Portuguese Mozambique, and arrived in Goa, then capital of Portuguese India on 6 May 1542, thirteen months after leaving Lisbon.
Xavier soon learned that along the Pearl Fishery Coast, which extends from Cape Comorin on the southern tip of India to the island of Mannar, off Ceylon (Sri Lanka), there was a Jāti of people called Paravas. Many of them had been baptised ten years before, merely to please the Portuguese, who had helped them against the Moors, but had remained uninstructed in the faith. Accompanied by several native clerics from the seminary at Goa, he set sail for Cape Comorin in October 1542. He taught those who had already been baptised, and preached to those who had not. His efforts with the high-caste Brahmins remained unavailing.
Europeans had already come to Japan: the Portuguese had landed in 1543 on the island of Tanegashima, where they introduced the first firearms to Japan.
He devoted almost three years to the work of preaching to the people of southern India and Ceylon, converting many. He built nearly 40 churches along the coast, including St. Stephen's Church, Kombuthurai, mentioned in his letters dated 1544.
In the spring of 1545 Xavier started for Portuguese Malacca. He laboured there for the last months of that year. About January 1546, Xavier left Malacca for the Maluku Islands, where the Portuguese had some settlements. For a year and a half he preached the Gospel there. He went first to Ambon Island, where he stayed until mid-June. He then visited other Maluku Islands, including Ternate, Baranura, and Morotai. Shortly after Easter 1547, he returned to Ambon Island; a few months later he returned to Malacca.
The role of Francis Xavier in the Goa Inquisition is controversial. He had written to King João III of Portugal in 1546, encouraging him to dispatch the Inquisition to Goa, which the king did many years later, in 1560. Francis Xavier died in 1552 without living to see the horrors of the Goa Inquisition, but some historians believe that he was aware of the Portuguese Inquisition's brutality. In an interview with an Indian newspaper, the historian Teotónio de Souza stated that Francis Xavier and Simão Rodrigues, another founder-member of the Society of Jesus, were together in Lisbon before Francis left for India. Both were asked to assist spiritually the prisoners of the Inquisition and were present at the very first auto-da-fé celebrated in Portugal in September 1540, at which 23 were absolved and two were condemned to be burnt, including a French cleric. Hence, he believes that Xavier was aware of the brutality of the Inquisition.
In Malacca in December 1547, Francis Xavier met a Japanese man named Anjirō. Anjirō had heard of Francis in 1545 and had travelled from Kagoshima to Malacca to meet him. Having been charged with murder, Anjirō had fled Japan. He told Francis extensively about his former life and the customs and culture of his homeland. Anjirō became the first Japanese Christian and adopted the name of 'Paulo de Santa Fe'. He later helped Xavier as a mediator and interpreter for the mission to Japan that now seemed much more possible.
From Amboina, he wrote to his companions in Europe: "I asked a Portuguese merchant, … who had been for many days in Anjirō’s country of Japan, to give me … some information on that land and its people from what he had seen and heard …. All the Portuguese merchants coming from Japan tell me that if I go there I shall do great service for God our Lord, more than with the pagans of India, for they are a very reasonable people." (To His Companions Residing in Rome, From Cochin, 20 January 1548, no. 18, p. 178).
Francis Xavier reached Japan on 27 July 1549, with Anjiro and three other Jesuits, but he was not permitted to enter any port his ship arrived at until 15 August, when he went ashore at Kagoshima, the principal port of Satsuma Province on the island of Kyūshū. As a representative of the Portuguese king, he was received in a friendly manner. Shimazu Takahisa (1514–1571), daimyō of Satsuma, gave a friendly reception to Francis on 29 September 1549, but in the following year he forbade the conversion of his subjects to Christianity under penalty of death; Christians in Kagoshima could not be given any catechism in the following years. The Portuguese missionary Pedro de Alcáçova would later write in 1554:
He was hosted by Anjirō's family until October 1550. From October to December 1550, he resided in Yamaguchi. Shortly before Christmas, he left for Kyoto but failed to meet with the Emperor. He returned to Yamaguchi in March 1551, where he was permitted to preach by the daimyo of the province. However, lacking fluency in the Japanese language, he had to limit himself to reading aloud the translation of a catechism.
With the passage of time, his sojourn in Japan could be considered somewhat fruitful, as attested by congregations established in Hirado, Yamaguchi, and Bungo. Xavier worked for more than two years in Japan and saw his Jesuit successors established. He then decided to return to India. Historians debate the exact path by which he returned, but from evidence attributed to the captain of his ship, he may have travelled through Tanegashima and Minato, avoiding Kagoshima because of the hostility of the daimyō. During his trip, a tempest forced him to stop on an island near Guangzhou, China, where he met Diogo Pereira, a rich merchant and an old friend from Cochin. Pereira showed him a letter from Portuguese prisoners in Guangzhou, asking for a Portuguese ambassador to speak to the Chinese Emperor on their behalf. Later during the voyage, he stopped at Malacca on 27 December 1551, and was back in Goa by January 1552.
In late August 1552, the Santa Cruz reached the Chinese island of Shangchuan, 14 km away from the southern coast of mainland China, near Taishan, Guangdong, 200 km south-west of what later became Hong Kong. At this time, he was accompanied only by a Jesuit student, Álvaro Ferreira, a Chinese man called António, and a Malabar servant called Christopher. Around mid-November he sent a letter saying that a man had agreed to take him to the mainland in exchange for a large sum of money. Having sent back Álvaro Ferreira, he remained alone with António. He died at Shangchuan, Taishan, China from a fever on 3 December 1552, while he was waiting for a boat that would take him to mainland China.
He was first buried on a beach at Shangchuan Island, Taishan, Guangdong. His incorrupt body was taken from the island in February 1553 and was temporarily buried in St. Paul's church in Portuguese Malacca on 22 March 1553. An open grave in the church now marks the place of Xavier's burial. Pereira came back from Goa, removed the corpse shortly after 15 April 1553, and moved it to his house. On 11 December 1553, Xavier's body was shipped to Goa. The body is now in the Basilica of Bom Jesus in Goa, where it was placed in a glass container encased in a silver casket on 2 December 1637. This casket, constructed by Goan silversmiths between 1636 and 1637, was an exemplary blend of Italian and Indian aesthetic sensibilities. There are 32 silver plates on all four sides of the casket depicting different episodes from the life of the saint.
The right forearm, which Xavier used to bless and baptise his converts, was detached by Superior General Claudio Acquaviva in 1614. It has been displayed since in a silver reliquary at the main Jesuit church in Rome, Il Gesù.
Francis Xavier was beatified by Paul V on 25 October 1619, and was canonised by Gregory XV on 12 March (12 April) 1622, at the same time as Ignatius Loyola. Pius XI proclaimed him the "Patron of Catholic Missions". His feast day is 3 December.
The Novena of Grace is a popular devotion to Francis Xavier, typically prayed either on the nine days before 3 December, or on 4 March through 12 March (the anniversary of Pope Gregory XV's canonisation of Xavier in 1622). It began with the Italian Jesuit missionary Marcello Mastrilli. Before he could travel to the Far East, Mastrilli was gravely injured in a freak accident after a festive celebration dedicated to the Immaculate Conception in Naples. Delirious and on the verge of death, Mastrilli saw Xavier, whom he later said asked him to choose between travelling or death by holding the respective symbols, to which Mastrilli answered, "I choose that which God wills." Upon regaining his health, Mastrilli made his way via Goa and the Philippines to Satsuma, Japan. The Tokugawa shogunate beheaded the missionary in October 1637, after he had undergone three days of torture involving the volcanic sulphurous fumes of Mt. Unzen, known as the Hell mouth or "pit", which had supposedly caused an earlier missionary to renounce his faith.
Many churches all over the world, often founded by Jesuits, have been named in honour of Xavier. Those in the United States include the historic St. Francis Xavier Shrine at Warwick, Maryland (founded 1720), where the founding father Charles Carroll of Carrollton (1737–1832), the longest-lived signer of the Declaration of Independence and the only Catholic at the Continental Congress to sign it, began his education, as did his cousin John Carroll (1735–1815), the first American-born bishop and later Archbishop of Baltimore (1790–1815). Others include the Basilica of St. Francis Xavier in Dyersville, Iowa, and the Mission San Xavier del Bac in Tucson, Arizona (founded in 1692 and known for its Spanish Colonial architecture); the American educational teaching order the Xaverian Brothers is also named after him.
Francis Xavier is the patron saint of his native Navarre, which celebrates his feast day on 3 December as a government holiday. In addition to Roman Catholic masses remembering Xavier on that day (now known as the Day of Navarra), celebrations in the surrounding weeks honour the region's cultural heritage. Furthermore, in the 1940s, devoted Catholics instituted the Javierada, an annual day-long pilgrimage (often on foot) from the capital at Pamplona to Xavier, where his order has built a basilica and museum and restored his family's castle.
Another of Xavier's arm bones was brought to Macau where it was kept in a silver reliquary. The relic was destined for Japan but religious persecution there persuaded the church to keep it in Macau's Cathedral of St. Paul. It was subsequently moved to St. Joseph's and in 1978 to the Chapel of St. Francis Xavier on Coloane Island. More recently the relic was moved to St. Joseph's Church.
As the foremost saint from Navarre and one of the main Jesuit saints, he is much venerated in Spain and the Hispanic countries, where Francisco Javier and Javier are common male given names. The alternative spelling Xavier is also popular in Portugal, Catalonia, Brazil, France, Belgium, and southern Italy. In India, the spelling Xavier is almost always used, and the name is quite common among Christians, especially in Goa and the southern states of Tamil Nadu, Kerala, and Karnataka. The names Francisco Xavier, António Xavier, João Xavier, Caetano Xavier, Domingos Xavier and so on were very common until quite recently in Goa. Fransiskus Xaverius is commonly used as a name for Indonesian Catholics, usually abbreviated as FX. In Austria and Bavaria the name is spelled Xaver (pronounced [ˈk͡saːfɐ]) and often used in addition to Francis as Franz-Xaver ([frant͡sˈk͡saːfɐ]). Many Catalan men are named for him, often using the two-name combination Francesc Xavier. In English-speaking countries, "Xavier" until recently was likely to follow "Francis"; in the 2000s, however, "Xavier" by itself has become more popular than "Francis", and since 2001 it has been one of the hundred most common male baby names in the U.S.A. Furthermore, the Sevier family name, possibly most famous in the United States through John Sevier, originated from the name Xavier.
In 2006, on the 500th anniversary of his birth, the Xavier Tomb Monument and Chapel on Shangchuan Island, in ruins after years of neglect under communist rule in China, was restored with support from the alumni of Wah Yan College, a Jesuit high school in Hong Kong.
Saint Francis Xavier is noteworthy for his missionary work, both as organiser and as pioneer, reputed to have converted more people than anyone else has done since Saint Paul. Pope Benedict XVI said of both Ignatius of Loyola and Francis Xavier: "not only their history which was interwoven for many years from Paris and Rome, but a unique Desire — a unique passion, it could be said — moved and sustained them through different human events: the passion to give to God-Trinity a glory always greater and to work for the proclamation of the Gospel of Christ to the peoples who had been ignored." By consulting with the earlier ancient Christians of St. Thomas in India, Xavier developed Jesuit missionary methods. His success also spurred many Europeans to join the order, as well as become missionaries throughout the world. His personal efforts most affected Christians in India and the East Indies (Indonesia, Malaysia, Timor). India still has numerous Jesuit missions, and many more schools. Xavier also worked to propagate Christianity in China and Japan. However, following the persecutions of Toyotomi Hideyoshi and the subsequent closing of Japan to foreigners, the Christians of Japan were forced to go underground to develop an independent Christian culture. Likewise, while Xavier inspired many missionaries to China, Chinese Christians also were forced underground and developed their own Christian culture.
Saint Francis Xavier is a major venerated saint in both Sonora and the neighbouring U.S. state of Arizona. In Magdalena de Kino in Sonora, Mexico, in the Church of Santa María Magdalena, there is a reclining statue of San Francisco Xavier brought by the pioneer Jesuit missionary Padre Eusebio Kino in the early 18th century. The statue is said to be miraculous and is the object of pilgrimage for many in the region. Mission San Xavier del Bac is also a pilgrimage site. The mission is an active parish church ministering to the people of the San Xavier District, Tohono O'odham Nation and nearby Tucson, Arizona.
One of the great crimes of the apartheid system in South Africa was the Sharpeville massacre, which took place on March 21, 1960. Sixty-nine Africans were killed by the police and many others were wounded.
By President Thabo Mbeki
President of the Republic of South Africa
Reprinted From the ANC Today
Friday, March 16, 2007
Two days before the publication of the next edition of this journal, our country will celebrate Human Rights Day, March 21st, bestowed on the nation by the patriots who were massacred at Sharpeville on this day in 1960. I am therefore pleased to dedicate this Letter to the forthcoming Human Rights Day, which, from all points of view, is one of our most important public holidays. I genuinely hope that all South Africans, black and white, will make a special effort to attend the public events that have been organised to celebrate this pre-eminent day on our national calendar.
Truly to honour our Human Rights Day, and to address a pressing challenge, this Letter discusses an issue that was central to our liberation struggle, that informs the content of the continuing national reconstruction and development process, and remains central to the entrenchment of human rights in our country - the elimination of racism and racial discrimination.
While it is true that our movement has set itself the task to ensure that our country achieves this goal, we must also remember that this, together with the non-sexism we discussed in our last Letter, is one of the national objectives prescribed by our Constitution. The Founding Provisions in the Constitution identify non-racialism and non-sexism among the values on which the democratic Republic is and must be founded.
MULDER & LEON
In the last edition of ANC TODAY we recalled remarks we had made during our response to the National Assembly debate on the State of the Nation Address on 15 February 2007, discussing the transformation of the South African mind. In that response at the National Assembly, I also drew attention to comments made by the leaders of the opposition parties, the Freedom Front +, and the Democratic Alliance, Pieter Mulder and Tony Leon respectively, which comments are directly relevant to the discussion in our country about "change or the absence of change in our minds".
Dr Mulder said: "We do not know each other and do not debate with each other. Two minute speeches from this (parliamentary) podium are not debates." Mr Leon said: "as a nation we should spend more time listening to each other, and not be too quick to judge as illegitimate the concerns and expressions of any group."
Perhaps the one issue on which we do not spend enough time listening to one another, the challenge we should debate honestly and fearlessly, is the scourge of racism that permeates so much of the fabric of our society. The favourite words used to close down and prohibit any discussion on racism in our country are - 'don't play the race card'! It is also argued that such discussion is inimical to the task to achieve national reconciliation.
However, the fact of the matter is that racism remains a daily feature of our lives, a demon that must be exorcised precisely to achieve national reconciliation, and one that must be confronted openly and on a sustained basis if we are to achieve the Constitutional imperative of a non-racial society, as we must.
KAFFIRS AT THE WORK PLACE
Recently I was privileged to receive a report prepared by a group of independent investigators who had been asked to assess the cause of a labour dispute, as well as conflicts within management, in one South African company. In truth it is some of the content of this report that convinced me of the need to address the issue of racism in this Letter.
The report says that one of the white managers affected by the investigation, Mr X, "admitted that he and other white managers used the term 'kaffir' generally in the everyday conversation, and he saw nothing wrong with this. However he always made certain that he did not use this word when Africans were present, and also avoided calling anybody 'kaffir' to their face. Another manager sometimes flattened his nose with his finger as a derogatory reference to Africans."
Others said that Mr X "had referred to the national flag as a 'kaffir' flag. He had described March 1st last year, the day for our last local elections, as the 'kaffir stem dag', a day for 'kaffirs' to vote, and vote for a 'kaffir' government. Those who went to vote, and therefore did not come to work, would be marked as having been absent from work."
Mr X said he browsed "Internet sites that argued that God is for whites. He also refers to Africans as 'kusiete', Cushites, or 'edomiete', Edomites, which are Biblical references interpreted as referring to Africans and non-Jewish people of the Middle East.
Mr X had freely discussed his beliefs with other white managers, including "his interest in social history. The discussions covered many topics including the potential for a massive Black on Black civil war in South Africa in which poor Blacks would rise up against the black elite."
The report says that the African workers "fear Mr X, who always threatens to fire them, and does not listen or care for them. Failure to take action against Mr X for his racist remarks could well raise the levels of conflict and dissatisfaction amongst workers and lead to further disruptions at the workplace."
This account tells the story that racism is alive and well at the workplace. It tells the story that this racism does not just amount to bad language, to which we must be opposed. It has a direct, negative material impact on the lives of our people, especially the working people, communicating the message that apartheid is not dead. It is not something we should put out of sight, and therefore out of mind, by responding to all attempts to confront it as "playing the race card".
ARE THEY KOSHER?
I will now turn to another account about contemporary South Africa, which was conveyed to me verbally. Not long ago, one of my African colleagues in government, Mr A, bought and moved into a house in the Northern suburbs of Johannesburg. After some time, the only other African who lived on this street, Mr B, paid a courtesy call, to welcome Mr A to the neighbourhood.
During this visit he informed Mr A that one of the white neighbours, Mr C, and his family, were very concerned and uneasy that Mr A and his family had moved into the neighbourhood. Mr C had inquired from Mr B, who had lived on the street for some time, whether he knew Mr A, explaining that he and his family had been asking themselves the question - since they are Black, how do we know they are kosher - how do we know they are not criminals!
However, I must also say that subsequently, Mr C, a native white South African, visited the newly-arrived Mr A. In the end, he felt that during this first 90-minute encounter with Mr A, he had learnt so much of which he had not been aware about our country and government, that he had to invite Mr A and his wife to join his family over dinner at his house to continue their conversation. Now he felt that the rest of his family, and indeed other South Africans, also needed to be exposed to what the "A" family had to say.
THE CHALLENGE OF WHITE FEARS
In the period immediately preceding the transition in 1994, our movement grappled constantly with the phenomenon in our country then characterised as "white fears". President Nelson Mandela again addressed this issue in his Political Report to the 50th National Conference of the ANC in 1997, saying: "The prophets of doom have re-emerged in our country. In 1994, these predicted that the transition to democracy would be attended by a lot of bloodshed...
"(Now) their task is to spread messages about an impending economic collapse, escalating corruption in the public service, rampant and uncontrollable crime, a massive loss of skills through white emigration and mass demoralisation among the people either because they are white and therefore threatened by the ANC and its policies which favour black people, or because they are black and consequently forgotten because the ANC is too busy protecting white privilege.
"A massive propaganda campaign has been conducted on the issue of crime, in many instances without any regard and respect for the truth. We will ourselves discuss this matter because of our own serious concern radically to bring down the levels of crime. However, what is necessary is that anybody genuinely committed to this goal should make an objective study of this problem and avoid the serious distortions which result from this exploitation of this issue for partisan political purposes.
"Such a study for example will show that for Johannesburg, murder, attempted murder and culpable homicide taken together, have been declining steadily since 1994. Facts and figures actually disprove the notion that there has been a rapid escalation of these crimes and confirm that we inherited the high levels of these crimes from the apartheid system."
We returned to the issue of "white fears" seven years later in a 'Letter from the President' in ANC TODAY, Volume 4, No 9, 5-11 March 2004, entitled: "Voters will not be swayed by fear or fiction". Among other things we said:
"The fear-factor has long been a feature of white politics in our country. For long periods, this section of our population has been subjected to the unimaginable terrors of 'die swart gevaar' and 'die rooi gevaar', the 'black' and 'red' dangers... The danger of an imaginary one-party state that is now being used to frighten our electorate is nothing but a variation on the same theme. The 'gevaar' is cloaked in different words. It remains the same 'gevaar' nevertheless.
"Interestingly, the use of fear is totally alien to the liberation movement and to liberation politics. Freedom from fear is a necessary part of the range of objectives of those who fight for freedom...
"Historically, it may be that those accustomed to living in a world of fear have always found it difficult to believe that those they defined as a threat could ever see them as part of a new world of hope, enjoying freedom from fear...Thus, even in changed circumstances, such as ours, when time and practice have proved that the phobias of the past were mere phobias, those used to frightening themselves or being frightened by others, would not find it too difficult to revert to the accustomed world of fear of the future."
THE BATTLE AGAINST CRIME
As Nelson Mandela observed in 1997, we have continued to express our own serious concern radically to bring down the levels of crime, and have consistently acted to achieve this result. The fact of unacceptably high levels of crime in our country is not in dispute. Nevertheless, in the light of what follows, none among us should be surprised when, as is customary, those who are determined to avoid confronting the difficult issues we raise in this Letter, seek to divert attention away from discussing the relationship between racism and the perception of crime, by falsely and dishonestly claiming that I am trying to deny or minimise the seriousness of the incidence of crime in our country.
The matter we seek to discuss, in the context of the struggle to create a non-racial society, is what Nelson Mandela identified as the phenomenon of the re-emergence of the prophets of doom, who "spread messages about...rampant and uncontrollable crime", conducting "a massive propaganda campaign...on the issue of crime, in many instances without any regard and respect for the truth." The question to ask is - why did this happen then, and why does it continue to happen now?
The answer is suggested by the question posed by the Mr C we mentioned above, who asked, anxiously - since they are Black, how do we know they are kosher! That answer lies in the deeply entrenched racism that has convinced Mr X that Africans are Cushites and Edomites, who have since time immemorial, been repudiated by a God who is only a God of the Whites.
THE BLACK PERSON'S BURDEN
The United States (US) has a long head-start relative to us, regarding the issue of racism and crime. All of us would do well to study US experience in this regard, especially the public discourse that has simultaneously sought to address both the incidence of crime and the manner in which racism feeds off this issue.
For instance in this regard, in 2000, Manning Marable, Professor of History and Political Science, and Director of the Institute for Research in African-American Studies at Columbia University, published an article entitled, "Racism, Prisons, and the Future of Black America". Among other things he said:
"For a variety of reasons, rates of violent crime, including murder, rape, and robbery, increased dramatically in the 1960s and 1970s. Much of this increase occurred in urban areas. By the late 1970s, nearly one half of all Americans were afraid to walk within a mile of their homes at night...
"Politicians like Richard M. Nixon, George Wallace, and Ronald Reagan began to campaign successfully on the theme of 'Law and Order'. The death penalty, which was briefly outlawed by the Supreme Court, was reinstated. Local, state, and federal expenditures for law enforcement rose sharply.
"Behind much of anti-crime rhetoric was a not-too-subtle racial dimension, the projection of crude stereotypes about the link between criminality and black people. Rarely did these politicians observe that minority and poor people, not the white middle class, were statistically much more likely to experience violent crimes of all kinds...
"The driving ideological and cultural force that rationalises and justifies mass incarceration is the white American public's stereotypical perceptions about race and crime. As Andrew Hacker perceptively noted in 1995, 'Quite clearly, 'black crime' does not make people think about tax evasion or embezzling from brokerage firms. Rather, the offences generally associated with blacks are those ...involving violence.' A number of researchers have found that racial stereotypes of African Americans - as 'violent', 'aggressive', 'hostile' and 'short-tempered' - greatly influence whites' judgments about crime."
More recently, on 22 May 2005, the Boston Globe newspaper carried an article by Christopher Shea, in which he said that UCLA Law Professor, Jerry Kang, "argued in the Harvard Law Review this spring that obsessive coverage of urban crime by local television stations is one of the engines driving lingering racism in the United States. So counterproductive is local broadcast news, he says, that it is time the FCC (the governmental Federal Communications Commission), stopped using the number of hours a station devotes to local news as evidence of the station's contribution to the 'public interest'...Far from contributing to the public interest, Kang argues, local news, with its parade of images of urban criminality, serves as a 'Trojan Horse' or 'virus' keeping racism alive in the American mind."
THE KAFFIRS ARE COMING!
With regard to the foregoing, the fact of the matter is that we still have a significant proportion of people among the white minority, but by no means everybody who is white, that continues to live in fear of the black, and especially African majority. For this section of our population, that does not "find it too difficult to revert to the accustomed world of fear of the future", every reported incident of crime communicates the frightening and expected message that - the kaffirs are coming!
The colleague in government to whom I referred, Mr A, posed the rhetorical question - why are the Whites so determined to frighten themselves! The answer of course is that they have taken no such decision. Rather, the problem is that entrenched racism dictates that justification must be found for the persisting white fears of "die swart gevaar".
All incidents of crime, preferably broadcast as loudly as possible, provide such justification, as have other issues, such as those mentioned by Nelson Mandela in 1997, and as the impending victory of the ANC in 2004 was used to incite white fears that our movement was about to establish a one-party state!
As we observed earlier in this Letter, Dr Mulder said: "We do not know each other and do not debate with each other. Two minute speeches from this podium are not debates." Mr Leon said: "as a nation we should spend more time listening to each other, and not be too quick to judge as illegitimate the concerns and expressions of any group."
As we celebrate Human Rights Day, it remains to be seen whether we have the will to know one another and to debate with one another; whether we are willing to spend more time listening to one another, educating ourselves not be too quick to judge as illegitimate the concerns and expressions of any group; and whether we have the courage to engage in a truth and reconciliation process even with regard to the challenge of openly confronting the cancer of deeply dehumanising racist stereotypes that developed over many centuries.
The resolve to educate ourselves to not be too quick to judge as illegitimate the concerns and expressions of any group must include not being too quick to judge as illegitimate the concerns and expressions of the African people, the historic victims of racism, who remain deeply disturbed that some in positions of power still think it is normal to speak of them as "kaffirs", and others among our white compatriots think that it is natural to ask the question - since they are Black, how do we know they are not criminals!
A YOUTHFUL DREAM
The website www.fin24.co.za carries a moving letter written by a young, 22-year-old, African professional, Bonga Bangani, who worked as an intern at the Investec offices in Cape Town. He wrote this letter out of frustration at the racism he experienced in this branch of one of our major financial institutions. Knowing that this might get him into trouble at the workplace, he said:
"Losing my job over issues that I feel strongly about is the least of my concerns, as unemployment, crime and poverty have been a part of my life since the day I was born. I was born and bred in a black South African township. I'm not ashamed to say that because it's part of my history and my history is part of who I am. However, despite all those obstacles, I've managed to get myself to where I am today - working for the Investec Treasury and Specialised division - an achievement I'm very proud of."
Bonga goes on to say: "The truth is that our democracy is still young. We don't fully understand one another (black and white) because we've been segregated for so long; we probably don't trust one another enough because we're not sure of each other's intentions. The only way that could solve this is by interacting with one another and by cleansing ourselves from the stereotypes we have about each other. The one thing we all have to accept is that for as long as we live in this country, we're stuck with each other and the sooner we learn to accept each others' differences and start treating each other with mutual respect and fairness, the sooner we will get to understanding each other and working towards a common goal and living in a South Africa in which our families and children can prosper and live in peace & harmony with one another. A South Africa in which the benefits of diversity (that are currently being missed by most SA corporates) would be realised. That's if we're willing to do so. I sure hope Investec Cape Town is willing to go in that same direction someday."
There can be no better message to all of us as we celebrate Human Rights Day than these very wise words from a young African professional in his early twenties. The historic task to build a non-racial democracy, to achieve social and national cohesion, to advance the goal of national reconciliation, to secure the human rights of all our people, black and white, demands that all of us must answer the question honestly - did all of us, including the corporations, really listen when young Bonga Bangani dared to speak out to communicate to all of us the dreams of our youth for a new South Africa that truly belongs to all who live in it, united in their diversity! | <urn:uuid:7a3692a2-ecec-495b-889c-9b3c280d07e8> | CC-MAIN-2021-21 | https://panafricannews.blogspot.com/2007/03/freedom-from-racism-fundamental-human.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00172.warc.gz | en | 0.972117 | 4,206 | 2.609375 | 3 |
The problem of position determination based on Low Earth Orbit (LEO) satellite signals of opportunity in uncovered and GNSS-denied areas such as oceanic regions, northern latitudes, and deserts is a significant one. How can tracking information for airliners be maintained, especially during distress and emergency? To address this, a new design for Search and Rescue (SAR) positioning is developed, in which the position and speed of a transceiver are estimated from distributed Doppler information fused across LEO satellites. In emergency cases, especially for an aircraft, dynamics, trajectory, antenna axis rotation, or any other obstruction or damage make positioning and tracking very difficult to achieve and non-resilient. This article presents a proof of concept of the innovative solution under development, built on the Iridium Next LEO constellation, an emerging upgraded satellite technology well suited to search and rescue applications. Post-processing of experimental data collections demonstrated very good performance, and first experimental and theoretical results are very promising for position determination using Doppler shift estimation and analysis.
Doppler-shift-based positioning was studied and deployed for five decades before GNSS constellations became fully operational, including GPS, GLONASS, Galileo, and more recently BeiDou. Satellite systems such as ARGOS and COSPAS-SARSAT use Doppler shift position determination algorithms for Search and Rescue applications and offer acceptable positioning accuracy compared to more accurate GNSS solutions (see Ashton et alia in Additional Resources). The main advantage of using these constellations for positioning is that they work in GNSS-denied environments while still providing positioning and timing information to users. Accordingly, and driven by the need for technologies able to deliver navigation solutions in any environment and RF conditions, Positioning, Navigation and Timing (PNT) systems were developed based on GPS and other GNSS signals as well as on fully independent systems using communication satellites such as Orbcomm, Globalstar, and recently the Iridium Next constellation. To better ensure accuracy, adaptive and robust Kalman filtering approaches have been developed for many related problems in the literature (Benzerrouk et alia, also Winger, Additional Resources).
Recently, with the upgrade of the Iridium network into the Iridium Next constellation, and through research and development conducted mainly by Boeing, Satelles, and Iridium, substantial results have been obtained across different coupling and filtering schemes, demonstrating PNT resiliency and reliability with and without GPS or GNSS signals (Cobb et alia, Additional Resources). It has also been shown that GNSS signals are of the utmost importance not only for positioning and navigation, but also for timing and synchronization in sectors such as telecommunications and sensor networks, all of which require a resilient PNT solution. In particular, assuming a denied GNSS environment, we propose an alternative PNT, namely a Position Determination Solution (PDS), based on the Iridium Next LEO constellation.
This solution applies directly to stationary positioning and extends to dynamic positioning, oceanic flight tracking, and aircraft navigation safety in uncovered areas during emergency and distress situations. We therefore consider a general design, although the application is demonstrated for a stationary radio beacon using the Iridium Next signal of opportunity (SoOp).
Position Determination Solution Using LEO Satellites
In many navigation applications, positioning systems can be unavailable or non-resilient, particularly inside denied or uncovered regions or restricted areas. An interesting question therefore arises: how can we maintain PDS information delivering range, range rate, position, azimuth, and timing anytime, anywhere on the globe? This article addresses that problem, motivated by the 2014 loss of Malaysia Airlines flight MH370 over the Indian Ocean. Since that incident, ICAO has directed considerable effort and expert involvement from the aviation and aerospace industry toward making flights safer through the integration of additional PDS, ensuring safety, tracking, and communication in all conditions. Unlike GNSS and other navigation aids, the solution the authors propose uses selected low-latency LEO satellites (see Figure 1 and Figure 2) with reliable positioning capabilities. Moreover, these augmented systems are resilient PNT/PDS solutions and can be integrated as an additional aid to redundant inertial navigation on board vehicles, on-body systems, multiple vessels, aircraft, etc. To address this challenging problem, the Iridium Next satellite constellation is considered: its downlink signals significantly outperform GNSS signals in received power, compared with other constellation candidates such as Globalstar, Orbcomm, and Inmarsat, which are described, analyzed, and compared in other work by the authors and recently experimented with in the paper by Ardito et alia in Additional Resources.
It has been demonstrated that Iridium can deliver position information with a CEP radius of 1-2 km in the best cases (without advanced signal processing or data fusion), which can then be interpreted in state estimation as an important nonlinear inequality constraint, or as a good initialization. Moreover, based on the work of the authors (Bahrami, also Benzerrouk et alia, 2018), Iridium Next (previously Iridium) has demonstrated a clear capability to compute an instantaneous position using a single LEO satellite, whereby each connected terminal can be detected and located with acceptable accuracy (1-10 km) by the satellite operator; see Figure 6. Test results are shown and discussed later in this paper. In our analysis, an experimental Iridium modem, the 9523N transceiver, was used as an alternative PNT/PDS solution in denied GNSS signal conditions (GPS, GLONASS, Galileo, and BeiDou). Using the 9523N development board can significantly reduce the time to first positioning results via the AT commands provided by the operator, making global flight tracking and air traffic management continuity possible with an Iridium Next resilient PNT solution. The accuracy is limited, however, and additional signal processing becomes necessary when considering Iridium Next as a SoOp.
Doppler Based Positioning Using Iridium Next
Over the last several decades, analyses have reported on the feasibility of integrated satellite-Doppler/INS systems based on Kalman filtering for general vehicle navigation problems and applications (see Benzerrouk et alia and Lopez et alia, Additional Resources). This fusion is considered a resilient integrated navigation approach, especially when using satellite Doppler measurements in GNSS-denied environments. Different numerical methods for frequency shift estimation have been developed and introduced to the positioning community; see Figure 6 to Figure 8. As an example, we compared past and present INS/GPS architectures to the models developed for Doppler-aided INS integrated systems. Earlier, positioning methods were developed for TRANSIT, COSPAS-SARSAT, and ARGOS; more recently, Globalstar, Iridium, and Orbcomm have been investigated (Ardito et alia, also Morales et alia). In this article, close attention is devoted to the Iridium Next satellite constellation, whose upgrade began in 2017 and was completed by early January 2019. With more than 77 new satellites, Iridium Next brings the total number in the sky to more than 140 LEO satellites including the previous spacecraft (most of which remained active during the constellation upgrade but have been deactivated since January 2019).
It is important to mention that the geometry and visibility of LEO satellites differ from those of GNSS constellations and offer serious advantages that deserve attention and deep analysis. Another attractive characteristic of Iridium Next is its efficient communication protocol, Short Burst Data (SBD), widely used in flight tracking applications to transmit burst messages, GPS data, or flight data to a ground server through the Iridium gateways. The Iridium Next constellation is thus an attractive and significantly resilient PNT/PDS candidate for many reasons, one of the most important being that Iridium Next LEO satellites fly a circular orbit at 780 km altitude, which makes their signals 300 to 2400 times stronger than GNSS signals. The User Terminal (UT), or modem, can receive signals from visible satellites over the horizon and select the closest one for communication and registration on the network. The main downlink signals transmitted by the Iridium Next satellites are the following:
the STL signal, the Ring Alert, and the primary message (MSG), transmitted respectively on 1626.27 MHz, 1626.104 MHz, and 1626.43 MHz, as can be clearly seen in Figure 17 to Figure 21. In addition, the latency has been reduced from 700 ms to 100 ms, which places Iridium Next among the top potential PNT solutions for remote control, telemetry, unmanned systems monitoring, and positioning, and supports the development of Doppler shift estimation and other methods based on range, range rate, and angle-of-arrival estimation from LEO satellites. To the best of the authors' knowledge, the first demonstration of an instantaneous positioning algorithm using one or two LEO satellites without navigation messages transmitted to the transceiver was developed by Levanon (see Levanon and Lopez et alia in Additional Resources), with extended work covering two LEO satellites and, in particular, a deterministic range-rate error analysis; see Figure 6.
To better understand the problem statement, readers can refer to recent work using Globalstar and Orbcomm signals, transmitted from constellations of 48 and 51 LEO satellites at altitudes of 1400 km and 740-975 km respectively (Morales et alia, Additional Resources). Other methods have also been developed, such as the intensive work by Lopez et alia on Doppler positioning using Kalman filter methods instead of least squares for both static and dynamic positioning with ARGOS LEO Doppler signals. In the paper by Morales et alia, the authors also used Orbcomm LEO satellite Doppler shifts to aid an inertial integrated navigation system, implemented on an SDR (USRP E312) and tested on board a multi-rotor UAV during real flight, with very attractive and optimistic results. In the present study, one can observe that the Doppler shift is exactly proportional, and opposite in sign, to the range rate, by comparing the Doppler shift and range rate curves such as those in Figure 5. It is also important to pay attention to Iridium-based instantaneous positioning using range and range-error modelling, as in both papers by Levanon and presented in Figure 6. There, one can observe an along-track error of between 400 and 1200 meters, which is an interesting preliminary result. Readers seeking a better understanding of the theoretical models should refer to those developed in [Levanon 99 and Xi et al. 2016]. In this challenging environment without GNSS, the most critical point is to accurately estimate the Iridium Next downlink frequency for Doppler shift analysis and estimation. This requires mathematical models and algorithms for estimating the Doppler frequency as well as the Doppler rate, equivalent respectively to the range rate and the range acceleration.
Before describing the theory and practice of this PNT solution, a brief discussion of pioneers in this field is necessary, notably the Boeing Company and the Stanford GPS Laboratory, for their intensive work on Iridium-augmented GPS receivers between 2009 and 2011, then called BTL (Boeing Timing and Location), using the encrypted downlink Iridium signal on the pager frequency of 1626.104 MHz and presently owned and developed by Satelles. Very attractive results were shown by specialists during various indoor and outdoor tests providing Iridium-only positioning as well as Iridium-augmented GPS in urban canyon environments. The solution, now called STL (Satellite Time and Location), is already operational on evaluation boards developed by the same company.
Background and Advances in STL Positioning and Timing
Satelles has obtained and demonstrated impressive results using this specific downlink Iridium signal as a resilient PNT solution. In our case, we do not restrict the Iridium signal to one downlink frequency; we investigate the full downlink bandwidth, received first by the Iridium 9523N development board and then by an Ettus Research USRP E310 software defined radio, to process Iridium Next as a SoOp.
In our concept, the geolocation of the Iridium transceiver or USRP E310 can be cast as positioning from multiple range-rate measurements, solved by a least squares algorithm or by a Kalman filter. Later in our tests, we use the USRP E310 software defined radio to capture the Iridium downlink signal and record I/Q samples for Doppler estimation, using algorithms described in the paper by Su et alia, Additional Resources.
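As an illustration of the Doppler measurement step, the sketch below estimates the dominant carrier offset in a block of I/Q samples with an FFT peak search refined by parabolic interpolation. This is a generic estimator, not the specific algorithm of Su et alia, and the sample rate and tone are synthetic.

```python
import numpy as np

def estimate_doppler(iq, fs):
    """Estimate the dominant carrier offset (Hz) in a block of I/Q samples
    via an FFT peak search refined by parabolic interpolation."""
    n = len(iq)
    spec = np.fft.fftshift(np.fft.fft(iq * np.hanning(n)))
    mag = np.abs(spec)
    k = int(np.argmax(mag))
    if 0 < k < n - 1:  # sub-bin refinement around the peak
        a, b, c = mag[k - 1], mag[k], mag[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)
    return (k - n / 2) * fs / n

# Synthetic check: a tone 7.3 kHz off the carrier, 250 kHz sample rate.
fs = 250e3
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
iq = (np.exp(2j * np.pi * 7300.0 * t)
      + 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)))
f_hat = estimate_doppler(iq, fs)
```

In practice the estimate would be formed on short bursts gated to a detected Iridium downlink channel and tracked over the pass to build the Doppler curve.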
Iridium Next Satellite Doppler and Visibility Analysis
In this subsection, an analytical development briefly introduces the LEO circular orbit parameters and the Doppler equation with satellite visibility, applied to Iridium Next satellites for different elevation angles (see Figure 10). Considering the parameters described in Figure 8, one can define the slant range mathematically, as in Fossa et alia, as given below:
Let us define t0 as the time when the Iridium terminal (modem) observes the Iridium satellite SVi at the maximum elevation angle θmax, and define the zero-Doppler point as the measured null Doppler at this instant. Following the development of Chen et alia, one can derive the time derivative of the slant range as given below:
As shown in Figure 8, we can demonstrate that:
which can be used to differentiate the slant range and obtain the following equation. From this development, and from the following condition (satisfied by the Pythagorean theorem), it is possible to write:
Observation: by introducing the Iridium satellites' angular velocity expressed in the ECF (Earth-Centred Fixed) frame, it is possible to determine the normalized Doppler formula given in the following equation:
In this work, two different Iridium receivers, the 9523N transceiver and the USRP E310 software defined radio with an omnidirectional Iridium patch antenna (RST-720), were used for data collection and signal processing.
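The pass geometry just described can be sketched numerically. The snippet below assumes a circular orbit and a spherical Earth and neglects Earth rotation, so the values are illustrative rather than exact; zero Doppler occurs at the time of maximum elevation, as defined above.

```python
import numpy as np

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
RE = 6_371_000.0         # mean Earth radius, m
C  = 299_792_458.0       # speed of light, m/s

def pass_doppler(h, fc, t):
    """Slant range and Doppler over an overhead pass of a circular-orbit
    satellite. t = 0 is the time of closest approach (maximum elevation,
    zero Doppler). Earth rotation is neglected for simplicity."""
    r = RE + h
    w = np.sqrt(MU / r**3)        # orbital angular rate, rad/s
    alpha = w * t                 # central angle from the zenith point
    d = np.sqrt(RE**2 + r**2 - 2 * RE * r * np.cos(alpha))   # slant range
    d_dot = RE * r * w * np.sin(alpha) / d                   # range rate
    f_d = -fc * d_dot / C                                    # Doppler shift
    return d, f_d

t = np.linspace(-300.0, 300.0, 601)          # +/- 5 minutes around closest approach
d, f_d = pass_doppler(780e3, 1626.104e6, t)  # 780 km orbit, one Iridium downlink frequency
```

For a 780 km orbit and the 1626.104 MHz downlink, this model gives Doppler excursions of roughly +/- 35 kHz over a ten-minute pass.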
Instantaneous One Iridium Next Satellite Based Positioning
Before developing the state space model and information fusion based on multiple Doppler range and range-rate measurements, it is interesting to analyse the potential error of the instantaneous positioning algorithms developed by Chen et alia (with Chinese LEO satellites) and by Levanon, using one and then two LEO Globalstar satellites respectively. From Figure 11, one can observe that for a short observation period between the Iridium modem and the Iridium satellite (1-7 minutes), and again according to Chen et alia and Levanon, it is possible to approximate the trajectory of the satellite as a straight line instead of a circular orbit. Based on this approximation, the range and range-rate measurements can be computed using the following equations:
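Under this short-arc approximation, with t0 the time of closest approach, rho0 the minimum slant range, and v the (constant) satellite speed, the standard forms are rho(t) = sqrt(rho0^2 + v^2 (t - t0)^2) and rho_dot(t) = v^2 (t - t0) / rho(t) (after Levanon; the notation here is ours). They can be checked numerically:

```python
import numpy as np

def linear_pass_model(t, t0, rho0, v):
    """Range and range rate for a satellite modeled as moving in a straight
    line at constant speed v, with closest approach rho0 at time t0
    (the short-arc approximation used for 1-7 minute observation windows)."""
    dt = t - t0
    rho = np.sqrt(rho0**2 + (v * dt)**2)   # slant range
    rho_dot = v**2 * dt / rho              # range rate (zero at t0)
    return rho, rho_dot

# Sanity check: range rate should match the numerical derivative of range.
t = np.linspace(0.0, 420.0, 4201)          # a 7-minute arc, 0.1 s steps
rho, rho_dot = linear_pass_model(t, 210.0, 900e3, 7450.0)
num_deriv = np.gradient(rho, t)
```

The zero crossing of the range rate locates t0, which is the basis of the instantaneous positioning schemes cited above.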
Important note: it is possible to explore recent approaches based on TDOA/FDOA measurements between a fixed reference station and a second mobile transceiver in order to estimate position, ensure navigation, and provide resilient timing information, as proposed by Gavin, Additional Resources. To better understand our approach and results, Figure 9 and Figure 10 show real data from multiple Iridium Next LEO satellites: real orbit tracking during a ten-minute pass, with real Doppler curves measured and estimated by the USRP E310 software defined radio at the LASSENA laboratory on the roof of ÉTS in Montreal (see Figure 12).
Thus, the cross-track and along-track error variances are given by (Hsu et alia, and Levanon, Additional Resources):
These equations are obtained from the observation Matrix H and the Weight Matrix W.
where the following parameters are described:
- Std: standard deviation
- the pseudorange-rate standard deviation
- the pseudorange standard deviation
- the cross-track standard deviation
- the along-track standard deviation
- the pseudorange of the ith satellite
- the pseudorange rate of the ith satellite
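These variances follow from the weighted least-squares covariance (H^T W H)^-1. The sketch below evaluates it for an illustrative geometry; the H matrix and noise level are invented for the example, not taken from the paper.

```python
import numpy as np

def wls_covariance(H, meas_std):
    """Weighted least-squares error covariance (H^T W H)^-1,
    with W the inverse of the diagonal measurement covariance."""
    W = np.diag(1.0 / np.asarray(meas_std)**2)
    return np.linalg.inv(H.T @ W @ H)

# Illustrative 2D geometry: rows are measurement gradients resolved into
# (cross-track, along-track) components at three epochs of a pass.
H = np.array([
    [0.9, -0.44],
    [1.0,  0.02],
    [0.9,  0.44],
])
P = wls_covariance(H, meas_std=[0.5, 0.5, 0.5])   # 0.5 m/s range-rate std
sigma_cross = np.sqrt(P[0, 0])
sigma_along = np.sqrt(P[1, 1])
```

In this example the along-track standard deviation comes out larger than the cross-track one, reflecting the weaker observability of the along-track component.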
In this problem, no consideration was given to the altitude of the user terminal (modem) or its impact on geolocation estimation using Doppler shift detection and computation. The effect of altitude will be the subject of further analysis and experimental work by the authors.
Mathematical Problem Formulation: the problem is closely related to earlier and more recent Doppler-based positioning analyses using GNSS signals by the authors in Knogl et alia, and Benzerrouk and Nebylov. The goal is to determine the position of the SDR USRP E310 receiving Iridium downlink signals on different frequencies, converging to the true position defined by the vector r_usrp = [x_usrp, y_usrp, z_usrp]^T. It is extremely difficult to measure the Doppler shift with the precision required for positioning, especially without a very accurate timing source. We demonstrate, however, the possibility of measuring Doppler shifts for the following three Iridium Next signals: STL, Ring Alert, and primary message, as described in Figure 16 and Figure 17. Another technique using double ranges could be adopted to eliminate the clock offset (Soualle et alia).
The Doppler Shift can be modelled with the following product equation:
Let us define Vi, i = 1..n, as the satellite relative velocity vectors, assumed known (predicted with the GPredict software; see Figure 3), and ri, i = 1..n, as the satellite position vectors, also assumed known; the delta ranges for i = 1..n are then fully defined by the following equation (see Ashton et alia and Levanon 1999):
Unlike GPS, one of the main problems is that the positions of the Iridium Next satellites are not known precisely. To estimate them, we integrate TLE/SGP4 file updates in order to predict the orbit and position of Iridium SVi at each time a Doppler shift is measured from the received signal. We also use the GPredict software, based on TLE files and the SGP4 model, to initialize the EKF later (Morales et alia, Additional Resources).
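Given a predicted satellite state from TLE/SGP4, the corresponding Doppler prediction follows directly from projecting the relative velocity onto the line of sight. The sketch below assumes a static user; the coordinates and downlink frequency are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def doppler_shift(r_sat, v_sat, r_user, fc):
    """Predicted Doppler shift (Hz) of a downlink carrier fc, from the
    satellite position/velocity and a static user position, all expressed
    in a common Earth-centred frame. Positive when the satellite closes."""
    los = r_user - r_sat
    rho = np.linalg.norm(los)
    rho_dot = -np.dot(v_sat, los / rho)    # range rate; user assumed static
    return -fc * rho_dot / C

# Example: an Iridium-like satellite 780 km up, slightly past the user
# and receding along +y at ~7.45 km/s, so the Doppler is negative.
r_user = np.array([6_371e3, 0.0, 0.0])
r_sat  = np.array([7_151e3, 500e3, 0.0])
v_sat  = np.array([0.0, 7_450.0, 0.0])
f_d = doppler_shift(r_sat, v_sat, r_user, 1626.104e6)
```

Differencing such predictions against measured Doppler yields the residuals that drive the estimators described next.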
GPredict can thus play an important role in initializing the Extended Kalman Filter or the Levenberg-Marquardt solver using TLE prediction. During the estimation of the transceiver location, however, we augment the state vector with the position of the Iridium satellite in order to increase the accuracy of the positioning algorithm during navigation phases applied to a ground vehicle, UAV, or aircraft (Morales et alia).
It is thus possible to develop the following differential equations for position determination using nonlinear least squares algorithms, as described in the paper by Bahrami in Additional Resources:
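A minimal sketch of such a nonlinear least squares solution is given below: a plain Gauss-Newton iteration over Doppler-derived range rates in a simplified 2D geometry with a numerical Jacobian. The satellite states, measurement model, and starting point are illustrative, not the authors' exact formulation (a Levenberg-Marquardt damping term could be added as in Gavin).

```python
import numpy as np

def range_rates(x_user, sat_pos, sat_vel):
    """Predicted range rates (m/s) from a static user at x_user to each
    satellite with known position/velocity rows sat_pos, sat_vel."""
    los = x_user - sat_pos
    rho = np.linalg.norm(los, axis=1)
    return -np.sum(sat_vel * los, axis=1) / rho

def gauss_newton(x0, sat_pos, sat_vel, meas, iters=15):
    """Fit the user position to measured range rates by Gauss-Newton,
    using a forward-difference Jacobian (1 m step)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        pred = range_rates(x, sat_pos, sat_vel)
        J = np.empty((len(meas), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x)
            dx[j] = 1.0
            J[:, j] = range_rates(x + dx, sat_pos, sat_vel) - pred
        x = x + np.linalg.lstsq(J, meas - pred, rcond=None)[0]
    return x

# Synthetic 2D scenario: four satellites with diverse geometry.
true_x  = np.array([100e3, -50e3])
sat_pos = np.array([[900e3, 0.0], [0.0, 900e3], [-700e3, 600e3], [500e3, -800e3]])
sat_vel = np.array([[0.0, 7450.0], [7450.0, 0.0], [5000.0, 5000.0], [-7450.0, 0.0]])
meas  = range_rates(true_x, sat_pos, sat_vel)       # noise-free measurements
x_hat = gauss_newton([0.0, 0.0], sat_pos, sat_vel, meas)
resid = np.max(np.abs(meas - range_rates(x_hat, sat_pos, sat_vel)))
```

With noise-free synthetic measurements from four satellites, the iteration recovers a position consistent with the measurements.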
The techniques described previously have been applied to multiple LEO constellations such as Globalstar, ARGOS, COSPAS-SARSAT, and Orbcomm. In this paper, our original approach is based on nonlinear state estimation, considering both a nonlinear least squares problem and an Extended Kalman Filter (EKF), together with the treatment of the Iridium Next LEO satellites as a mobile network. From each visible Iridium Next satellite, range and range-rate information can be computed, predicted, and estimated in real time or in post-processing.
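The EKF measurement update at the core of this approach can be sketched as a generic scalar update (textbook form, not the authors' exact filter; in their formulation the state would also be augmented with the satellite position as noted above):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One scalar EKF measurement update.
    x: state estimate, P: covariance, z: measured value,
    h: predicted measurement, H: measurement Jacobian row, R: variance."""
    y = z - h                      # innovation
    S = H @ P @ H + R              # innovation variance (scalar)
    K = P @ H / S                  # Kalman gain
    x = x + K * y
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P

# Hand-checkable example: directly measuring the first state component.
x, P = ekf_update(np.zeros(2), np.eye(2), z=2.0, h=0.0,
                  H=np.array([1.0, 0.0]), R=1.0)
```

Applying this update sequentially with the range-rate model and its Jacobian with respect to position yields the filter used alongside the least squares solution.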
(1) Ardito, C. T., J. J. Morales, J. Khalife, A. A. Abdallah, Z. Kassas, and M Zaher, ”Performance Evaluation of Navigation Using LEO Satellite Signals with Periodically Transmitted Satellite Positions,” Proceedings of the 2019 International Technical Meeting of The Institute of Navigation, Reston, Virginia, January 2019, pp. 306-318. https://doi.org/10.33012/2019.16743.
(2) Ashton, C., A. Shuster Bruce, G. Colledge, and M. Dickinson (2015). The Search for MH370. Journal of Navigation, 68(1), 1-22. doi:10.1017/S037346331400068.
(3) Bahrami, M., GNSS Doppler Positioning (an Overview), University College London, Geomatics Lab, a paper prepared for the GNSS SIG Technical Reading Group, 12 p., 2008.
(4) Benzerrouk, H., A. Nebylov, M. Li, Multi-UAV Doppler Information Fusion for Position Determination Based on Distributed High Degrees Information Filters. Aerospace 2018, 5, 28.
(5) Benzerrouk, H., and A. Nebylov, "Robust nonlinear filtering applied to integrated navigation system INS/GNSS under non Gaussian measurement noise effect," 2012 IEEE Aerospace Conference, Big Sky, MT, 2012, pp. 1-8.
(6) Chen, Xi et alia. “Analysis on the Performance Bound of Doppler Positioning Using One LEO Satellite.” 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring) (2016): 1-5.
(7) Cobb, S., D. Lawrence, G. Gutt, M. O’Connor, “Differential and Rubidium Disciplined Test Results from an Iridium-Based Secure Timing Solution,” Proceedings of the 2017 International Technical Meeting of The Institute of Navigation, Monterey, California, January 2017, pp. 1111-1116.
(8) Gavin, H. P., The Levenberg-Marquardt method for nonlinear least squares curve-fitting problems (2013).
(9) Hsu, Wu-Hung and Shau-Shiun Jan. Assessment of using Doppler shift of LEO satellites to aid GPS positioning. 2014 IEEE/ION Position, Location and Navigation Symposium—PLANS 2014 (2014): 1155-1161.
(10) Knogl, J. S., P. Henkel, and C. Günther, Precise positioning of a geostationary data relay using LEO satellites. Proceedings ELMAR-2011 (2011): 325-328.
(11) Lawrence, D., H. Stewart Cobb, G. Gutt, F. Tremblay, P. Laplante, M. O'Connor, "Test Results from a LEO-Satellite-Based Assured Time and Location Solution," Proceedings of the 2016 International Technical Meeting of The Institute of Navigation, Monterey, California, January 2016, pp. 125-129.
(12) Levanon, N., "Quick position determination using 1 or 2 LEO satellites," in IEEE Transactions on Aerospace and Electronic Systems, vol. 34, no. 3, pp. 736-754, Jul 1998.
(13) Levanon, N. (1999), Instant Active Positioning with One LEO Satellite. Navigation, 46: 87-95. doi: 10.1002/j.2161-4296.1999.tb02397.x.
(14) Lopez, R., J. Malardé, F. Royer and P. Gaspar, "Improving Argos Doppler Location Using Multiple-Model Kalman Filtering," in IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 8, pp. 4744-4755, Aug. 2014. doi: 10.1109/TGRS.2013.2284293.
(15) Morales, J., J. Khalife, A. Abdallah, C. Ardito, and Z. Kassas, Inertial navigation system aiding with Orbcomm LEO satellite Doppler measurements. ION Global Navigation Satellite Systems Conference, Sep. 24-28, 2018, Miami, FL.
(16) Reid, T., A. M. Neish, T. Walter, P. K. Enge, “Broadband LEO Constellations for Navigation”, NAVIGATION, Journal of The Institute of Navigation, Vol. 65, No. 2, Summer 2018, pp. 205-220
(17) Winger, D. J., "Error analysis of an integrated inertial/Doppler-satellite navigation system with continuous and multiple satellite coverage" (1971). Retrospective Theses and Dissertations. 4864.
(19) http://gpsworld.com/iridium-constellation-provides-low-earth-orbit-satnav-service
(20) Su, Y. T. and Ru-Chwen Wu, "Frequency acquisition and tracking in high dynamic environments," in IEEE Transactions on Vehicular Technology, vol. 49, no. 6, pp. 2419-2429, Nov. 2000.
(21) Soualle, F., "Perspectives of PNT Services Supported by Mega-Constellations," Airbus DS (Germany), International Technical Symposium on Navigation and Timing (ITSNT), ENAC, Toulouse, 14 November 2018.
(22) Fossa, C. E., R. A. Raines, G. H. Gunsch and M. A. Temple, "An overview of the IRIDIUM (R) low Earth Orbit (LEO) satellite system," Proceedings of the IEEE 1998 National Aerospace and Electronics Conference. NAECON 1998. Celebrating 50 Years. (Cat. No.98CH36185), Dayton, OH, USA, 1998, pp. 152-159.
Hamza Benzerrouk has been a Postdoctoral Fellow at ÉTS-LASSENA since July 2018. He holds a PhD in Applied Mathematics from Saint Petersburg State University (SPbU) and is a candidate of science in aerospace instrumentation at the Saint Petersburg State University of Aerospace Instrumentation (GUAP), Russia. He has accumulated experience in satcom systems for COMNAV applications and has more than eight years of experience in aeronautics (DO-160), satellite transmission engineering, and telecommunication systems. His fields of scientific expertise are information fusion, target tracking, Kalman filtering, Bayesian filtering, inertial navigation, GNSS, and non-Gaussian signal processing.
Xiaoxing Fang is a Postdoctoral Research Fellow in the Department of Electrical Engineering at the École de Technologie Supérieure (ÉTS). She received her Bachelor degree in automation (2002), her Master degree (2006), and her Ph.D. degree (2016) in Navigation, Guidance and Control from Beihang University (P.R. China). She also worked as a GNC engineer in the Research Institute of Unmanned Aerial Vehicles at Beihang University, which brings her over 10 years of design and development experience in aerospace engineering, especially in flight control law, flight management systems, and flight simulation.
René Jr Landry is a professor in the department of electrical engineering at École de technologie supérieure (ÉTS) and the Director of the LASSENA Laboratory. His expertise in embedded systems, navigation, and avionics is applied notably in the fields of transport, aeronautics, and space technologies. He graduated in 1992 from École Polytechnique de Montréal in the Space Technologies program of the electrical engineering department, obtained a Master of Science in Satellite Communication Engineering at the University of Surrey (Guildford, United Kingdom) in 1993, a Master in space electronics and a DEA in microwaves at ISAE (Toulouse, France) in 1994, and his PhD on GPS anti-jamming technologies in collaboration with Thales, the French civil aviation authority (DGAC), and ESA (Noordwijk). He joined ÉTS in 1999 after a year as a postdoctoral fellow at the French National Space Agency (CNES, Toulouse).
Abdessamad Amrhar received a Master of Applied Science degree in electrical engineering from École de Technologie Supérieure of Montréal (Canada). As a member of the LASSENA laboratory, he worked on the design and implementation of a multi-mode software defined radio system for avionics. His research interests are software defined radios, embedded systems, avionics, and digital communication.
Anh-Quang Nguyen received his B.Eng. in Aerospace Engineering from Bach Khoa University (Viet Nam) in 2015. From 2016 to 2018, he was a Master's student at École de Technologie Supérieure (Montreal, Quebec, Canada) and worked on the AVIO-505 project at LASSENA. Since 2018, he has worked at SII Canada on an R&D project related to the application of UAVs in Search and Rescue missions.
Hamza Rasaee received his Bachelor of Science degree in Electronic and Computer Engineering from Babol Noshirvani University of Technology. He is now pursuing his Master of Science in Electrical Engineering at École de Technologie Supérieure (ÉTS). In 2019, he joined the LASSENA group, where he works on the clock drift issue in the ibNav project. His expertise is in embedded systems programming and in electronic board design and programming for different operating systems.
In science, as in all professions, some people try to cheat the system. Charles Dawson was one of those people – an amateur British archaeologist and paleontologist born in 1864. By the late nineteenth century, Dawson had made a number of seemingly important fossil discoveries. Not prone to modesty, he named many of his newly discovered species after himself. For example, Dawson found fossil teeth of a previously unknown species of mammal, which he subsequently named Plagiaulax dawsoni. He named one of three new species of dinosaur he found Iguanodon dawsoni and a new form of fossil plant Salaginella dawsoni. His work brought him considerable fame: He was elected a fellow of the British Geological Society and appointed to the Society of Antiquaries of London. The British Museum conferred upon him the title of Honorary Collector, and the English newspaper The Sussex Daily News dubbed him the "Wizard of Sussex."
His most famous discovery, however, came in late 1912, when Dawson showed off parts of a human-looking skull and jawbone to the public and convinced scientists that the fossils were from a new species that represented the missing link between man and ape. Dawson's "Piltdown Man," as the find came to be known, made quite an impact, confounding the scientific community for decades, long after Dawson's death in 1915. Though a few scientists doubted the find from the beginning, it was largely accepted and admired.
In 1949, Kenneth Oakley, a professor of anthropology at Oxford University, dated the skull using a newly available fluorine absorption test and found that it was 500 years old rather than 500,000. Yet even Oakley continued to believe that the skull was genuine, but simply dated incorrectly. In 1953, Joseph Weiner, a student in physical anthropology at Oxford University, attended a paleontology conference and began to realize that Piltdown Man simply did not fit with other human ancestor fossils. He communicated his suspicion to his professor at Oxford, Wilfred Edward Le Gros Clark, and they followed up with Oakley. Soon after, the three realized that the skull did not represent the missing link, but rather an elaborate fraud in which the skull of a medieval human was combined with the jawbone of an orangutan and the teeth of a fossilized chimpanzee. The bones were chemically treated to make them look older, and the teeth had even been hand filed to make them fit with the skull. In the wake of this revelation, at least 38 of Dawson's finds have been found to be fakes, created in his pursuit of fame and recognition.
Advances in science depend on the reliability of the research record, so thankfully, hucksters and cheats like Dawson are the exception rather than the norm in the scientific community. But cases like Dawson's play an important role in helping us understand the system of scientific ethics that has evolved to ensure reliability and proper behavior in science.
The role of ethics in science
Ethics is a set of moral obligations that define right and wrong in our practices and decisions. Many professions have a formalized system of ethical practices that help guide professionals in the field. For example, doctors commonly take the Hippocratic Oath, which, among other things, states that doctors "do no harm" to their patients. Engineers follow an ethical guide that states that they "hold paramount the safety, health, and welfare of the public." Within these professions, as well as within science, the principles become so ingrained that practitioners rarely have to think about adhering to the ethic – it's part of the way they practice. And a breach of ethics is considered very serious, punishable at least within the profession (by revocation of a license, for example) and sometimes by the law as well.
Scientific ethics calls for honesty and integrity in all stages of scientific practice, from reporting results regardless to properly attributing collaborators. This system of ethics guides the practice of science, from data collection to publication and beyond. As in other professions, the scientific ethic is deeply integrated into the way scientists work, and they are aware that the reliability of their work and scientific knowledge in general depends upon adhering to that ethic. Many of the ethical principles in science relate to the production of unbiased scientific knowledge, which is critical when others try to build upon or extend research findings. The open publication of data, peer review, replication, and collaboration required by the scientific ethic all help to keep science moving forward by validating research findings and confirming or raising questions about results (see our module Scientific Literature for further information).
Some breaches of the ethical standards, such as fabrication of data, are dealt with by the scientific community through means similar to ethical breaches in other disciplines – removal from a job, for example. But less obvious challenges to the ethical standard occur more frequently, such as giving a scientific competitor a negative peer review. These incidents are more like parking in a no parking zone – they are against the rules and can be unfair, but they often go unpunished. Sometimes scientists simply make mistakes that may appear to be ethical breaches, such as improperly citing a source or giving a misleading reference. And like any other group that shares goals and ideals, the scientific community works together to deal with all of these incidents as best as they can – in some cases with more success than others.
Ethical standards in science
Scientists have long maintained an informal system of ethics and guidelines for conducting research, but documented ethical guidelines did not develop until the mid-twentieth century, after a series of well-publicized ethical breaches and war crimes. Scientific ethics now refers to a standard of conduct for scientists that is generally delineated into two broad categories (Bolton, 2002). First, standards of methods and process address the design, procedures, data analysis, interpretation, and reporting of research efforts. Second, standards of topics and findings address the use of human and animal subjects in research and the ethical implications of certain research findings. Together, these ethical standards help guide scientific research and ensure that research efforts (and researchers) abide by several core principles (Resnik, 1993), including:
- Honesty in reporting of scientific data;
- Careful transcription and analysis of scientific results to avoid error;
- Independent analysis and interpretation of results that is based on data and not on the influence of external sources;
- Open sharing of methods, data, and interpretations through publication and presentation;
- Sufficient validation of results through replication and collaboration with peers;
- Proper crediting of sources of information, data, and ideas;
- Moral obligations to society in general, and, in some disciplines, responsibility in weighing the rights of human and animal subjects.
Ethics of methods and process
Scientists are human, and humans don't always abide by the law. Understanding some examples of scientific misconduct will help us to understand the importance and consequences of scientific integrity. In 2001, the German physicist Jan Hendrik Schön briefly rose to prominence for what appeared to be a series of breakthrough discoveries in the area of electronics and nanotechnology. Schön and two co-authors published a paper in the journal Nature, claiming to have produced a molecular-scale alternative to the transistor (Figure 2) used commonly in consumer devices (Schön et al., 2001). The implications were revolutionary – a molecular transistor could allow the development of computer microchips far smaller than any available at the time. As a result, Schön received a number of outstanding research awards and the work was deemed one of the "breakthroughs of the year" in 2001 by Science magazine.
However, problems began to appear very quickly. Scientists who tried to replicate Schön's work were unable to do so. Lydia Sohn, then a nanotechnology researcher at Princeton University, noticed that two different experiments carried out by Schön at very different temperatures and published in separate papers appeared to have identical patterns of background noise in the graphs used to present the data (Service, 2002). When confronted with the problem, Schön initially claimed that he had mistakenly submitted the same graph with two different manuscripts. However, soon after, Paul McEuen of Cornell University found the same graph in a third paper. As a result of these suspicions, Bell Laboratories, the research institution where Schön worked, launched an investigation into his research in May 2002. When the committee heading the investigation attempted to study Schön's notes and research data, they found that he kept no laboratory notebooks, had erased all of the raw data files from his computer (claiming he needed the additional storage space for new studies), and had either discarded or damaged beyond recognition all of his experimental samples. The committee eventually concluded that Schön had altered or completely fabricated data in at least 16 instances between 1998 and 2001. Schön was fired from Bell Laboratories on September 25, 2002, the same day they received the report from the investigating committee. On October 31, 2002, the journal Science retracted eight papers authored by Schön; on December 20, 2002, the journal Physical Review retracted six of Schon's papers, and on March 5, 2003, Nature retracted seven that they had published.
These actions – retractions and firing – are the means by which the scientific community deals with serious scientific misconduct. In addition, he was banned from working in science for eight years. In 2004, the University of Konstanz in Germany where Schön received his PhD, took the issue a step further and asked him to return his doctoral papers in an effort to revoke his doctoral degree. In 2014, after several appeals, the highest German court upheld the right of the university to revoke Schön's degree. At the time of the last appeal, Schön had been working in industry, not as a research scientist, and it is unlikely he will be able to find work as a research scientist again. Clearly, the consequences of scientific misconduct can be dire: complete removal from the scientific community.
The Schön incident is often cited as an example of scientific misconduct because he breached many of the core ethical principles of science. Schön admitted to falsifying data to make the evidence of the behavior he observed "more convincing." He also made extensive errors in transcribing and analyzing his data, thus violating the principles of honesty and carefulness. Schön's articles did not present his methodology in a way such that other scientists could repeat the work, and he took deliberate steps to obscure his notes and raw data and to prevent the reanalysis of his data and methods. Finally, while the committee reviewing Schön's work exonerated his coauthors of misconduct, a number of questions were raised over whether they exhibited proper oversight of the work in collaborating and co-publishing with Schön. While Schön's motives were never fully identified (he continued to claim that the instances of misconduct could be explained as simple mistakes), it has been proposed that his personal quest for recognition and glory biased his work so much that he focused on supporting specific conclusions instead of objectively analyzing the data he obtained.
The first step toward uncovering Schon's breach of ethics was when other researchers
Ethics of topics and findings
Despite his egregious breach of scientific ethics, no criminal charges were ever filed against Schön. In other cases, actions that breach the scientific ethic also breach more fundamental moral and legal standards. One instance in particular, the brutality of Nazi scientists in World War II, was so severe and discriminatory that it led to the adoption of an international code governing research ethics.
During World War II, Nazi scientists launched a series of studies: some designed to test the limits of human exposure to the elements in the name of preparing German soldiers fighting the war. Notorious among these efforts were experiments on the effects of hypothermia in humans. During these experiments, concentration camp prisoners were forced to sit in ice water or were left naked outdoors in freezing temperatures for hours at a time. Many victims were left to freeze to death slowly while others were eventually re-warmed with blankets or warm water, or other methods that left them with permanent injuries.
At the end of the war, 23 individuals were tried for war crimes in Nuremberg, Germany, in relation to these studies, and 15 were found guilty (Figure 3). The court proceedings led to a set of guidelines, referred to as the Nuremberg Code, which limits research on human subjects. Among other things, the Nuremberg Code requires that individuals be informed of and consent to the research being conducted; the first standard reads, "The voluntary consent of the human subject is absolutely essential." The code also states that the research risks should be weighed in light of the potential benefits, and it requires that scientists avoid intentionally inflicting physical or mental suffering for research purposes. Importantly, the code also places the responsibility for adhering to the code on "each individual who initiates, directs or engages in the experiment." This is a critical component of the code that implicates every single scientist involved in an experiment – not just the most senior scientist or first author on a paper. The Nuremberg Code was published in 1949 and is still a fundamental document guiding ethical behavior in research on human subjects that has been supplemented by additional guidelines and standards in most countries.
Other ethical principles also guide the practice of research on human subjects. For example, a number of government funding sources limit or exclude funding for human cloning due to the ethical questions raised by the practice. Another set of ethical guidelines covers studies involving therapeutic drugs and devices. Research investigating the therapeutic properties of medical devices or drugs is stopped ahead of schedule if a treatment is found to have severe negative side effects. Similarly, large-scale therapeutic studies in which a drug or agent is found to be highly beneficial may be concluded early so that the control patients (those not receiving the effective drug or agent) can be given the new, beneficial treatment.
The Nuremberg Code holds __________ responsible for protecting human subjects.
Mistakes versus misconduct
Scientists are fallible and make mistakes – these do not qualify as misconduct. Sometimes, however, the line between mistake and misconduct is not clear. For example, in the late 1980s, a number of research groups were investigating the hypothesis that deuterium atoms could be forced to fuse together at room temperature, releasing tremendous amounts of energy in the process. Nuclear fusion was not a new topic in 1980, but researchers at the time were able to initiate fusion reactions only at very high temperatures, so low temperature fusion held great promise as an energy source.
Two scientists at the University of Utah, Stanley Pons and Martin Fleischmann, were among those researching the topic, and they had constructed a system using a palladium electrode and deuterated water to investigate the potential for low temperature fusion reactions. As they worked with their system, they noted excess amounts of heat being generated. Though not all of the data they collected was conclusive, they proposed that the heat was evidence for fusion occurring in their system. Rather than repeat and publish their work so that others could confirm the results, Pons and Fleischmann were worried that another scientist might announce similar results soon and hoped to patent their invention, so they rushed to publicly announce their breakthrough. On March 23, 1989, Pons and Fleischmann, with the support of their university, held a press conference to announce their discovery of "an inexhaustible source of energy."
The announcement of Pons' and Fleischmann's "cold fusion" reactor (Figure 4) caused immediate excitement in the press and was covered by major national and international news organizations. Among scientists, their announcement was simultaneously hailed and criticized. On April 12, Pons received a standing ovation from about 7,000 chemists at the semi-annual meeting of the American Chemical Society. But many scientists chastised the researchers for announcing their discovery in the popular press rather than through the peer-reviewed literature. Pons and Fleischmann eventually did publish their findings in a scientific article (Fleischmann et al., 1990), but problems had already begun to appear. The researchers had a difficult time showing evidence for the production of neutrons by their system, a characteristic that would have confirmed the occurrence of fusion reactions. On May 1, 1989, at a dramatic meeting of the American Physical Society less than five weeks after the press conference in Utah, Steven Koonin, Nathan Lewis, and Charles Barnes from Caltech announced that they had replicated Pons and Fleischmann's experimental conditions, found numerous errors in the scientists' conclusions, and further announced that they found no evidence for fusion occurring in the system. Soon after that, the US Department of Energy published a report that stated "the experimental results ...reported to date do not present convincing evidence that useful sources of energy will result from the phenomena attributed to cold fusion."
While the conclusions made by Pons and Fleischmann were discredited, the scientists were not accused of fraud – they had not fabricated results or attempted to mislead other scientists, but had made their findings public through unconventional means before going through the process of peer review. They eventually left the University of Utah to work as scientists in the industrial sector. Their mistakes, however, not only affected them but discredited the whole community of legitimate researchers investigating cold fusion. The phrase "cold fusion" became synonymous with junk science, and federal funding in the field almost completely vanished overnight. It took almost 15 years of legitimate research and the renaming of their field from cold fusion to "low energy nuclear reactions" before the US Department of Energy again considered funding well-designed experiments in the field (DOE SC, 2004).
When faulty research results from mistakes rather than deliberate fraud,
Everyday ethical decisions
Scientists also face ethical decisions in more common ways and everyday circumstances. For example, authorship on research papers can raise questions. Authors on papers are expected to have materially contributed to the work in some way and have a responsibility to be familiar with and provide oversight of the work. Jan Hendrik Schön's coauthors clearly failed in this responsibility. Sometimes newcomers to a field will seek to add experienced scientists' names to papers or to grant proposals to increase the perceived importance of their work. While this can lead to valuable collaborations in science, if those senior authors simply accept "honorary" authorship and do not contribute to the work, it raises ethical issues over responsibility in research publishing.
A scientist's source of funding can also potentially bias their work. While scientists generally acknowledge their funding sources in their papers, there have been a number of cases in which lack of adequate disclosure has raised concern. For example, in 2006 Dr. Claudia Henschke, a radiologist at the Weill Cornell Medical College, published a paper that suggested that screening smokers and former smokers with CT chest scans could dramatically reduce the number of lung cancer deaths (Henschke et al., 2006). However, Henschke failed to disclose that the foundation through which her research was funded was itself almost wholly funded by Liggett Tobacco. The case caused an outcry in the scientific community because of the potential bias toward trivializing the impact of lung cancer. Almost two years later, Dr. Henschke published a correction in the journal that provided disclosure of the funding sources of the study (Henschke, 2008). As a result of this and other cases, many journals instituted stricter requirements regarding disclosure of funding sources for published research.
Enforcing ethical standards
A number of incidents have prompted the development of clear and legally enforceable ethical standards in science. For example, in 1932, the US Public Health Service located in Tuskegee, Alabama, initiated a study of the effects of syphilis in men. When the study began, medical treatments available for syphilis were highly toxic and of questionable effectiveness. Thus, the study sought to determine if patients with syphilis were better off receiving those dangerous treatments or not. The researchers recruited 399 black men who had syphilis, and 201 men without syphilis (as a control). Individuals enrolled in what eventually became known as the Tuskegee Syphilis Study were not asked to give their consent and were not informed of their diagnosis; instead they were told they had "bad blood" and could receive free medical treatment (which often consisted of nothing), rides to the clinic, meals, and burial insurance in case of death in return for participating.
By 1947, penicillin appeared to be an effective treatment for syphilis. However, rather than treat the infected participants with penicillin and close the study, the Tuskegee researchers withheld penicillin and information about the drug in the name of studying how syphilis spreads and kills its victims. The unconscionable study continued until 1972, when a leak to the press resulted in a public outcry and its termination. By that time, however, 28 of the original participants had died of syphilis and another 100 had died from medical complications related to syphilis. Further, 40 wives of participants had been infected with syphilis, and 19 children had contracted the disease at birth.
As a result of the Tuskegee Syphilis Study and the Nuremberg Doctors' trial, the United States Congress passed the National Research Act in 1974. The Act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to oversee and regulate the use of human experimentation and defined the requirements for Institutional Review Boards (IRBs). As a result, all institutions that receive federal research funding must establish and maintain an IRB, an independent board of trained researchers who review research plans that involve human subjects to assure that ethical standards are maintained. An institution's IRB must approve any research with human subjects before it is initiated. Regulations governing the operation of the IRB are issued by the US Department of Health and Human Services.
Equally important, individual scientists enforce ethical standards in the profession by promoting open publication and presentation of methods and results that allow for other scientists to reproduce and validate their work and findings. Federal government-based organizations like the National Academy of Sciences publish ethical guidelines for individuals. An example is the book On Being a Scientist, which can be accessed via the Resources section (National Academy of Sciences, 1995). The US Office of Research Integrity also promotes ethics in research by monitoring institutional investigations of research misconduct and promoting education on the issue.
Ethics in science are similar to ethics in our broader society: They promote reasonable conduct and effective cooperation between individuals. While breaches of scientific ethics do occur, as they do in society in general, they are generally dealt with swiftly when identified and help us to understand the importance of ethical behavior in our professional practices. Adhering to the scientific ethic assures that data collected during research are reliable and that interpretations are reasonable and with merit, thus allowing the work of a scientist to become part of the growing body of scientific knowledge.
Ethical standards are a critical part of scientific research. Through examples of scientific fraud, misconduct, and mistakes, this module makes clear how ethical standards help ensure the reliability of research results and the safety of research subjects. The importance and consequences of integrity in the process of science are examined in detail.
Ethical conduct in science assures the reliability of research results and the safety of research subjects.
Ethics in science include: a) standards of methods and process that address research design, procedures, data analysis, interpretation, and reporting; and b) standards of topics and findings that address the use of human and animal subjects in research.
Replication, collaboration, and peer review all help to minimize ethical breaches, and identify them when they do occur. | <urn:uuid:3806bdae-6a6b-4311-89ed-3579d28d99dd> | CC-MAIN-2021-21 | https://visionlearning.com/en/library/Process-of-Science/49/Scientific-Ethics/161 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00614.warc.gz | en | 0.964555 | 4,770 | 3.59375 | 4 |
Special Educational Needs Policy
(Incorporating: Learning Support, Resource Allocation, Special Needs Assistants, Resource Teaching Policy and Assessment and Reporting Policy)
The Commons N. S. is a co-educational mainstream primary school catering for children from mixed social and cultural backgrounds. The purpose of this policy is to provide practical guidance for teachers, parents and other interested parties on the provision
of effective learning support to pupils experiencing low achievement and/or learning difficulties, as well as to fulfil our obligations under the Education Act 1998, the Equal Status Act 2000, the Education Welfare Act 2000 and the Education for Persons with Disabilities
Act 2004, to enable children with special educational needs to join in the normal activities of the school along with other children.
All our children have a right to an education that is appropriate
to them as individuals. We want all our children to feel that they are a valued part of our school community. As far as possible, it is our aim to minimise the difficulties that children may experience. We are dedicated to helping each child to achieve his/her
individual potential. The provision of a quality system of learning support and an inclusive curriculum is integral to this commitment.
We take into account the different backgrounds, experiences, interests and strengths that influence the
way in which children learn when we plan our approaches to teaching and learning throughout the school.
Description of current provision:
The school was provided a learning support service in June 1995 as part of
a five-school cluster. In 2005 it was provided with one base post and a general allocation of 1.00 under the terms of the General Allocation Model. In 2012 the allocation was reduced to 0.8 (sharing a 0.2 allocation with one other school). The school currently
has the following provisions to cater for children with Special Education Needs:
- One Shared Learning Support Teacher.
- One Shared Part-time Resource Teacher.
- One Special Needs Assistant.
The following categories of pupils are prioritised to receive supplementary teaching from the Learning Support Teacher and the Resource Teacher under the General Allocation Model outlined in DES Circular 02/05:
- Pupils whose achievement is at or below the 12th
percentile on standardized tests in English and Mathematics.
- Pupils with Learning difficulties, pupils with mild social or emotional difficulties and pupils with mild co-ordination or attention control difficulties associated with identified conditions.
- Pupils who have been identified as being in the low incidence categories (Appendix 1, Sp. Ed. Circular 02/05) will receive an individual allocation of support through the Special Educational Needs Organiser (SENO).
- Pupils who have special educational needs
arising from high incidence disabilities (borderline mild general learning disability and specific learning disability).
Implementation and Review
The implementation of this policy will be reviewed
every three years.
Ratification and Communication
This policy was ratified by the Board of Management of the Commons N.S. on 31/05/2016. A copy of the policy will be made available
to teachers, parents of Special Educational Needs pupils and other parents on request.
Learning Support Policy for The Commons National School
Beliefs and Principles:
‘The learning-support service is designed to ensure that all the pupils achieve basic literacy and numeracy by the time they complete their primary education.’ (Learning Support Guidelines 2000, p. 14).
The principles of Learning Support include:
- Effective whole- school policies and parental involvement.
- Prevention of failure.
- Provision of intensive early intervention.
- Direction of Resources towards pupils in greatest need.
‘The principal aim of learning support is to optimize the teaching process in order to enable pupils with learning difficulties to achieve levels of proficiency in
literacy and numeracy before leaving primary school.’ (Learning Support Guidelines 2000, p. 15)
Subsidiary aims include:
- Enabling these pupils to participate in the full curriculum for their class level.
- Developing positive self-esteem and positive attitudes about school and learning in these pupils.
- Providing supplementary teaching and additional support and resources for these pupils in English and/ or Mathematics.
- Involving parents in supporting their children’s learning through effective parent-support programmes.
- Promoting collaboration among teachers and the implementation of whole- school policies on learning- support for these pupils.
- Providing early intervention programmes and other programmes designed to enhance learning and to prevent/reduce difficulties in learning.
- Enable pupils to monitor their own learning and become independent learners.
- To establish early intervention to
enhance learning and prevent/reduce difficulties in learning.
- To enhance basic skills and learning strategies to a level which enables pupils to participate in the full curriculum.
- To expose children to stimulating learning experiences so
that reading and writing are enjoyed and valued.
- To develop a partnership with parents/carers in order that their knowledge, views and experience can assist us in assessing and providing for their children.
- To take into account the ascertainable
wishes of the children concerned and, whenever possible, directly involve them in decision making in order to provide more effectively for them.
- To inform and include parents of children who are receiving support teaching of the aims and implementation
of the learning support programme.
- To promote cooperation among teachers and the learning support team in the implementation of the learning support policy.
- To ensure that all staff are aware of their responsibilities towards children with
special needs and are able to exercise them.
- That all children regardless of their ability are included and are part of all activities and are part of the school community.
Staff roles and responsibilities:
In attempting to achieve the above aims the B.O.M., principal and staff will take all reasonable steps within the limits of the resources available to fulfil the requirements outlined in this policy document and the ‘Learning Support Guidelines’2000.
Board of Management.
The B.O.M. will fulfil its statutory duties towards pupils with special needs. It will ensure that the provision required is an integral part of the school development plan. Members will be knowledgeable
about the school’s special educational needs provision- funding, equipment and personnel.
Principal Teacher:
“The Principal teacher has overall responsibility for the school’s learning
support programme and for the operation of the services for children with special educational needs.” (Learning Support Guidelines 2000, p. 39)
The principal teacher’s role includes:
- Developing and
implementing learning-support and special needs services.
- Supporting the work of the class teacher.
- Supporting the work of the learning- support teacher.
- Working with parents, out-of-school agencies and the school community.
The duties of coordinating learning-support and special needs services will be fulfilled by the learning-support teacher. These duties include:
- Maintaining a list of pupils
who are receiving supplementary teaching and special educational services.
- Helping to coordinate the caseloads/ work schedules of the learning-support and resource teacher.
- Supporting the implementation of a tracking system at the whole-school level to monitor the progress of pupils with learning difficulties.
- Advising parents on procedures for availing of special needs services.
- Liaising with external agencies such as psychological services to arrange assessments and special
provision for pupils with special needs.
- Arranging for classroom accommodation and resources, as appropriate.
Class Teacher:
‘A key element of successful learning-support intervention is a very high level of consultation between the class teacher and the learning-support teacher.’ (Learning Support Guidelines 2000, p. 43).
The class teacher has primary responsibility for the progress of all the pupils in his/ her
class(es), including those selected for supplementary teaching.
The class teacher has a role in:
- Developing and implementing the school plan on learning support.
- Collaborating with the learning support teacher.
- Liaising with the parents of pupils in receipt of supplementary teaching.
- Identifying and supporting pupils with general or specific learning disability.
- Creating a classroom environment in which learning difficulties can be prevented or alleviated.
- Contributing to the development of the learning targets in the pupil’s Individual Profile and Learning Programme and adjusting the class programme in line with the agreed learning targets.
- Differentiating the class curriculum appropriately to meet the needs of all pupils within the class.
Learning Support Teacher:
The role of the learning support teacher includes:
- Collaborating with the principal teacher.
- Collaborating with the class
teacher on the identification of pupils who may need diagnostic assessment, taking into account the pupils’ scores on an appropriate standardised screening measure.
- Coordinating the selection of pupils for supplementary teaching.
- Conducting assessments and maintaining records.
- Coordinating provision for children with special educational needs.
- Assisting in the implementation of whole-school strategies designed to enhance early learning and to prevent learning difficulties.
- Consulting and collaborating with the parents of each pupil who has been selected for diagnostic assessment to discuss results and learning targets, to devise an IPLP, and to consider how the targets can be supported.
- Consulting and collaborating with parents of each pupil who is in receipt of supplementary teaching on an ongoing basis and at the end of each instructional term to review the pupil’s attainment and to discuss future levels of supplementary teaching.
- Coordinating the implementation of whole-school procedures for the selection of pupils for supplementary teaching.
- To contribute to decision-making regarding the purchase of learning resources, books and materials to be made available
to pupils with learning difficulties in their mainstream classrooms, in the school library and in the learning support room.
- Liase with external agencies including NEP’s and the regional SENO and organising assessments.
- Maintain a list
of pupils who are receiving supplementary teaching and/or special educational services.
- Resource Teacher.
The resource teacher helps to provide an education which meets the needs and
abilities of children assessed as having difficulties. Specifically, the resource teacher works with children who have been designated hours by the department of education, or currently the local SENO (Special Educational Needs Organiser). In addition, the
resource teacher should advise and liaise with other teachers, parents/guardians and other professionals in the children’s interests. More specifically, the Resource Teacher has responsibility for:
- Developing an IPLP for each pupil
in consultation with other partners in education.
- Assessing and recording the child’s needs and progress.
- Setting specific time related targets for each child and agreeing those with the class teacher.
- Direct teaching of the
child, either in a separate room or within the mainstream class.
- Team teaching and co-teaching when the child concerned will derive benefit from it.
- Meeting and advising parents/guardians when necessary accompanied by the class teacher as
- Meeting other relevant professionals in the child’s interest.
- Keeping a record of all such meetings.
Role of Special Needs Assistants:
Special Needs Assistants form part of the learning support team, along with the Learning Support and Resource Teacher. Their role will be to:
- Foster the participation of special needs pupils in the social and academic process of
the school and to enable pupils to become independent learners.
- To work as part of the learning support team and the wider school community to promote an inclusive curriculum and environment for children with special needs
- Be available to
work with other children in the school with special needs, apart from the child they have been appointed to
- Work closely with the class teacher to develop a plan as how best to support an individual child’s needs, for example, physical disability
or attention deficit.
Role of the Pupil:
The development, implementation and review of their own learning.
As a means of preventing the occurrence of learning difficulties, as far as possible, the following strategies are being implemented:
- The development and implementation of agreed whole
school approaches to language development eg. Phonological awareness
- The development and implementation of agreed whole school approaches to the Mathematics programme e.g. Math’s language
- Promotion of parental involvement through attendance
at enrollment of incoming Junior Infants
- Formal and informal parent/ teacher meetings
- School circulars
- Ongoing observation and assessment of pupils by class teacher
‘Research evidence indicates that the implementation of an intensive early intervention programme in the early primary classes is an effective response to meeting the needs of children who experience low achievement
and / or learning difficulties.’ (Learning Support Guidelines 2000- pg.22)
Strategies for early intervention programmes-
- Dividing the school year into instructional terms, each between 13 and 20 weeks
- A shared expectation of success by everyone involved
- Small group teaching , station teaching, team teaching or one-to- one teaching where small group teaching has not been effective
- Intensive in terms of frequency of lessons
and the pace of instruction
- A strong focus on the development of oral language
- An emphasis on the development of phonemic awareness and word identification skills
- Frequent supervised oral and silent reading
- An interconnection
between the nature of listening, speaking, reading and writing
- In Mathematics, a focus on the language development and mathematical procedures and concepts.
- Ongoing teacher observation and assessment
- In infants the ‘Belfield
Infant Assessment Profile’ (BIAP) is administered by the learning support teacher to children who the class teacher feels may be falling behind and then appropriate action will be taken following assessment
- The M.I.S.T. (Middle Infants Screening
Test) is administered by the Learning Support Teacher to all pupils in Senior Infants after Easter. The results are discussed and analysed by the learning support and class teacher and then the children are grouped for the ‘Early Intervention’
programme with the learning support teacher and an appropriate programme developed.
Types of instruction:
The learning support teacher decides the size of groups, taking into
account the individual needs of pupils and the overall caseload. One to one teaching is provided to meet the needs of individual children. Supplementary teaching can take place in the classroom or in the learning support room, according to the individual child’s
needs. In keeping with overall literacy and numeracy yearly schemes and planning, team teaching, station teaching and in-class support will be included as forms of support. Lessons will focus on the development of phonetic awareness, word identification
strategies, oral work, reading skills and planned reading, comprehension skills, writing skills, spelling skills and mathematical procedures and concepts.
Identification and Selecting pupils for Supplementary teaching.
Criteria for selection of pupils:
Children will be selected to receive supplementary teaching based on -
a) Results of screening tests (e.g. Micra-T / Sigma-T) to be
carried out in each school during the last term of the school year.
b) Results of Belfield Profiles (B.I.A.P.) Junior Infants (and occasionally senior infants depending on the child’s D.O.B.) and M.I.S.T.- Senior
c) Class teacher’s judgment and informal assessment of child’s difficulties.
Procedure following selection:
assessment to be carried out by learning support teacher -
Present tests available -Quest / Neale Analysis /Aston Index / Jackson Phonics /Marino Reading Test/ Schonnell Spelling / Single Word Spelling Test/ British Reading Test/ Norman France
Mathematics Test/ Computer based ‘ Maths diagnostic programme, Bangor Dyslexia Test and Belfield infant.
These tests are administered in June or September to children who have been selected for learning support using the criteria
listed earlier. These tests may also be administered to other children during the school year, if for example a class teacher expresses concerns about a child’s performance or to a new pupil entering the school during the year, if his/her class teacher
is of the opinion that he/she may need supplementary teaching.
Recommendations are made for the nature of intervention to be provided to the pupil following the analysis of diagnostic tests and standardised tests administered. This may be in the
form of additional support from the class teacher or learning support teacher in a group or on a one to one basis, depending on the child’s individual needs. In consultation with the class teacher either a classroom support plan or a school support plan
will be drawn up and an individual or group learning programme where necessary.
- Consultation with parent to discuss results and written consent before supplementary teaching commences.
Based on diagnostic assessment, in consultation with the class teacher and parents, a school support plan, a learning programme and individual profile will be compiled for the pupil.
support programme in literacy and numeracy consists of a range of interventions and the teaching of a selection of different strategies to the pupils experiencing difficulties. The aim of the learning support programme is to optimise the teaching and learning
process so as to enable pupils with learning difficulties to achieve adequate levels of proficiency in literacy before leaving primary school. The learning support programme is a team effort in which the learning support teacher and the class teachers cooperate
with each other, with parents and with relevant outside agencies. We feel it is important to attempt to build up confidence, morale and self-esteem in pupils. Pupils who have a history of failure are given an opportunity to enjoy and succeed in their reading
related activities. Due to the differences in pupils’ strengths, needs, targets and learning activities as outlined in the Individual Education Plans, it is not possible to adhere to a strict learning support programme
be given to:
- Children who score at or below the 12th percentile on Standardised tests of achievement in English.
- Senior infants / First class , early intervention programmes for low achievers in English based
on class teacher’s observation and supported by standardised tests and / or diagnostic testing.
- Children who score at or below the 12th percentile on standardised tests of achievement in Maths.
- Pupils from Junior classes
( 1st -3rd) preforming at Sten 4 scores in standardised tests of achievement in English.
- Pupils from the Senior classes preforming below the 20th percentile in standardised tests of achievement in English.
- Pupils from Junior classes ( 1st -3rd) preforming below the 20th percentile in standardised tests of achievement in Maths.
- Pupils from Senior classes preforming below the 20th percentile in standardised
tests of achievement in Maths.
- Pupils who are:
a) pending educational assessment and evaluation by outside agencies e.g. NEPS, Dr. Mc Dyre’s team
b) pending Resource hour allocation
c) diagnosed as having general
or specific learning disabilities or developmental disorders / delays who are not receiving resource assistance.
d) Children who have had a classroom support plan in place and continue to have difficulty.
- When caseload is overloaded, literacy
needs will be prioritized over numeracy in cases a child meets the criteria for support in both.
Criteria for reducing/discontinuing support:
- Pupil progress will be reviewed /evaluated
at the end of a 13/20 week instructional term by informal assessment, re-testing and consultation with class teacher.
- If satisfied with progress a review of level of support now required by the pupil will be assessed and a reduction, a classroom support
plan and or discontinuance of support will follow.
- In the interest of providing an overall effective learning support service within the cluster an effort
will be made to restrict the number of pupils to less than 30 in any given instructional term.
‘The learning support teacher should maintain records of the outcomes of diagnostic assessments,
of the agreed learning programmes and of pupil progress.’ (Learning Support Guidelines 2000 pg.65)
Records will take the form of
- Classroom support plans
- School support plans
- Individual pupil profiles
and learning programmes
- Group profiles and learning programmes
- Short-term planning and progress records
Liaising with parents:
communication with parents is critically important to the success of a learning support programme’ (Learning Support guidelines 2000 Pg. 48)
Parents are the child’s primary educators. Therefore, it is particularly important that there are
close links between the learning environment of home and school. A collaborative approach between parents, teachers and others involved in the child’s education is essential.
Such communication should take the form of –
- Meeting the parents to discuss outcomes of diagnostic assessment
- Ongoing communication to discuss progress and/ or difficulties.
- Consultation at the end of an instructional term to discuss and review pupil’s programmes
- Advising parents on ways they can support their child at home
Links with outside agents:
Name of Service:
Dept. of Education
Parent’s consent/ Collaboration
Parent’s consent/ Collaboration
Dept. of Education
All communication with outside agencies will be recorded and filed.
Referral to Out of School agencies:
Learning Support Teacher coordinates the referral of pupils to outside agencies, e.g. Education Psychologist.
- The Principal and /or Learning Support Teacher and /or the class teacher will consult with the parents to discuss the need for a referral
and to seek consent.
- The class teacher and /or learning support teacher completes the necessary referral forms in consultation with appropriate school personnel.
- The external professional visits the school to meet the pupil, parents, Principal,
class teacher and learning support teacher as appropriate and assessment is conducted.
- Findings of assessment are discussed and the recommendations are considered and an appropriate response is agreed.
Children for NEPS assessment:
At present the school has no assigned psychologist. It has an annual allocation of three psychological assessments per school year based on present enrollment. The school has the services of a psychologist from the
NEPS panel to carry out the assessment. While we haven’t the services of an assigned psychologist the school priorities children to receive assessment by reviewing:
- Children who display symptoms of childhood disorders e.g. Autism, ADHD,
Language disorders (Low incidence disability categories)
- Children who display symptoms of learning difficulties where the level of difficulty experienced by the child could entitle the child to extra support e.g. Dyslexia association, IT support
(High incidence disability categories)
- Children who have been assessed by an outside agency e.g. Speech therapy, Health Board and an assessment is recommended to secure placement or confirm a diagnosis.
In cases where the number of children
needing assessment in the school year exceeds the allocation, the school will discuss the need for assessment with parents and
a) Recommend the option of a private assessment
b) Discuss deferring the
assessment until the next academic year
Should a NEPS psychologist be assigned to the school the process of selection would be made by making the psychologist aware of the needs of all children in the above categories. Advice will be sought
regarding recommendations and further assessment (where deemed appropriate) for children with behavioural and emotional issues.
Information will be communicated between learning support
- Class teacher by- regular informal meetings/ discussion on progress/ sharing records/ test results
- Parents by- regular meetings/ parent’s day/phone calls
The annual Department of Education grants will be used for resources which meet the needs of the pupils availing of Learning Support and with Special Educational Needs within the school. The Testing grant will be used to fund the supply of annual screening
tests and investment in future diagnostic tests.
If parents/carers have a complaint about the special educational provision made, then they should in the first instance make an appointment to speak to
the learning support teacher or resource teacher and then the principal. The complaint will be investigated and dealt with as early as possible. If the matter is not resolved to the parent/carers satisfaction, then the matter proceeds to the board of management.
Monitoring and Review of policy
This policy will be reviewed every three years
Resource Teaching Policy
purpose of this policy is to provide practical guidance for teachers, parents and other relevant persons on the provision of effective teaching support for children experiencing a learning disability or any special needs to fulfill our obligations under the
Education Act 1998, Equal Status Act 2000, Education Welfare Act 2000 and Education for Persons with Disabilities Act 2004.
Definition of Special needs:
We understand Special Needs to be that as defined in the Department
of Education circulars.
Identification and selection of children with Special Needs:
Concern about children may arise in a number of ways –
- Parents inform Principal or class teacher of a concern they have regarding
- Teachers may have a concern regarding a child in their class.
- Concerns may arise following standardised testing.
Procedures to be followed:
Having consulted with the teacher
and parents involved, the Principal will seek appropriate assessment through NEPS with a view to qualifying for support from a Resource teacher.
In a situation where the parents refuse to grant consent for their child to attend for either a psychological
assessment or learning support, a record of the offer and its rejection should be kept in the child’s file.
Where a parent refuses to give consent the Board may apply to the Circuit court for an order that an assessment of the child to be carried
out. (Section 10-5)
The Aims of Resource / Special needs teaching:
The aims of the Resource /Special Needs Teacher are:
- To support as far as possible the integration of the child with special needs into the mainstream
- To develop positive self- esteem and positive attitudes about school and learning for the child
- To promote collaboration among staff in the implementation of the whole school policies on special needs
Role of the Resource Teacher:
The role of the Resource teacher is to provide support for the children with special needs by-
- Developing an individual learning programmes for each pupil in consultation with other partners
- Assessing and recording the child’s needs and progress
- Setting specific time –related targets for each child and agreeing these with the class teacher and principal
- Direct teaching of the child, either
in a separate room or within the mainstream class in the form of co-teaching
- Advising class teachers with regard to adapting the curriculum, teaching strategies, text books. I.C.T. and other related matters
- Meeting and advising parents
when necessary, accompanied by the class teacher as necessary
- Meeting other relevant professionals in the child’s interest e.g. psychologists, speech and language therapists and visiting teachers.
- The provision of special needs teachers is in addition to regular teaching
- Time allocated per child will depend on the demands on the service and the hours authorized by the SENO
- Every effort will be made to ensure that pupils do
not miss out on the same curricular area each time they attend except where a pupil has been exempted from a subject by the Department of Education
- Likewise the school will endeavor to ensure that pupils do not miss classes they particularly enjoy
such as Art, P.E. or Computers
Role of Class teacher, Parents, Principal and the Board of Management
The role of the above in the education of children who have been allocated resource teaching is as in the preceding section, i.e. the section on learning support.
Ratified by the Board of Management on 31/05/2016
Chairperson, Board of Management | <urn:uuid:2fc4cc96-38b2-4f85-99aa-0da2a4194f1c> | CC-MAIN-2021-21 | http://www.commonsns.com/427839202 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00536.warc.gz | en | 0.929251 | 5,737 | 2.6875 | 3 |
The Dental Implant Guide was created by Zachary Teach to help patients understand how a dental implant can improve their smile by replacing a missing tooth or repairing one that is damaged.
Dental implants are devices that can fill in the gap where a tooth used to be, and they are recommended for individuals who do not wish to receive dental bridges or partial dentures. In a small surgical operation, an implant is screwed into the jawbone, and the titanium base of the implant gradually fuses with the bone, similar to how the roots of teeth naturally do so. Dental implants may require more invasive treatment in order to be implemented, but they are the most durable substitute for real teeth. The solid bond that the titanium base of the implant forms with the jawbone is all but permanent, making dental implants ideal for individuals seeking a lifelong solution to edentulism (the loss of teeth).
A dental implant comprises a ceramic crown, a connective piece called an abutment, and a titanium screw. The ceramic crown resembles the upper portion of a real tooth and can be designed to blend in seamlessly with an individual’s neighboring teeth. The screw stands in for the root of the tooth, and because titanium is biocompatible, this base easily forms a stable and strong bond with the jawbone in the months after implantation.
We Install the Dental Implants Right in Our Office
In order to install a dental implant, a dentist must first examine the relevant area of the mouth to ascertain whether an implant would likely be viable. Through an X-ray or CT scan, the dentist can check for the condition of the gums, the thickness of the jawbone, and the location of the sinuses; each of these elements plays a role in whether a dental implant can last in its intended position. Recent advances in technology can now transform the results of these exams into a three-dimensional model for near-immediate viewing on a monitor, allowing a dentist and a patient alike to have access to a comprehensive view of the patient’s teeth, gums, and bony tissue. The dentist can more easily visualize the space in which the implant will be placed, inspecting the area for any limiting factors.
During the actual operation, your dentist will anesthetize the area and then screw the dental implant through the gum into the jawbone. Three to six months after the operation, your dentist will use an impression of your teeth in order to determine the optimal shape of the crown. Once designed, the ceramic crown will be placed atop the implant’s abutment and held in place using cement. As the titanium screw will have bonded with the jawbone by this point, the implant will look and feel like a natural tooth.
As an alternative to the ceramic crown, when multiple teeth are missing, implant-based dentures are a more functional and realistic option. This type of denture is fastened onto the dental implants that are screwed into the jawbone. The titanium base provides a far more stable base for the dentures than simple adhesives, and the use of implants in dentures preserves the integrity of the jawbone while facilitating chewing and speaking.
Frequently Asked Questions | Dental Implant Guide
What is a Dental Implant?
Dental implants are heavily marketed in today’s dental world, and a lot of patients don’t necessarily understand what a dental implant is. An actual dental implant is a prosthetic device that’s artificially made to replace the root of a natural tooth. A lot of times a patient will say, “Oh, I want an implant,” and they think it’s a tooth. Really, the implant is the part that replaces the root of the tooth.
After a certain amount of time following the surgical phase, so the body can heal around that implant, there will be another step taken where we begin the restorative phase to place a crown. The crown will incorporate some sort of internal screw inside that implant, where either a post or some sort of fixation device will then be used to support the crown that we place.
What Are Dental Implants Made Of?
The majority of dental implants nowadays are usually made of some sort of titanium. There are other options where they can be made from other substances, depending on the case at hand, but the majority of implants used at Teach Dental Group will be made of titanium, which is a very biocompatible substance. What biocompatible means is the body has a very slim or very low-risk chance of having any sort of anaphylactic or allergic response to the material.
Are Dental Implants Painful?
I had a patient come in the other day inquiring about a dental implant, and she's had some prior dental history issues with extractions, infection and post-operative pain that have made her shy away from any kind of surgical procedure. I told her, with confidence, that unfortunately I wasn't there to handle her previous dental situation, and I felt sorry that she had issues with pain during prior procedures. I then talked to her about how simple and how non-invasive placing an actual implant in a patient can really be.
Again, a lot of times if you’re interested in a dental implant or want to come to the office, we can discuss all the risks, the benefits, any questions that you might have, just so you have a solid foundation and idea of what actually is going to happen. A lot of medications that we use to deal with any kind of post-operative pain can be easily managed with over-the-counter medications that are used to take care of normal headaches. If a certain situation might require a stronger medication, those issues will be addressed for those specific circumstances.
Are Dental Implants Safe?
I can reiterate, with 100% confidence, that it is a very safe procedure, much like many normal dental procedures that people come into the office and see me for. Whether it's fillings, crowns, bridges or even surgical extractions, implants fall into the same category of normal procedures that take place. They're not done in a hospital setting; they're done right here in the office, and that is one more way to reassure the patient that this is a safe procedure. All the risks, benefits and questions that patients have about the safety of implants will be thoroughly discussed prior to beginning any treatment.
Are Implants FDA Approved?
All implants that are going to be used at this practice are FDA approved and have gone through rigorous scientific controlled study to be used in the human body. A lot of times, the only issue that can really happen with certain implants is failure, but these are things that can be discussed during treatment planning, to know if you're a good candidate to receive a dental implant.
How Do I Fill in Missing Teeth?
A lot of times, during implant treatment planning, we can figure out some sort of temporary situation if the patient requires a tooth in the space that they want to replace with an implant. But for the majority of the time, for posterior or back teeth, we will usually leave that space open to heal and we won't necessarily replace it with a tooth.
A lot of times, patients who are going to go through with dental implants are very interested in replacing their front teeth, in their smile zone. It's very aesthetic, and there are multiple options in terms of what we can do to temporarily place teeth there, whether it be some sort of temporary bridge fixated to other teeth or some sort of removable prosthetic appliance to get the result.
Am I a Candidate for Dental Implants?
One of the main questions that is always asked of me is, “Am I a good candidate to receive dental implants?” Two of the main things I like to ask my patients are if they are in good general health and if they maintain good oral health. If so, I’d say about 100% of those patients are adequate candidates to receive dental implants.
One of the main risk factors, in terms of possibly not receiving a dental implant, is if someone has excessive habits, whether it’s drinking and smoking or taking certain types of drugs. That might inhibit the stabilization of a dental implant. Again, these will all be addressed in the first appointment. When a patient comes in to see me, we’ll do a very in-depth review of their medical history so that we can figure out if they’ll be a good candidate to receive a dental implant.
What is the Difference Between Conventional and Mini Dental Implants?
A conventional dental implant usually has a diameter greater than three millimeters, and a mini dental implant has a diameter less than three millimeters. A lot of times, to replace natural teeth, we like to use a conventional dental implant because of its size. The part of the implant that's going to go into the jaw is closer to the size of the natural root that was lost, and it is a more robust prosthetic attachment that we can then fixate a tooth to, to get you back to normal function.
Certain circumstances in dentistry may require a mini dental implant, for instance a missing lower incisor, where only a very small amount of space may be available, so only a mini dental implant can be fixated in that area. But for the majority of the time, in my practice, a mini dental implant will be used either to fixate a denture or to be placed in the jaw as a source of anchorage to help an orthodontist with tooth movement or future orthodontic treatment for a younger patient.
What Are Mini Dental Implants?
One of the hot topics in dentistry nowadays is mini dental implants. To clarify, what makes a mini dental implant has to do with the diameter of the actual implant being placed. Anything above the standard three-millimeter diameter will be a conventional dental implant; anything below a diameter of three millimeters will be deemed a mini dental implant.
How Strong Are Dental Implants?
Dental implants, specifically conventional dental implants, are very strong. If maintained and taken care of properly with six-month visits to the dentist to make sure everything’s going okay, they can be a very predictable way to replace a tooth that will function just as long and just as well as that person’s natural tooth.
How Noticeable are Dental Implants?
If the implant is done correctly, and is planned correctly, the implant and the restoring crown on top of that implant can be made to look just as natural as the tooth that was lost.
How Are Dental Implants Placed?
During the phase of dental implant treatment, the most ideal situation when an implant is placed is to get that implant stabilized by the surrounding bone. During this process, a small surgical incision will be made into the gum tissue to expose the bone, and a hole will be drilled into the patient's jaw. Sequentially, we will widen that hole and eventually place a screw-thread pattern into the jawbone, into which we will then fixate our implant.
The best-case scenario during an implant situation is to get stability of that implant in with the bone. During the healing phase, after the implant is placed, we will suture the tissue closed to allow the maturity of that implant, so your body can grow its natural bone around it and make it nice, secure and stable.
How Long Does an Implant Take to Heal?
For most implant cases, especially the simple cases, it's anywhere between eight weeks and six months. The specifics for any case can be discussed at the beginning of treatment before anything takes place.
What Should I Expect for Recovery?
People heal at different phases, people will respond differently to different treatments, but for the majority of time, most of my patients, when they come to me for implants or the surgical phase of the implant treatment, they’ll say, “How’s this going to feel afterwards?” or “What can I expect?” I would say probably 80 to 90% of the time, patients will go home the next day and call me back and say, “What did you do? Did you do anything? Because I’m in no pain whatsoever.” Other times, there can be some complications, which we will then address, but for the majority of the time, everything can be easily maintained with over-the-counter non-prescription drugs that you would normally take to deal with a headache or mild fever.
Is There Any Special Care?
A lot of times, when an implant's placed and an actual tooth has been fixated to it, the normal care required will be the same as for your natural teeth. Implants can be exposed to the same harms as natural teeth, whether it's bone loss or any kind of periodontal disease or infection. If a patient maintains adequate oral hygiene, brushing and flossing normally day-to-day and coming to their dentist or hygienist for annual maintenance checkups and cleanings, implants will function just as well as normal teeth and require the same maintenance as normal teeth.
What Can I Eat After Surgery?
After any surgical procedure a patient will ask me, "What can I do after this?" Every time I sit down with a patient, I take some time to answer any questions they might have about what they need to do when they go home. I also have a detailed post-operative sheet that I hand them when they leave the office, just in case. I know that when I talk with people I sometimes forget things and need to be reminded, so I have that conversation with the patient to answer any specific questions, but I also give them a personal handout to take home to remind them of what to do after surgery. Most of the time after implant surgery, you can go about your normal routine and use common sense; nothing specific needs to be done.
What is the Dental Implant Success Rate?
If patients are in good general health and come in for routine dental cleanings, implants will hold about a 95% success rate when placed. That assumes there aren't any underlying issues with the mouth, whether it's periodontal disease, broken-down teeth, or issues with grinding. I like to be able to tell patients, with confidence, that if they take care of their body and their mouth and they get an implant, it's 95% successful.
Does It Matter How Old I Am?
If I had to guess one of the barriers to trying to convince a patient that a dental implant will be the best way to restore a tooth or replace a missing tooth it would be age. Many older patients will tell me, “I’m not going to be around for that much longer,” or “Why am I going to invest this kind of money into saving my mouth?” After sitting down and talking with them about all the benefits about keeping their teeth or replacing missing teeth, I try to reiterate to them that there’s really no age limit to receiving a dental implant. If a patient is in general good health and is taking care of their mouth, I recommend treatment to any age.
Does the Body Reject Dental Implants?
There’s a lot of factors for certain people, or issues with implants, for people that I feel should not have them or should not be placed. The major risk factor for implants working or being successful is the fact if a patient smoke or not. A lot of times, I will still place implants on patients that smoke, but I try to do some smoking cessation, try and get these patients to want to quit smoking because it is the number one risk factor for implant failure. There’re other things that can complicate success of implants, and they can be discussed at future appointments with your dentist.
Can I Get an Implant If I Smoke?
Sometimes a patient comes to me who has been a smoker for a long period of time and has no intention of quitting. I've done my best to motivate them to quit, prescribed certain drugs to help, and given them many different avenues to try, but it didn't work. Now the patient has lost a tooth, wants to replace it, and is interested in a dental implant. They'll ask me if they can get one and I have to say, "Well, what are we going to do about your smoking? Are you going to try and quit?" When they ask if treatment is dependent upon quitting smoking, I tell them yes. What I try to do is get the patient on board to at least reduce their smoking habit, and then work out a smoking cessation plan that might eventually get them to quit, so I can get them motivated about their treatment and about receiving a dental implant. Hopefully I can then kill two birds with one stone – get them to quit smoking, and get them a dental implant to replace their missing tooth.
How Long Do Dental Implants Last?
Many patients want to know how they can replace their teeth and make their smile good again. The main concern most have is about the cost and outcome of the procedure. They want to know they are getting their money’s worth.
I like to talk to patients from a basic, overall, general oral health standpoint. We need to assess your mouth, first and foremost. We need to know if there are any underlying issues – whether it has to do with gum disease, periodontal disease, other issues such as grinding – any things that might have created previous destruction to your mouth. Once we can handle those situations and get them under control, then we can discuss moving forward with dental implants or dental implant treatment.
A lot of times, once the mouth is taken care of and put back into a healthy state, and an implant is placed and restored, those implants will last just as long and just as well as a natural tooth. Again, I always stress to everyone that the implant must be looked after. Just because you come in and get a dental implant doesn't mean you can neglect it. Whatever is done, normal hygiene and routine maintenance care will be very important. I think implants will last several years, if not a lifetime.
What is the Treatment for a Failed Dental Implant?
Not everything always works perfectly in life, and sometimes even with the best intentions, things can fail. There are options if a dental implant does fail. If a patient comes in for a dental implant, we go through the surgical phase, everything goes okay, but for some reason the implant fails, I like to readdress the situation. I try to ascertain why the implant failed – whether it was something that happened during surgery or something the patient might have neglected – and then we go back and try again. That's my number one go-to: just try again and see if we can get it to take a second time. If it fails a second time, then other options can be discussed to attain the same result with a treatment other than implant dentistry.
What is Digitally Guided Implant Surgery?
The great part about dentistry today is technology, which is allowing so many things to move leaps and bounds beyond what was possible even 10 years ago. One of the best things about dental implants is that technology is allowing them to be placed faster, more efficiently and more safely for the patient. Patients will often ask me, "What is digitally guided implant surgery?" First we need to gather some information, including x-rays of the space we want to restore with the dental implant and of the surrounding bone. Most of the time, I will have the patient get a 3-D scan – a CBCT, or cone beam computed tomography, scan – so we can ascertain all the dimensions of the bone where we want that implant placed.
What we’ll do then, in conjunction with the lab, is digitally plan a tooth or a restoration to replace that missing space and idealize the placement of that implant into the jawbone based on where that tooth is going to be for that patient’s future prosthetic needs. The best part about doing something digitally guided is it’s going to be much safer for the patient so we can avoid anatomical structures or situations in the patient’s body that we want to avoid, place things with much more accuracy and confidence, and do it faster for the patient, which makes the entire dental implant experience that much more enjoyable for the patient.
Do Implants Make Dentures Comfortable?
A lot of times, dental implants can be used not just to replace missing teeth but many times, patients will come to me with dentures, whether they’re loose or ill-fitting, and they’ll ask me, “Can I receive dental implants to make my dentures fit better?” The standard of care for any sort of lower denture would be to receive two dental implants to help retain that denture. There are multiple different prosthetic devices that can be discussed to get you familiarized with what type of dentures or fitments can be made to your denture with dental implants. Without a doubt, implants can be used to make your dentures fit better, feel better and get a lot better sense of stability when going out into public, wanting to eat, wanting to socialize with people and just function normally through life.
How Many Implants Do I Need to Stabilize Dentures?
For the upper jaw, you're going to need a minimum of four to six implants to stabilize a denture. What a lot of people don't know is that once a denture is stabilized by implants, we can remove the palatal aspect of the denture and make it a much more comfortable prosthetic for the patient.
It all depends on many contributing factors from each individual patient, but minimum, two implants to retain a lower denture. At best, probably about four implants for the lower jaw to stabilize a denture, depending on the needs of the patient and the requirements based on muscular jaw function, and a lot of other things that will be discussed prior to moving forward with any treatment, at the beginning, during the consultation phase.
How Do I Get My Dentures to Fit Better?
Sometimes we might decide to fixate the denture with screws that cannot be removed by the patient; they'll come in every two to three months to have the denture unscrewed, removed, cleaned thoroughly and then replaced by the dentist with new screw attachments. A lot of times patients are very happy with how well retained a denture can be, even if it's an in-and-out attachment with a nylon O-ring or an overdenture.
This infographic outlines the components of a popular FCEV design, explaining in brief what part each bit of tech plays in the drivetrain, from hydrogen storage, through power electronics, to re-combining oxygen from the air with hydrogen to give us electricity and water. The only thing stopping FCEVs from being fully exploited in the quest for zero-emission mobility is the lack of, and confused state of, hydrogen refuelling infrastructure.
FCEV stands for Fuel Cell Electric Vehicle. FCEVs are a type of vehicle that use compressed hydrogen gas as fuel to generate electric power via a highly efficient energy converter, a fuel cell. The fuel cell transforms the hydrogen directly into electricity to power an electric motor.
The FCEVs currently being prepared for commercialisation have a driving range comparable to petrol and diesel vehicles, typically between 385 and 700 km (240 and 435 miles) on a full tank.
Industrial gas companies have developed hydrogen fuel dispensing systems that are safe and simple to use. International standards ensure compatibility between refuelling stations and vehicles, and the refuelling process takes around 3 to 5 minutes, offering refuelling times similar to those of conventional vehicles.
Fuel cells in vehicles generate electricity generally using oxygen from the air and compressed hydrogen. Most fuel cell vehicles are classified as zero-emissions vehicles that emit only water and heat. As compared with internal combustion vehicles, hydrogen vehicles centralize pollutants at the site of the hydrogen production, where hydrogen is typically derived from reformed natural gas. Transporting and storing hydrogen may also create pollutants.
THE NEXT GREEN CAR 21 APRIL 2021
Driving a fuel cell car is a breeze. As with any all-electric or automatic transmission vehicle, there are no gears to worry about, the main decision being the level or strength of regenerative braking required. Since FCEVs are electric vehicles, power is available instantly as all mass production models include a battery as part of the power-train. This is to act as a buffer and provide instant power from the electric motor, instead of having to wait for the fuel cell to kick in and provide an electric charge on demand.
However, ownership is not all plain sailing. The current hydrogen refuelling infrastructure (or lack of it) is the biggest constraint with less than half a dozen publicly available refilling stations currently in the UK. The technology is more restrictive than buying a battery electric car since at least most people have access to a home-based socket with which to charge their EV. Given that very few households have a hydrogen refuelling unit in their garage, living within a convenient distance of a public hydrogen refuelling station is essential.
Apart from the lack of locations, refuelling a FCEV is almost as simple and quick as using a petrol pump, and anyone who has used an LPG vehicle will find plenty of similarities. The driver fixes the refuelling station's nozzle to the car and locks it in place creating a sealed system. The pump will then check the seal can withstand the pressure by pre-conditioning it, before proceeding to dispense the hydrogen at the industry standard 70 MPa (10,000 psi) if all is ok.
Using a state-of-the-art 70 MPa refuelling dispenser, a few minutes' refuel will provide most FCEVs with around 300 miles of range – as opposed to the half-hour rapid charge for a battery electric car offering around 100 miles of driving. Look out for older 35 MPa units however, as these will only provide a half-fill, limiting driving range. Once completed, the user simply unlocks the nozzle and replaces it at the pump before driving off on hydrogen fuel cell electric power with zero harmful emissions.
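The refuelling comparison above reduces to a simple rate calculation. Here is a minimal sketch using the article's round numbers; the exact fill and charge times (4 and 30 minutes) are illustrative assumptions:

```python
# Rough comparison of driving range gained per minute at the pump/charger,
# using the article's approximate figures (illustrative only).

def range_rate(miles: float, minutes: float) -> float:
    """Miles of driving range gained per minute of refuelling/charging."""
    return miles / minutes

fcev_rate = range_rate(miles=300, minutes=4)   # assumed ~4-minute 70 MPa hydrogen fill
bev_rate = range_rate(miles=100, minutes=30)   # half-hour rapid charge (article)

print(f"FCEV: {fcev_rate:.0f} miles of range per minute")
print(f"BEV rapid charge: {bev_rate:.1f} miles of range per minute")
```

On these assumptions the hydrogen fill restores range roughly twenty times faster, which is the practical appeal of FCEV refuelling described above.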
For those who are tempted by fuel cell technology, only a few models are commercially available. Unfortunately, these remain expensive compared to similarly sized petrol, diesel or even battery electric rivals. Toyota's Mirai, for example, costs around £60,000 with Toyota expected to lose money on every one sold. This price includes the £4,500 Category 1 UK Plug-in Car Grant (PiCG) which includes the Mirai as an eligible vehicle.
In the UK, it isn't yet possible to buy a FCEV outright; Hyundai and Toyota only offer cars on lease. While this is mainly due to the limited refuelling infrastructure, it also protects owners from any technical and durability issues associated with a new technology.
Hydrogen is sold in kilograms rather than by volume (litres or gallons), and current prices are around £10 to £15 per kg. As the Mirai's tank holds approximately 5 kg, a full hydrogen refill would cost between £50 and £75, meaning that hydrogen FCEVs are more expensive per distance travelled than both internal combustion vehicles and BEVs. With increased hydrogen use, though, costs are likely to come down in the future. Manufacturers are sidestepping this problem by incorporating fuel costs into the cost of the lease: you won't need to pay anything 'at the pump' and the entire motoring cost is paid in one lump sum each month.
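The per-mile fuel cost claim can be checked with the figures quoted above. The hydrogen price, tank size and approximate 300-mile range come from the article; the BEV electricity price and efficiency are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope fuel cost per mile for an FCEV versus a BEV.

H2_PRICE_LOW, H2_PRICE_HIGH = 10.0, 15.0   # GBP per kg (article)
TANK_KG = 5.0                               # Mirai tank capacity (article)
RANGE_MILES = 300.0                         # approx. range per fill (article)

fcev_cost_low = H2_PRICE_LOW * TANK_KG / RANGE_MILES
fcev_cost_high = H2_PRICE_HIGH * TANK_KG / RANGE_MILES

# Assumed BEV figures: ~4 miles per kWh, domestic electricity ~0.14 GBP/kWh.
BEV_MILES_PER_KWH = 4.0
ELEC_PRICE_GBP_PER_KWH = 0.14

bev_cost = ELEC_PRICE_GBP_PER_KWH / BEV_MILES_PER_KWH

print(f"FCEV: {fcev_cost_low*100:.0f}-{fcev_cost_high*100:.0f} pence per mile")
print(f"BEV (assumed): {bev_cost*100:.1f} pence per mile")
```

On these assumptions the FCEV costs roughly 17-25 pence per mile against a few pence for a home-charged BEV, which is why bundling fuel into the lease matters.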
The rest of the car's running costs again bear a close resemblance to BEVs: servicing costs are significantly less than for an internal combustion car because of the reduced number of moving parts, while consumables such as brake pads wear less because of brake energy recuperation. FCEVs are also exempt from the London Congestion Charging Zone and, with no CO2 emissions, are exempt from paying Vehicle Excise Duty (VED or road tax).

By Chris Lilly
FUEL CELLS RANGE EXTENDED
Symbio designs hydrogen fuel cell kits that can be incorporated into various types of electric vehicles (utility vehicles, vans, buses, heavy-goods vehicles, boats, etc.) and are associated with a range of digital services (vehicle repairs, remote fleet management, etc.). Once equipped in this way, these vehicles provide enhanced ease of use (full in three minutes, autonomy twice that of their battery equivalents, etc.) while remaining “zero emissions”. There are several hundred of these vehicles – for the most part, light utility vehicles such as the Kangoo ZE H2 – on the roads in France and across Europe. Founded in 2010, Symbio is part-owned by the CEA, ENGIE and Michelin.
IN THE US
FCEVs and the hydrogen infrastructure to fuel them are in the early stages of implementation. The U.S. Department of Energy leads research efforts to make hydrogen-powered vehicles an affordable, environmentally friendly, and safe transportation option.
Hydrogen is considered an alternative fuel under the Energy Policy Act of 1992 and qualifies for alternative fuel vehicle tax credits.
More hydrogen stations are planned, particularly in California, where, as of 2019, according to Hydrogen View, there had been over 7,500 FCEVs sold or leased. Critics doubt whether hydrogen will be efficient or cost-effective for automobiles, as compared with other zero emission technologies, and in 2019, USA Today stated "what is tough to dispute is that the hydrogen fuel cell dream is all but dead for the passenger vehicle market."
As of July 2020, there were 43 publicly accessible hydrogen refueling stations in the US, 41 of which were located in California. In 2013, Governor Jerry Brown signed AB 8, a bill to fund $20 million a year for 10 years to build up to 100 stations. In 2014, the California Energy Commission funded $46.6 million to build 28 stations.
Japan got its first commercial hydrogen fueling station in 2014. By March 2016, Japan had 80 hydrogen fueling stations, and the Japanese government aims to double this number to 160 by 2020. In May 2017, there were 91 hydrogen fueling stations in Japan. Germany had 18 public hydrogen fueling stations in July 2015. The German government hoped to increase this number to 50 by end of 2016, but only 30 were open in June 2017.
FUEL CELL ELECTRIC CARS
The Daimler GLC F-CELL is set to combine innovative fuel-cell and battery technology in the form of a plug-in hybrid: in addition to hydrogen it will also run on electricity. With 4.4 kg of hydrogen on board, the preproduction model produces enough energy for a range of up to 437* km in the NEDC. F-CELL drivers will also benefit from a range of up to 49 km in the NEDC thanks to the large lithium-ion battery and its output of 147 kW.
The Toyota Mirai (which means ‘future’ in Japanese) signals the start of a new age of vehicles. Using hydrogen – an important future energy carrier – as fuel to generate electricity, the Mirai achieves superior environmental performance with the convenience and driving pleasure expected of any car. The Mirai is fitted with two 700 bar hydrogen tanks enough to provide a driving range of 500 km. It is the first mass produced sedan fuel cell vehicle with excellent performance of 113 kW and a low centre of gravity.
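From the Mirai figures above (5 kg of hydrogen, roughly 500 km of range), a couple of derived quantities follow directly. The ~33.3 kWh/kg energy content of hydrogen is a standard lower-heating-value figure assumed here, not one given in the article:

```python
# Derived figures for the Toyota Mirai, from the specs quoted above.

TANK_KG = 5.0            # hydrogen capacity (article)
RANGE_KM = 500.0         # quoted driving range (article)
H2_KWH_PER_KG = 33.3     # lower heating value of hydrogen (assumed)

consumption = TANK_KG / RANGE_KM * 100       # kg of H2 per 100 km
onboard_kwh = TANK_KG * H2_KWH_PER_KG        # chemical energy carried

print(f"Consumption: {consumption:.1f} kg H2 per 100 km")
print(f"Stored chemical energy: {onboard_kwh:.1f} kWh")
```

This works out at about 1 kg of hydrogen per 100 km, a figure worth keeping in mind when reading the fuel-price discussion elsewhere in this article.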
The Honda Clarity Fuel Cell houses an advanced Honda-built fuel cell stack in the engine compartment. As a result, the Clarity Fuel Cell sedan is capable of seating five occupants. Its powertrain delivers 130 kW and 300 Nm maximum torque. The Clarity Fuel Cell offers a generous range of 650 km (NEDC) with hydrogen stored at 700 bar. The FCX Clarity concept car was introduced in 2008 for leasing by customers in Japan and Southern California and discontinued by 2015. From 2008 to 2014, Honda leased a total of 45 FCX units in the US. Over 20 other FCEV prototypes and demonstration cars were released in that time period, including the GM HydroGen4 and the Mercedes-Benz F-Cell.
Retail deliveries of the 2017 Honda Clarity Fuel Cell began in California in December 2016. The 2017 Clarity has the highest combined and city fuel economy ratings among all hydrogen fuel cell cars rated by the EPA, with a combined city/highway rating of 67 miles per gallon gasoline equivalent (MPGe), and 68 MPGe in city driving. In 2019, Katsushi Inoue, the president of Honda Europe, stated, "Our focus is on hybrid and electric vehicles now. Maybe hydrogen fuel cell cars will come, but that’s a technology for the next era."
The B-Class F-Cell (Daimler) vehicles are fitted with a 700-bar hydrogen tank in the sandwich floor unit. Its electric motor develops an output of 100 kW, with a torque of 290 Nm, and thus has the power rating of a two-litre gasoline engine. The zero-emission drive system consumes the equivalent of 3.3 litres of diesel per 100 kilometres.
The Hyundai ix35 FCEV Fuel Cell vehicle has been available for lease since 2014, when 54 units were leased.
Sales of the Toyota Mirai to government and corporate customers began in Japan in December 2014. Pricing started at ¥6,700,000 (~US$57,400) before taxes and a government incentive of ¥2,000,000 (~US$19,600). Former European Parliament President Pat Cox estimated that Toyota initially would lose about $100,000 on each Mirai sold. As of December 2017, global sales totaled 5,300 Mirais. The top selling markets were the U.S. with 2,900 units, Japan with 2,100 and Europe with 200.
By 2017, Daimler had phased out its FCEV development, citing declining battery costs and the increasing range of EVs, and most of the automobile companies developing hydrogen cars had switched their focus to battery electric vehicles.
There are also demonstration models of buses, and in 2011 there were over 100 fuel cell buses deployed around the world. Most of these buses were produced by UTC Power, Toyota, Ballard, Hydrogenics, and Proton Motor. UTC buses had accumulated over 970,000 km (600,000 mi) of driving. Fuel cell buses have a 30-141% higher fuel economy than diesel buses and natural gas buses. Fuel cell buses have been deployed in cities around the world, although a Whistler, British Columbia project was discontinued in 2015. The Fuel Cell Bus Club is a global cooperative effort to trial fuel cell buses. Notable projects include:
- 12 Fuel cell buses were deployed in the Oakland and San Francisco Bay area of California.
- Daimler AG, with thirty-six experimental buses powered by Ballard Power Systems fuel cells, completed a successful three-year trial, in eleven cities, in 2007.
- A fleet of Thor buses with UTC Power fuel cells was deployed in California, operated by SunLine Transit Agency.
- The first hydrogen fuel cell bus prototype in Brazil was deployed in São Paulo. The bus was manufactured in Caxias do Sul, and the hydrogen fuel was to be produced in São Bernardo do Campo from water through electrolysis. The program, called "Ônibus Brasileiro a Hidrogênio" (Brazilian Hydrogen Autobus), included three buses.
In 2020, Hyundai started to manufacture hydrogen powered 34-ton cargo trucks under the model name XCIENT, making an initial shipment of 10 of the vehicles to Switzerland. They are able to travel 400 kilometres (250 mi) on a full tank and they take 8 to 20 minutes to fill up.
In 2020, Daimler announced the Mercedes-Benz GenH2 liquid hydrogen concept expected to be produced beginning in 2023.
The environmental impact of fuel cell vehicles depends on the primary energy with which the hydrogen was produced. Fuel cell vehicles are only environmentally benign when the hydrogen was produced with renewable energy. If this is the case fuel cell cars are cleaner and more efficient than fossil fuel cars. They are not as efficient as battery electric vehicles which consume much less energy in the conversion chain. Usually a fuel cell car consumes 2.4 times more energy than a battery electric car, because electrolysis and storage of hydrogen is much less efficient than using electricity to directly load a battery.
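The 2.4x figure above comes from multiplying the stage efficiencies of each conversion chain. The individual stage values below are common ballpark assumptions, not numbers from this article, and they land in the same region (roughly 2.5-3x):

```python
# Illustrative conversion-chain efficiencies showing why a fuel cell car
# needs roughly 2-3x the renewable electricity of a battery EV per mile.
from functools import reduce
from operator import mul

def chain_efficiency(stages):
    """Overall efficiency of a series of energy-conversion stages."""
    return reduce(mul, stages, 1.0)

# Renewable electricity -> hydrogen -> wheels (assumed stage values)
fcev = chain_efficiency([
    0.70,  # electrolysis
    0.90,  # compression, storage and transport
    0.50,  # PEM fuel cell
    0.95,  # electric motor and drivetrain
])

# Renewable electricity -> battery -> wheels (assumed stage values)
bev = chain_efficiency([
    0.90,  # transmission and charging
    0.95,  # battery round trip
    0.95,  # electric motor and drivetrain
])

print(f"FCEV chain: {fcev:.0%} of the original electricity reaches the wheels")
print(f"BEV chain:  {bev:.0%}")
print(f"Energy multiplier for the FCEV: {bev / fcev:.1f}x")
```

Only around 30% of the original electricity survives the hydrogen detour under these assumptions, against roughly 80% for direct charging, which is the arithmetic behind the article's claim.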
As of 2009, motor vehicles used most of the petroleum consumed in the U.S. and produced over 60% of the carbon monoxide emissions and about 20% of greenhouse gas emissions in the United States. However, the production of hydrogen for hydrocracking in gasoline refining – chief amongst its industrial uses – was responsible for approximately 10% of fleet-wide greenhouse gas emissions. In contrast, a vehicle fueled with pure hydrogen emits few pollutants, producing mainly water and heat, although the production of the hydrogen would create pollutants unless the hydrogen used in the fuel cell were produced using only renewable energy.
In a 2005 Well-to-Wheels analysis, the DOE estimated that fuel cell electric vehicles using hydrogen produced from natural gas would result in emissions of approximately 55% of the CO2 per mile of internal combustion engine vehicles and approximately 25% less emissions than hybrid vehicles. In 2006, Ulf Bossel stated that the large amount of energy required to isolate hydrogen from natural compounds (water, natural gas, biomass), package the light gas by compression or liquefaction, transfer the energy carrier to the user, plus the energy lost when it is converted to useful electricity with fuel cells, leaves around 25% for practical use. Richard Gilbert, co-author of Transport Revolutions: Moving People and Freight without Oil (2010), comments similarly that producing hydrogen gas ends up using some of the energy it creates, and further energy is taken up by converting the hydrogen back into electricity within fuel cells: "This means that only a quarter of the initially available energy reaches the electric motor." Such losses in conversion don't stack up well against, for instance, recharging an electric vehicle (EV) like the Nissan Leaf or Chevy Volt from a wall socket.
A 2010 Well-to-wheels analysis of hydrogen fuel cell vehicles report from Argonne National Laboratory states that renewable H2 pathways offer much larger greenhouse gas benefits. This result has recently been confirmed. In 2010, a US DOE Well-to-Wheels publication assumed that the efficiency of the single step of compressing hydrogen to 6,250 psi (43.1 MPa) at the refueling station is 94%. A 2016 study in the November issue of the journal Energy by scientists at Stanford University and the Technical University of Munich concluded that, even assuming local hydrogen production, "investing in all-electric battery vehicles is a more economical choice for reducing carbon dioxide emissions, primarily due to their lower cost and significantly higher energy efficiency."
In 2008, professor Jeremy P. Meyers, in the Electrochemical Society journal Interface wrote, "While fuel cells are efficient relative to combustion engines, they are not as efficient as batteries, due primarily to the inefficiency of the oxygen reduction reaction. ... They make the most sense for operation disconnected from the grid, or when fuel can be provided continuously. For applications that require frequent and relatively rapid start-ups ... where zero emissions are a requirement, as in enclosed spaces such as warehouses, and where hydrogen is considered an acceptable reactant, a [PEM fuel cell] is becoming an increasingly attractive choice [if exchanging batteries is inconvenient]". The practical cost of fuel cells for cars will remain high, however, until production volumes incorporate economies of scale and a well-developed supply chain. Until then, costs are roughly one order of magnitude higher than DOE targets.
Also in 2008, Wired News reported that "experts say it will be 40 years or more before hydrogen has any meaningful impact on gasoline consumption or global warming, and we can't afford to wait that long. In the meantime, fuel cells are diverting resources from more immediate solutions." The Economist magazine, in 2008, quoted Robert Zubrin, the author of Energy Victory, as saying: "Hydrogen is 'just about the worst possible vehicle fuel'". The magazine noted that most hydrogen is produced through steam reformation, which creates at least as much emission of carbon per mile as some of today's gasoline cars. On the other hand, if the hydrogen could be produced using renewable energy, "it would surely be easier simply to use this energy to charge the batteries of all-electric or plug-in hybrid vehicles." The Los Angeles Times wrote in 2009, "Any way you look at it, hydrogen is a lousy way to move cars." The Washington Post asked in November 2009, "[W]hy would you want to store energy in the form of hydrogen and then use that hydrogen to produce electricity for a motor, when electrical energy is already waiting to be sucked out of sockets all over America and stored in auto batteries...?"
The Motley Fool stated in 2013 that "there are still cost-prohibitive obstacles [for hydrogen cars] relating to transportation, storage, and, most importantly, production." Volkswagen's Rudolf Krebs said in 2013 that "no matter how excellent you make the cars themselves, the laws of physics hinder their overall efficiency. The most efficient way to convert energy to mobility is electricity." He elaborated: "Hydrogen mobility only makes sense if you use green energy", but ... you need to convert it first into hydrogen "with low efficiencies" where "you lose about 40 percent of the initial energy". You then must compress the hydrogen and store it under high pressure in tanks, which uses more energy. "And then you have to convert the hydrogen back to electricity in a fuel cell with another efficiency loss". Krebs continued: "in the end, from your original 100 percent of electric energy, you end up with 30 to 40 percent."
In 2014, electric automotive and energy futurist Julian Cox calculated the emissions produced per EPA combined cycle driven mile, well to wheel, by real-world hydrogen fuel cell vehicles, using figures aggregated from the test subjects enrolled in the US DOE's long-term NREL FCV study. The report presented official data refuting marketers' claims of any inherent benefit of hydrogen fuel cells over the drivetrains of equivalent conventional gasoline hybrids, or even of ordinary small-engined cars of equivalent drivetrain performance, due to the emissions intensity of hydrogen production from natural gas. The report demonstrated the economic inevitability of continued methane use in hydrogen production due to the cost-tripling effect of hydrogen fuel cells on renewable mileage, caused by the conversion losses of electricity to and from hydrogen when compared to the direct use of electricity in an ordinary electric vehicle.
The analysis contradicts the marketing claims of vehicle manufacturers involved in promoting hydrogen fuel cells. The analysis concluded that public policy in relation to hydrogen fuel cells has been misled by false equivalences to very large, very old or very high powered gasoline vehicles that do not accurately reflect the choices of emissions reduction technologies readily available amongst lower cost and pre-existing newer vehicle choices available to consumers. Cox wrote in 2014 that producing hydrogen from methane "is significantly more carbon intensive per unit of energy than coal. Mistaking fossil hydrogen from the hydraulic fracturing of shales for an environmentally sustainable energy pathway threatens to encourage energy policies that will dilute and potentially derail global efforts to head-off climate change due to the risk of diverting investment and focus from vehicle technologies that are economically compatible with renewable energy." The Business Insider commented in 2013:
Pure hydrogen can be industrially derived, but it takes energy. If that energy does not come from renewable sources, then fuel-cell cars are not as clean as they seem. ... Another challenge is the lack of infrastructure. Gas stations need to invest in the ability to refuel hydrogen tanks before FCEVs become practical, and it's unlikely many will do that while there are so few customers on the road today. ... Compounding the lack of infrastructure is the high cost of the technology. Fuel cells are "still very, very expensive".
In 2014, former Dept. of Energy official Joseph Romm wrote three articles stating that FCVs still had not overcome the following issues: high cost of the vehicles, high fueling cost, and a lack of fuel-delivery infrastructure. He stated: "It would take several miracles to overcome all of those problems simultaneously in the coming decades." Moreover, he said, "FCVs aren't green" because of escaping methane during natural gas extraction and during the production of hydrogen, 95% which is produced using the steam reforming process. He concluded that renewable energy cannot economically be used to make hydrogen for an FCV fleet "either now or in the future." GreenTech Media's analyst reached similar conclusions in 2014. In 2015, Clean Technica listed some of the disadvantages of hydrogen fuel cell vehicles as did Car Throttle. Another Clean Technica writer concluded, "while hydrogen may have a part to play in the world of energy storage (especially seasonal storage), it looks like a dead end when it comes to mainstream vehicles."
infographic reveals the inefficiencies of the hydrogen conversion chain,
that hundreds of researchers are doing their best to overcome with more
efficient electrolyzers. PEM fuel cells are unlikely to get much above 50%
by way of turning hydrogen gas into electricity,
but should there be a breakthrough, that would be a huge bonus.
A 2017 analysis published in Green Car Reports found that the best hydrogen fuel cell vehicles consume "more than three times more electricity per mile than an electric vehicle ... generate more greenhouse-gas emissions than other powertrain technologies ... [and have] very high fuel costs. ... Considering all the obstacles and requirements for new infrastructure (estimated to cost as much as $400 billion), fuel-cell vehicles seem likely to be a niche technology at best, with little impact on U.S. oil consumption. In 2017, Michael Barnard, writing in Forbes, listed the continuing disadvantages of hydrogen fuel cell cars and concluded that "by about 2008, it was very clear that hydrogen was and would be inferior to battery technology as a storage of energy for vehicles. By 2025 the last hold outs should likely be retiring their fuel cell dreams.”
A 2019 video by Real Engineering noted that using hydrogen as a fuel for cars does not help to reduce carbon emissions from transportation. The 95% of hydrogen still produced from fossil fuels releases carbon dioxide, and producing hydrogen from water is an energy-consuming process. Storing hydrogen requires more energy either to cool it down to the liquid state or to put it into tanks under high pressure, and delivering the hydrogen to fueling stations requires more energy and may release more carbon. The hydrogen needed to move a FCV a kilometer costs approximately 8 times as much as the electricity needed to move a BEV the same distance. Also in 2019, Katsushi Inoue, the president of Honda Europe, stated, "Our focus is on hybrid and electric vehicles now. Maybe hydrogen fuel cell cars will come, but that’s a technology for the next era." A 2020 assessment concluded that hydrogen vehicles are still only 38% efficient, while battery EVs are 80% efficient.
FCEVs cannot get away from the poor conversion efficiency, but hydrogen in
SmartNet Service Stations, provides a way of storing large amounts of energy, where batteries cannot compete. Especially, for load leveling of
grids, so plugging the infrastructure mix that is at the moment chaotic to say the least, so insecure.
most common fuel cell is comprised of a stack of PEM modules.
EV AUTO MANUFACTURERS:
OF ELECTRIC TRUCKS
OF ELECTRIC BUSES & COACHES
use our A-Z
INDEX to navigate this site
website is provided on a free basis as a public information service.
copyright © Climate Change Trust 2021. Solar
Studios, BN271RF, United Kingdom. | <urn:uuid:202b7160-de56-481e-8a68-eacdde1523de> | CC-MAIN-2021-21 | http://www.hydrogenbatteries.org/FCEVs_Fuel_Cell_Electric_Vehicles.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00376.warc.gz | en | 0.949728 | 5,229 | 3.90625 | 4 |
South Lebanon conflict (1985–2000)
South Lebanon conflict
Part of the Israeli–Lebanese conflict, the Hezbollah–Israel conflict, and the Iran–Israel proxy conflict
Israeli APCs approaching an SLA outpost in South Lebanon, 1987
Belligerents: Israel and the South Lebanon Army (SLA) against Hezbollah and allied Lebanese militias
Commanders and leaders: Aql Hashem † (SLA); Erez Gerstein † (Israel); Abbas al-Musawi † (Hezbollah)
Strength: IDF: 1,000–1,500 troops
Casualties and losses:
SLA: 621 killed (1978–2000), 639 wounded (1982–1999)
Israel: 559 killed (256 in combat)
Hezbollah: 1,276 killed (1982–2000)
Civilians: 270 Lebanese civilians killed, 500 wounded; 7 Israeli civilians killed by rockets
The South Lebanon conflict (1985–2000), known in Israel as the Security Zone in Lebanon Campaign, refers to 15 years of warfare between the Israeli-backed South Lebanon Army (SLA), supported militarily and logistically by the Israel Defense Forces, and Lebanese Muslim guerrillas led by Hezbollah, within what was defined as the "security zone" in South Lebanon. The term can also cover the conflict's longer continuation in the region, beginning with the transfer of Palestine Liberation Organization (PLO) operations to South Lebanon after Black September in the Kingdom of Jordan. Historical tension between Palestinian refugees and Lebanese factions fed Lebanon's violent internal political struggle between many different factions; in this light, the South Lebanon conflict can be seen as part of the Lebanese Civil War.
In the conflicts preceding the 1982 Israeli invasion, including Operation Litani, Israel attempted to eradicate PLO bases in Lebanon and supported Christian Maronite militias. The 1982 invasion resulted in the PLO's departure from Lebanon. The creation of the security zone in South Lebanon benefited Israeli civilians, though at great cost to Palestinian and Lebanese civilians. Despite the eradication of PLO bases and Israel's partial withdrawal in 1985, the invasion intensified conflict with local Lebanese militias and spurred the consolidation of several Shia Muslim movements, including Hezbollah and Amal, out of a previously unorganized guerrilla presence in the south. Over the years, military casualties on both sides grew as both parties fielded more modern weaponry and Hezbollah refined its tactics. By the early 1990s, Hezbollah, with support from Syria and Iran, emerged as the leading group and military power, monopolizing guerrilla activity in South Lebanon.
The IDF had no clear end-game in South Lebanon and was unfamiliar with the type of warfare Hezbollah waged; while it could inflict losses on Hezbollah, it lacked a long-term strategy. With Hezbollah targeting the Galilee with rockets, the security zone's official purpose - protecting Israel's northern communities - seemed contradicted in practice. Hezbollah also excelled at psychological warfare, often recording its attacks on Israeli soldiers. Following the 1997 Israeli helicopter disaster, the Israeli public began to seriously question whether the occupation of South Lebanon was worth maintaining. The Four Mothers movement rose to the forefront of public discourse and played a leading role in swaying opinion in favor of withdrawal.
It was common knowledge in Israel that the security zone was not meant to be permanent, but successive governments hoped a withdrawal could be carried out in the context of a wider agreement with Syria and, by extension, Lebanon. Talks with Syria failed, however. In 2000, fulfilling an election campaign promise, newly elected Prime Minister Ehud Barak unilaterally withdrew Israeli forces from Southern Lebanon, in accordance with UN Security Council Resolution 425 of 1978; the withdrawal resulted in the immediate and total collapse of the South Lebanon Army, many of whose members escaped to Israel. The Lebanese government and Hezbollah still consider the withdrawal incomplete until Israel leaves the Shebaa Farms. Since the withdrawal, Hezbollah has consolidated military and civil control over the southern part of Lebanon. In 2020, Israel retrospectively recognized the conflict as a war.
Following the 1948 Arab–Israeli War, the 1949 Armistice Agreements were signed with United Nations mediation. The Lebanese–Israeli agreement created the armistice line, which coincided exactly with the existing international boundary between Lebanon and Palestine from the Mediterranean to the Syrian tri-point on the Hasbani River. From this tri-point on the Hasbani the boundary follows the river northward to the village of Ghajar, then northeast, forming the Lebanese–Syrian border. (The southern line from the tri-point represents the Palestine–Syria border of 1923.) Israeli forces captured and occupied 13 villages in Lebanese territory during the conflict, including parts of Marjayun, Bint Jubayl, and areas near the Litani River, but withdrew following international pressure and the armistice agreement.
Although the Israel–Lebanon border remained relatively quiet, entries in the diary of Moshe Sharett point to a continued territorial interest in the area. On 16 May 1954, during a joint meeting of senior officials of the defense and foreign affairs ministries, Ben Gurion raised the issue of Lebanon due to renewed tensions between Syria and Iraq, and internal trouble in Syria. Dayan expressed his enthusiastic support for entering Lebanon, occupying the necessary territory and creating a Christian regime that would ally itself with Israel. The issue was raised again in discussions at the Protocol of Sèvres.
Israel's victory in the 1967 Six-Day War vastly expanded the territory it occupied in every neighboring country except Lebanon, though the occupation of the Golan Heights lengthened the effective Lebanon–Israel frontier. Although justified on defensive grounds, later Israeli expansion into Lebanon under very similar terms followed the 1977 elections, which brought the Revisionist Likud to power for the first time.
Emerging conflict between Israel and Palestinian militants
Beginning in the late 1960s, and especially in the 1970s following the PLO's defeat in Black September in Jordan, displaced Palestinians, including militants affiliated with the Palestine Liberation Organization, began to settle in South Lebanon. The unrestrained buildup of Palestinian militias, and the large autonomy they exercised, gave rise to the popular term "Fatahland" for South Lebanon. By the mid-1970s, tensions between the various Lebanese factions and the Palestinians had exploded into the Lebanese Civil War.
Following multiple attacks launched by Palestinian organizations in the 1970s, which increased with the Lebanese Civil War, the Israeli government decided to take action. Seeking to break up and destroy this PLO stronghold, Israel briefly invaded Lebanon in 1978, with mixed results. The PLO was pushed north of the Litani River, and a buffer zone policed by the United Nations Interim Force in Lebanon (UNIFIL) was created to keep it from returning. In addition, building on earlier covert support, Israel established a second buffer through renegade Saad Haddad's Christian Free Lebanon Army enclave (initially based only in the towns of Marjayoun and Qlayaa); the now-public Israeli military commitment to the Christian forces was strengthened. For the first time, however, Israel received substantial adverse publicity in the world press over the damage in South Lebanon, from which some 200,000 Lebanese (mostly Shia Muslims) fled, ending up in the southern suburbs of Beirut; this indirectly led the Syrian forces in Lebanon to turn against the Christians in late June and complicated the dynamics of the ongoing Lebanese Civil War.
1982 Israeli invasion
In 1982, the Israeli military launched "Operation Peace for Galilee", a full-scale invasion of Lebanese territory. It followed the 1978 Litani Operation, which had given Israel control of territory near the Israeli–Lebanese border. The invasion aimed to weaken the PLO as a unified political and military force and eventually led to the withdrawal of PLO and Syrian forces from Lebanon. By the end of the operation, Israel controlled Lebanon from Beirut southward and attempted to install a pro-Israeli government in Beirut that would sign a peace accord. This goal was never realized, partly because of the assassination of President-elect Bashir Gemayel in September 1982 and the refusal of the Lebanese Parliament to endorse the accord. The PLO's withdrawal in 1982 prompted some Lebanese nationalists to mount a resistance against the Israeli army, led by the Lebanese Communist Party and the Amal movement. During this time, some Amal members formed an Iranian-backed Islamic group that became the nucleus of the future "Islamic Resistance" and eventually Hezbollah.
Occupation period 1982–1985 – the emergence of Hezbollah
Increased hostilities against the US culminated in the April 1983 United States Embassy bombing. In response, the US brokered the May 17 Agreement in an attempt to stall hostilities between Israel and Lebanon; the agreement ultimately failed to take shape, and hostilities continued. In October, the United States Marine barracks in Beirut was bombed, an attack usually attributed to Islamic Resistance groups. Following this incident, the United States withdrew its military forces from Lebanon.
Suicide bombings became increasingly common at this time and were a major concern of the Israel Defense Forces (IDF) both near Beirut and in the south. Among the most serious were the two suicide bombings against the Israeli headquarters in Tyre, which killed 103 soldiers, border policemen, and Shin Bet agents, as well as 49–56 Lebanese. Israel believes those attacks were among the first organized actions of the Shi'ite militants who later formed Hezbollah. Subsequently, Israel withdrew from the Shouf Mountains but continued to occupy Lebanon south of the Awali River.
An increasing number of Islamic militias began operating in South Lebanon, launching guerrilla attacks on Israeli and pro-Israel militia positions. Israeli forces often responded with tightened security measures and airstrikes on militant positions, and casualties on all sides climbed steadily. In the vacuum left by the eradication of the PLO, the disorganized Islamic militants of South Lebanon began to consolidate. The emerging Hezbollah, soon to become the preeminent Islamic militia, evolved during this period, though scholars disagree as to when it came to be regarded as a distinct entity. Over time, members of a number of Shi'a groups were assimilated into the organization, such as Islamic Jihad, the Organization of the Oppressed on Earth, and the Revolutionary Justice Organization.
Israeli withdrawal to Security Zone
On 16 February 1985, Israel withdrew from Sidon and turned it over to the Lebanese Army, but came under attack: 15 Israelis were killed and 105 wounded during the withdrawal, and dozens of SLA members were assassinated. Under its "Iron Fist" policy, Israel retaliated with a series of raids. On 10 March, a suicide bomber killed twelve Israeli soldiers in a convoy near Metula, inside Israel; the following day, Israeli forces raided the town of Zrariyah, killing 40 men. From mid-February to mid-March, the Israelis lost 18 dead and 35 wounded. On 9 April, a Shiite girl drove a car bomb into an IDF convoy, and the following day a soldier was killed by a land mine. During the same period, Israeli forces killed 80 Lebanese guerrillas in five weeks and took another 1,800 Shi'as prisoner. Israel withdrew from the Bekaa Valley on 24 April and from Tyre on the 29th, but continued to occupy a security zone in Southern Lebanon.
Beginning of the security zone conflict
In 1985, Hezbollah released an open letter addressed to "The Downtrodden in Lebanon and in the World", which declared that the world was divided between the oppressed and the oppressors, naming the oppressors as chiefly the United States and Israel. The letter legitimized and praised the use of violence against the enemies of Islam, mainly the West.
Israeli and SLA forces in the security zone began to come under attack. The first major incident occurred in August 1985, when Lebanese guerrillas believed to have been from Amal ambushed an Israeli convoy: two Israeli soldiers and three of the attackers were killed in the ensuing firefight.
Lebanese guerrilla attacks, mainly the work of Hezbollah, increased. Resistance to the Israeli occupation included hit-and-run guerrilla attacks, suicide bombings, and Katyusha rocket fire on civilian targets in Northern Israel, including Kiryat Shmona. The Katyusha proved an effective weapon and became a mainstay of Hezbollah's military capabilities in South Lebanon. The attacks caused both military and civilian casualties. A considerable number of Lebanese guerrillas, however, were killed fighting Israeli and SLA troops, and many were captured. Prisoners were often detained in Israeli military prisons or by the SLA at the Khiam detention center, where detainees were often tortured. Lebanese prisoners in Israel were held for participating in guerrilla movements, many for long periods.
In 1987, Hezbollah fighters of the Islamic Resistance stormed and captured a South Lebanon Army outpost at Bra'shit in the security zone. A number of its defenders were killed or taken prisoner, and the Hezbollah flag was raised over it. A Sherman tank was blown up, and an M113 armored personnel carrier was captured and driven triumphantly all the way to Beirut.
In May 1988, Israel launched an offensive codenamed Operation Law and Order, in which 1,500–2,000 Israeli soldiers raided the area around the Lebanese village of Maidun. In two days of fighting, the IDF killed 50 Hezbollah fighters while losing 3 dead and 17 wounded.
After Israel destroyed Hezbollah's headquarters in the town of Marrakeh, a Hezbollah suicide bomber destroyed an Israeli transport truck carrying soldiers on the Israel-Lebanon border. In response, Israeli forces ambushed two Hezbollah vehicles, killing eight Hezbollah fighters.
On 28 July 1989, Israeli commandos captured Sheikh Abdul Karim Obeid, the leader of Hezbollah. This action led to the adoption of United Nations Security Council Resolution 638, which condemned all hostage takings by all sides.
The Lebanese Civil War officially ended with the 1989 Ta'if Accord, but armed combat continued until at least October 1990, and in South Lebanon until at least 1991. The continued Israeli presence in South Lebanon meant continued low-intensity warfare and sporadic major combat until the Israeli withdrawal in 2000.
Post Civil War conflict
Though most of the civil war's conflicts ended in the months following the Ta'if Accord, Israel maintained its military presence in South Lebanon, and the Islamic Resistance, by now dominated by Hezbollah, continued operations there. On 16 February 1992, Hezbollah leader Abbas al-Musawi was killed along with his wife, son, and four others when Israeli AH-64 Apache helicopter gunships fired three missiles at his motorcade. The attack came in retaliation for the killing of three Israeli soldiers two days earlier, when their camp was infiltrated. Hezbollah responded with rocket fire on the Israeli security zone; Israel fired back and sent two armored columns past the security zone to hit Hezbollah strongholds in Kafra and Yater. Musawi was succeeded by Hassan Nasrallah, one of whose first public declarations was the "retribution" policy: if Israel hit Lebanese civilian targets, Hezbollah would retaliate with attacks on Israeli territory. Meanwhile, Hezbollah continued attacks against IDF targets within occupied Lebanese territory. In retaliation for Musawi's killing, Ehud Sadan, the chief of security at the Israeli Embassy in Turkey, was assassinated by a car bomb.
In 1993, hostilities flared again. After a month of Hezbollah shelling of Israeli towns and attacks on its soldiers, Israel conducted a seven-day operation in July 1993, Operation Accountability, to hit Hezbollah. One Israeli soldier and 8–50 Hezbollah fighters were killed, along with 2 Israeli and 118 Lebanese civilians. After a week of fighting in South Lebanon, a mutual agreement mediated by the United States prohibited attacks on civilian targets by both sides.
The end of Operation Accountability saw a few days of calm before light shelling resumed. On 17 August, a major artillery exchange took place, and two days later, nine Israeli soldiers were killed in two Hezbollah attacks. Israel responded with airstrikes against Hezbollah positions, killing at least two Hezbollah fighters.
Continued hostility in late 1990s
In May 1994, Israeli commandos kidnapped an Amal leader, Mustafa Dirani, and in June, an Israeli airstrike against a training camp killed 30–45 Hezbollah cadets. Hezbollah retaliated by firing four barrages of Katyusha rockets into northern Israel.
In May 1995, four Hezbollah fighters were killed in a firefight with Israeli troops while trying to infiltrate an Israeli position.
Operation Grapes of Wrath in 1996 resulted in the deaths of more than 150 civilians and refugees, most of them in the shelling of a United Nations base at Qana. Within a few days, Israel and Hezbollah agreed to a ceasefire committing both sides to avoid civilian casualties; combat nevertheless continued for at least two more months. In all, 14 Hezbollah fighters, about a dozen Syrian soldiers, and 3 Israeli soldiers were killed in the fighting.
Brig. Gen. Eli Amitai, the IDF commander of the security zone, was lightly injured on 14 December 1996 when an IDF convoy he was travelling in was ambushed in the eastern sector of the security zone. Less than a week later, Amitai was again lightly injured when Hezbollah unleashed a mortar barrage on an SLA position near Bra'shit that he was visiting together with Maj. Gen. Amiram Levine, head of the IDF Northern Command.
In December 1996, two SLA soldiers were killed in three days of fighting, and a Hezbollah fighter was also killed by Israeli soldiers.
On 4 February 1997, two Israeli transport helicopters collided over She'ar Yashuv in Northern Israel while waiting for clearance to fly into Lebanon. A total of 73 IDF soldiers were killed in the disaster. On 28 February one Israeli soldier and four Hezbollah guerrillas were killed in a clash.
Throughout 1997, Israeli special forces, particularly the Egoz Reconnaissance Unit, hampered Hezbollah's ability to infiltrate the security zone and plant roadside bombs by staking out Hezbollah infiltration trails. Encouraged by these successes, Israeli commandos began conducting raids north of the security zone to kill Hezbollah commanders. In one such raid, carried out on the night of 3–4 August 1997, Golani Brigade soldiers raided the village of Kfour and left behind three roadside bombs packed with ball bearings, which were detonated hours later from an Israeli Air Force UAV, killing five Hezbollah members including two commanders. However, on 28 August, a major friendly-fire incident occurred at Wadi Saluki during a clash between IDF troops of the Golani Brigade, supported by air and artillery, and Amal militants. Although four Amal militants were killed, Israeli shelling started a fire that engulfed the area, killing four Israeli soldiers.
On 5 September 1997, a raid by 16 Israeli Shayetet 13 naval commandos failed after the troops stumbled into a Hezbollah and Amal ambush. As the force headed towards its target, it was hit by IEDs and subjected to withering fire that killed the commander, Lt. Col. Yossi Korakin, and detonated bombs carried by another soldier, killing more of the force. The survivors radioed for help, and Israel immediately dispatched a rescue team from Unit 669 and Sayeret Matkal in two CH-53 helicopters. A supporting force of helicopters and missile boats arrived and conducted airstrikes as the rescuers evacuated the dead and the survivors. Lebanese Army anti-aircraft units put up anti-aircraft fire and launched illumination rounds at the helicopters, and an Israeli F-16 subsequently attacked an anti-aircraft position. Hezbollah put up mortar fire, killing a doctor with the rescue force and damaging a helicopter; Israeli missile boats fired at the source of the mortar fire. The battle ended when Israel, by contacting the US government with a message to be passed on to Syria and from there to Hezbollah, threatened to respond with massive force if Hezbollah tried to stop the rescue mission, causing Hezbollah and Amal to cease fire while the Lebanese Army moved in. Twelve Israelis were killed, along with six Hezbollah and Amal fighters and two Lebanese soldiers. In 2010, Hassan Nasrallah claimed that Hezbollah had hacked into Israeli UAVs flying over Lebanon, learned the route the commandos planned to take, and prepared the ambush accordingly. On 13–14 September, IDF raids in Lebanon killed a further four Hezbollah fighters and six Lebanese soldiers.
On 12 September 1997, three Hezbollah fighters were killed in an ambush by Egoz commandos on the edge of the security zone. One of them was Hadi Nasrallah, the son of Hezbollah leader Hassan Nasrallah. On 25 May 1998 the remains of Israeli soldiers killed in the failed commando raid were exchanged for 65 Lebanese prisoners and the bodies of 40 Hezbollah fighters and Lebanese soldiers captured by Israel. Among the bodies returned to Lebanon were the remains of Hadi Nasrallah.
During 1998, 21 Israeli soldiers were killed in southern Lebanon. Israel undertook a concerted campaign to hamper Hezbollah's capabilities, and in December 1998, the Israeli military assassinated Zahi Naim Hadr Ahmed Mahabi, a Hezbollah explosives expert, north of Baalbek.
On 28 February 1999, a roadside bomb exploded on the road between Kaukaba and Arnoun in the Israeli-occupied security zone. Brigadier General Erez Gerstein, commander of the Golani Brigade and head of the IDF Liaison Unit in Lebanon, and thus the highest-ranking Israeli officer serving in Lebanon at the time, was killed in the blast, along with two Druze Israeli soldiers and an Israeli journalist.
In May 1999, Hezbollah forces simultaneously attacked 14 Israeli and SLA outposts in South Lebanon. The SLA compound at Beit Yahoun was overrun and one SLA soldier taken prisoner; the Hezbollah fighters made off with an armored personnel carrier (APC). The area was bombed by the Israeli Air Force, and the captured APC was paraded through the southern suburbs of Beirut.
In August 1999, Hezbollah commander Ali Hassan Deeb, better known as Abu Hassan, a leader in Hezbollah's special force, was assassinated in an Israeli military operation. Deeb was driving in Sidon when two roadside bombs were detonated by a remote signal from a UAV overhead.
Overall, in the course of 1999, several dozen Hezbollah and Amal fighters were killed, along with twelve Israeli soldiers and one civilian, one of them in an accident.
2000: Israeli withdrawal and collapse of South Lebanon Army
In July 1999, Ehud Barak became Israel's Prime Minister, promising that Israel would unilaterally withdraw to the international border by July 2000. Until then, many had believed Israel would withdraw from South Lebanon only upon reaching an agreement with Syria.
In January 2000, Hezbollah assassinated the commander of the South Lebanon Army's Western Brigade, Colonel Aql Hashem, at his home in the security zone. Hashem had been responsible for the SLA's day-to-day operations and was a leading candidate to succeed General Antoine Lahad, so his assassination cast doubt on the SLA's leadership. The pursuit and assassination of Hashem were documented step by step, and the footage was broadcast on Hezbollah's TV channel al-Manar. The operation, and the way it was presented in the media, dealt a devastating blow to SLA morale.
During the spring of 2000, Hezbollah operations stepped up considerably, with persistent harassment of Israeli military outposts in occupied Lebanese territory. In preparation for the withdrawal, Israeli forces began abandoning several forward positions within the security zone. On 24 May, Israel announced that it would withdraw all troops from South Lebanon; all Israeli forces had left by the end of the next day, more than six weeks ahead of the stated deadline of 7 July.
The Israeli pullout resulted in the collapse of the SLA and the rapid advance of Hezbollah forces into the area. As the Israel Defense Forces (IDF) withdrew, thousands of Shi'a Lebanese rushed back to the south to reclaim their properties. The withdrawal was widely considered a victory for Hezbollah and boosted its popularity in Lebanon. Its completeness is still disputed, as the Lebanese government and Hezbollah claim Israel still holds the Shebaa Farms, a small piece of territory on the Lebanon–Israel–Syria border with disputed sovereignty.
As the Syrian-backed Lebanese government refused to demarcate its border with Israel, Israel worked with UN cartographers, led by regional coordinator Terje Rød-Larsen, to certify that it had withdrawn from all occupied Lebanese territory. On 16 June 2000, the UN Security Council concluded that Israel had indeed withdrawn its forces from all of Lebanon, in accordance with United Nations Security Council Resolution 425 (1978).
Israel considered the move a tactical withdrawal, since it had always regarded the security zone as a buffer to defend its citizens. By ending the occupation, Barak's cabinet assumed it would improve Israel's international standing. Ehud Barak argued that "Hezbollah would have enjoyed international legitimacy in their struggle against a foreign occupier" had Israel not withdrawn unilaterally without a peace agreement.
Upon Israel's withdrawal, fear became widespread among the Christian Lebanese of Southern Lebanon that Hezbollah would seek vengeance against those thought to have supported Israel. During and after the withdrawal, around 10,000 Lebanese, mostly Maronites, fled into the Galilee. Hezbollah later met with Lebanese Christian clerics to reassure them that the Israeli withdrawal was a victory for Lebanon as a nation, not just for one sect or militia.
The tentative peace resulting from the withdrawal did not last. On 7 October 2000, Hezbollah attacked Israel in a cross-border raid, abducting three Israeli soldiers who were patrolling the Lebanese border. The event escalated into two months of exchanges of fire between Israel and Hezbollah, primarily on the Hermon ridge. The bodies of the abducted soldiers were returned to Israel in a January 2004 prisoner exchange involving 450 Lebanese prisoners held in Israeli jails. The long-time Lebanese prisoner Samir al-Quntar was excluded from the deal; the government of Israel, however, agreed to a "further arrangement" whereby it would release al-Quntar if supplied with "tangible information on the fate of captive navigator Ron Arad".
According to Harel and Issacharoff, the second phase of the prisoner exchange deal was only a "legal gimmick": Israel was not satisfied with the information supplied by Hezbollah and refused to release al-Quntar. "Cynics may well ask whether it was worth getting entangled in the Second Lebanon War just to keep Kuntar […] in prison for an extra few years."
In July 2006, Hezbollah carried out a cross-border raid while shelling Israeli towns and villages, kidnapping two Israeli soldiers and killing eight others. In retaliation, Israel began the 2006 Lebanon War to rescue the abducted soldiers and create a buffer zone in Southern Lebanon.
- Four Mothers Archive, at Ohio State University-University Libraries.
- UN Press Release SC/6878. (18 June 2000). Security Council Endorses Secretary-General's Conclusion on Israeli Withdrawal From Lebanon As of 16 June.
- Naseer H. Aruri, Preface to the 3rd(?) edition, Israel's Sacred Terrorism, Livia Rokach, Association of Arab-American University Graduates, ISBN 978-0-937694-70-1
- Livia Rokach, Israel's Sacred Terrorism, Association of Arab-American University Graduates, ISBN 978-0-937694-70-1
- Avi Shlaim, The Protocol of Sèvres, 1956: Anatomy of a War Plot, International Affairs, 73:3 (1997), 509–530
- Urban Operations: An Historical Casebook. "Siege of Beirut", by George W. Gawrych. US Army Combat Studies Institute, Fort Leavenworth, KS. 2 October 2002. Available at globalsecurity.org.
- Major George C. Solley, The Israeli Experience in Lebanon, 1982–1985, US Marine Corps Command and Staff College, Marine Corps Development and Education Command, Quantico, Virginia. 10 May 1987. Available from GlobalSecurity.org
- 1982 Lebanon Invasion. BBC News.
- Norton, Augustus Richard; Journal of Palestine, 2000
- Khoury, Hala (1982). "Israel leaves front lines in south Lebanon". UPI. Retrieved 31 July 2019.
- Tabitha, Petran (1987). The struggle over Lebanon. New York: Monthly Review Press. p. 378. ISBN 0853456518. Retrieved 11 April 2021.
- Friedman, Thomas L.; Times, Special to The New York (6 August 1985). "2 Israeli Soldiers and 3 Guerrillas Killed in South Lebanon Shootout". Retrieved 25 April 2019 – via NYTimes.com.
- Blanford, Nicholas, Warriors of God - Inside Hezbollah's Thirty-Year Struggle Against Israel, Random House, New York, 2011, pp. 85-86
- Journal of Palestine Studies. Volume XVII No 3 (67) Spring 1988. ISSN 0377-919X. Page 221. Chronology compiled by Katherine M. LaRiviere
- Ross, Michael The Volunteer: The Incredible True Story of an Israeli Spy on the Trail of International Terrorists (2006)
- UN Security Council (31 July 1989). "26. The Question of Hostage-Taking and Abduction" (PDF). United Nations. Retrieved 26 April 2019.
- UN Resolution 638, reprinted by Jewish Virtual Library
- Tension grows in South Lebanon as Israel bombs guerrilla targets. New York Times, 8 November 1991.
- Time Magazine: Vengeance is Mine (2 March 1992)
- Cowell, Alan (8 March 1992). "Car Bomb Kills an Israeli Embassy Aide in Turkey". Retrieved 25 April 2019 – via NYTimes.com.
- Pike, John (30 July 2006). "Operation Accountability". Global Security. Archived from the original on 6 September 2018. Retrieved 25 January 2011.
- "9 Israeli Soldiers Killed in 2 Guerrilla Ambushes : Mideast: Hezbollah attack in southern Lebanon is the deadliest in five years. Israeli planes strike back". LA Times. 20 August 1993. Retrieved 25 April 2019.
- Haberman, Clyde (3 June 1994). "Dozens Are Killed As Israelis Attack Camp in Lebanon". The New York Times. Retrieved 25 January 2011.
- Haberman, Clyde (3 June 1994). "Dozens Are Killed As Israelis Attack Camp in Lebanon". The New York Times.
- "LEBANON: ISRAELI SOLDIERS SHOOT DEAD 4 HEZBOLLAH GUERILLAS". AP Archive. Associated Press. Retrieved 25 April 2019.
- Segal, Naomi (16 December 1996). "Fighting Erupts in Lebanon After Rockets Hit Jewish State". JTA. Retrieved 10 November 2011.
- Segal, Naomi (20 December 1996). "Senior IDF Officer Wounded on Visit to Southern Lebanon". JTA. Retrieved 10 November 2011.
- "South Lebanon - Hezbollah guerrilla shot dead". AP Archive. Retrieved 25 April 2019.
- "Israeli Soldier, 4 Guerrillas Die in Lebanon Clash". Los Angeles Times. 1 March 1997.
- "4 Israelis Killed When Fellow Troops Start Fire". LA Times. 29 August 1997. Retrieved 25 April 2019.
- Lappin, Yaakov (11 August 2010). "Nasrallah recalls '97 Shayetet to 'deflect pressure'". The Jerusalem Post. Retrieved 3 July 2013.
- Blanford, pp. 190-192
- Survey of Arab-Israeli Relations, p. 232
- Israeli Security Sources (26 January 2004). "Background on Israeli POWs and MIAs". Ministry of Foreign Affairs (Israel). Retrieved 4 December 2011.
- "Israel Kills Hezbollah Bomb Expert". Los Angeles Times. 2 January 1999. Retrieved 19 January 2012.
- Sontag, Deborah (24 February 1999). "Israel Mourns More War Dead in Lebanon". The New York Times. Retrieved 3 July 2013.
- Blanford, Nicholas (24 February 1999). "3 Israelis killed in Hizbullah ambush". The Daily Star. Retrieved 3 July 2013.
- "Lebanon Liaison Unit Commander Killed in Security Zone Explosion". Globes. 1 March 1999. Retrieved 25 August 2016.
- Blanford, Nicholas (17 May 1999). "Hizbullah overruns SLA post, makes off with APC". The Daily Star. Retrieved 3 July 2013.
- Farhat, Sally (18 May 1999). "Hizbullah parades captured APC". The Daily Star. Retrieved 3 July 2013.
- "Israel Blamed in Fatal Bomb Attack on a Hezbollah Leader". Los Angeles Times. 17 May 1999. Retrieved 17 August 2013.
- Blanford, p. 204
- "jewishvirtualibrary.org". Retrieved 6 February 2015.
- Lebanon Country Assessment Archived 21 September 2006 at the Wayback Machine. United Kingdom Home Office, October 2001.
- Jabir, Kamil (29 July 2007). "خالد بزي (قاسم) يكتب ملحمة بنت جبيل (Khalid Bazzi (Qasim) writes the Bint Jbeil epic)". al-Akhbar. Retrieved 3 January 2012.
- Blanford, pp. 243-244
- Harb, Zahera, Channels of Resistance in Lebanon - Liberation Propaganda, Hezbollah and the Media, I.B. Tauris, London-New York, 2011, pp.214-216
- Country Profile: Lebanon Timeline.
- Camp David and After: An Exchange. (An Interview with Ehud Barak). New York Review of Books, Volume 49, Number 10. 13 June 2002. Retrieved online, 15 August 2009.
- "Government statement on prisoner exchange". MFA. 24 January 2004. Retrieved 4 December 2011.
- Avi Issacharoff and Amos Harel (19 October 2007). "Closing the Arad file?". Retrieved 14 December 2011.
- Margaret Hall, American Myopia: American Policy on Hizbollah. The Muslim World: Questions of Policy and Politics. Cornell University undergraduate research symposium. 8 April 2006.
- "...Hezbollah enjoys enormous popularity in Lebanon, especially in southern Lebanon...", Ted Koppel on NPR report: Lebanon's Hezbollah Ties. All Things Considered, 13 July 2006.
- BBC: "On This Day, May 26th".
- CNN report: Hezbollah flag raised as Israeli troops withdraw from southern Lebanon. 24 May 2000. | <urn:uuid:33039157-ea82-4441-94b8-cf72e65f41ba> | CC-MAIN-2021-21 | https://en.wikipedia.org/wiki/South_Lebanon_conflict_(1982%E2%80%932000) | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00216.warc.gz | en | 0.946843 | 7,835 | 2.78125 | 3 |
A review of Taking Darwin Seriously, by Michael Ruse
The controversy surrounding the work of Charles Darwin is a moral and political debate as well as a scientific one. A year ago last December it was brought before the Supreme Court of the United States, which heard oral argument about a Louisiana law that requires the teaching of Creation science in any public school where Darwinian evolution is taught. One of the lawyers, arguing that this law was an unconstitutional establishment of religion, offered as evidence a quotation from the legislative testimony of one of the law's supporters, who had said:
I think if you teach children that they are evolved from apes, then they will start acting like apes. If we teach them possibly that they were created by almighty God, they will believe they are creatures of God and start acting like God's children.
Henry Morris, one of the leading proponents of Creation science, has warned:
Evolution is the root of atheism, of communism, Nazism, behaviorism, racism, economic imperialism, militarism, libertinism, anarchism, and all manner of anti-Christian systems of belief and practice.
He may have exaggerated the danger somewhat. But even Friedrich Nietzsche warned of the nihilistic consequences of Darwin's teaching as a doctrine that he considered "true but deadly."
The deadliness of Darwin's teaching seems evident, for example, in the way it apparently subverts the natural rights teaching of the Declaration of Independence. Thomas Jefferson appeals to the "laws of nature and of nature's God." But Darwin seems to teach us that, by the laws of nature, we are just one life form among many, with no natural end or purpose except to survive and reproduce. Jefferson claims that human beings are naturally equal in that they are endowed with a special dignity that naturally entitles them to certain rights. The reasoning behind this thought, which could be found in the works of Aristotle, Cicero, and Locke, is that human beings are equal in their worth as human beings, who are set apart from and above all other animals by virtue of their rationality. But Darwin denies that human beings are different in kind from other animals. Thus Darwin seems to force us to confront the abysmal thought of nihilism: that there is no rationally discoverable standard in nature for giving moral weight to human life.
I would argue, however, that a proper interpretation of Darwinian biology still permits us to look to nature as a source of standards for human life. The tradition of natural right has always rested upon a biological foundation. Thomas Aquinas speaks of the natural law as the law nature has given to all animals. And Aristotle supports his conception of natural right with biological claims. I believe a comparison of Aristotle's biological writings with those of Darwin would suggest that Aristotle's biological understanding of natural right is still defensible even in the light of Darwin's advances. But Michael Ruse's Taking Darwin Seriously reminds us that many of Darwin's contemporary supporters would not agree with me.
Nevertheless, I think Ruse's book is one of the best ever written to help us think through the implications of Darwinism for political philosophy. (Among recent books, I would also recommend Leon Kass's Toward a More Natural Science, Hans Jonas's The Imperative of Responsibility, and H. Tristram Engelhardt's Foundations of Bioethics.) My disagreements with Ruse do not lessen my respect for his arguments, because for me they serve as instructive provocations.
Ruse's defense of Darwin is not as instructive as it might have been, however, if he had seen that to take Darwin seriously we must also take the Bible and Greek philosophy seriously. Prior to the modern era, Greek rationalism and biblical revelation were the two great sources for explaining the meaning of human existence. The founders of early modern science (such as Descartes and Bacon) sought a third alternative grounded in scientific methodology. Darwin seemed to fulfill that project by explaining the origin of all living beings through the scientific method without reliance on either philosophic speculation or biblical faith.
I would agree with Ruse's claim that Creation science is not genuine science. (The case for this conclusion has been argued well by Philip Kitcher in Abusing Science.) But Ruse is wrong to assume from this that the Book of Genesis poses no serious challenge to the Darwinian scientist. To be taught that in the beginning God created the heaven and the earth is to face the incomprehensible mystery of the origin of things, and thus to recognize the limits of human reason. Modern science cannot remove that mystery by teaching that everything evolved from a universal starting point. If it is incomprehensible that God created everything out of nothing, it is no more comprehensible that nothing turned itself into everything.
It is surprising that Ruse casually dismisses the biblical understanding of things, but it is even more surprising that he ignores the tradition of Greek philosophy. In particular, one would have expected that a book on the philosophic implications of biology would give some attention to Aristotle, who remains perhaps the greatest philosophic biologist. Aristotle's biology manifests the teleological understanding of nature as purposeful, with human beings, as the only fully rational animals, being the highest embodiments of nature's purposes. That living beings act for the sake of ends, as if these ends were conceived in the mind of a cosmic artist, is for Aristotle a fact of observation. Yet it is a fact that he never tries to explain fully. The idea that nature had immanent causes analogous to those of a conscious artist remains mysterious. Equally mysterious, in Aristotle's account, is the capacity of the human mind to understand nature. If we believe anything at all, we must believe in the validity of rational thought as a grasping of reality. But rational thought, particularly at the level of intellectual intuition, cannot be fully explained, although it is itself the precondition for explaining anything at all.
So, despite their critical disagreements, the Bible and Aristotle agree that in pondering the fundamental mystery at the core of things, absolute knowledge is unattainable. The Socratic philosopher seeks for knowledge of his own ignorance. The pious believer seeks for faith as "the substance of things hoped for, the evidence of things not seen." One speculates on the mystery. The other worships it.
Darwinian evolution does not necessarily supplant either the Bible or Aristotle. Darwin himself suggested that evolution arose from "the laws impressed on matter by the Creator." And he welcomed the argument of Asa Gray and Thomas Huxley that his theory of evolution vindicated the idea of natural teleology. Ruse, however, rejects, at least implicitly in this book, any such reconciliation of Darwinian evolution with Aristotelian teleology or biblical revelation. Instead of that, he tries to unite Darwinian biology and the philosophy of David Hume. But he cannot do that without falling into nihilism.
Nonetheless, I would consider much of what Ruse argues as a Darwinian confutation of Aristotle's biological understanding of human nature. For example, Ruse points to the human capacity for language to illustrate how our biological nature shapes our cultural conventions. The human vocal tract, unlike that of the apes, is specially adapted for speech. Although the vocal organs do not dictate any particular language, they do determine the basic patterns found in all languages. And since language is the fundamental tool of culture, we must conclude that the human capacity for culture is grounded in the biological nature of human beings. All of this sustains Aristotle's biological observations about the importance of language: human beings are by nature the only political animals because, although other animals have some ability to communicate, only human beings are capable of the articulate speech through which they reach a shared understanding of those moral concepts that constitute a political community.
Presumably Ruse would reject Aristotle's thinking insofar as he represents "traditional philosophy and theology." The traditional view of human beings, as Ruse explains it, is that although we are animals, we are special animals.
We have some special essence, which gives us a favoured place in this world and (perhaps) the next. This distinctive part of human nature is our rational faculty, or some such thing-that which enables us to see the truth about the world and about the proper courses of action binding upon us humans. (pp. 103-4)
But "if you take Darwin seriously," Ruse insists, you must reject this. "Any powers we have are no more than those brought through the crucible of the evolutionary struggle and consequent reproductive success." Ruse's position becomes self-contradictory, however. If taking Darwin seriously means recognizing the truth and worth of what Darwin teaches, then we cannot take him seriously if he teaches us that we have no power to see the truth or worth of anything.
According to Ruse, Darwinian biology, in both epistemology and ethics, supports the arguments of Hume-that is to say, it requires a denial of "metaphysical reality" and an affirmation of "common-sense reality," which means a denial of objectivity and an affirmation of subjectivity.
In epistemology, we normally think that the reality of common sense, the reality which we have truly had a role in creating (not choosing!) is the human-independent reality of the metaphysician. In ethics, we normally think the morality of common sense, the reality we have truly had a role in creating (not choosing!), is the human-independent morality of the objectivist. But they are not. (pp. 269-70)
Ruse maintains that the fundamental principles of human reasoning are innate in the human mind because they were favored by natural selection. Those primeval human ancestors who respected the law of the excluded middle, who avoided contradictions, and who knew how to count, were more likely to survive and reproduce than those who did not. Similarly, the innate sense of obligation that underlies morality could have evolved to promote biological ends. Those who felt they ought to help their relatives and neighbors, who felt that killing innocent people was wrong, and who thought no one should ever commit incest, enhanced their biological fitness.
But although both the rules of thought and the rules of morality have evolved as innate dispositions only because they serve biological ends, and not because they are objectively true or necessary, we must believe them to be objective; and thus we are unconscious of their biological origins. Rationality and morality as biological mechanisms work best for human beings when the mechanisms are concealed by the illusion of objectivity.
Ruse concedes, however, that although both reason and morality originated as biological adaptations, their cultural applications transcend their biological origins (pp. 149, 206, 223). Yet he never works out the implications of this for his general argument. If, at some point in cultural development, human thought and morality can transcend biology, does that mean that some human beings can escape the illusion of objectivity and decide rationally what is, in fact, true and right? Could they, for example, decide whether Darwin's theory of evolution is objectively true? Indeed, Ruse insists that we can know "beyond reasonable doubt" that Darwinian evolution is a fact (p. 4). If so, then we must wonder why he says so often that human beings can never know objective truth.
Ruse acknowledges the contradiction between metaphysical skepticism and Darwinian science. "I confess that the notion that there is not something solidly real to this world sounds somewhat ludicrous to a person whose basic thesis is that we all got here in an ongoing clash between rival organisms" (p. 187). To escape this paradox, he adopts Hume's common-sense realism, according to which, in Hume's words, we must affirm "that the operations of nature are independent of our thought and reasoning."
But far from resolving the contradiction, this only restates it. Nature either does or does not exist independently of our minds. We cannot believe both propositions simultaneously. Of course we can pretend to believe in metaphysical skepticism and then act on our common-sense belief in metaphysical realism, which was Hume's peculiar way of doing things. But then what's the point of pretending to believe what we do not believe? It's one of the oddest of the distinguishing features of modern thinkers beginning with Descartes: we show our profundity by feigning belief in preposterous ideas that could be seriously believed only by the insane.
Ruse speaks of himself as a philosophic "naturalist." The more appropriate label would be "idealist" or "solipsist." Like most professional philosophers today, he claims to believe the premise of the early modern philosophers (such as Descartes, Hobbes, Locke, and Hume) that the mind knows directly only its own states-"ideas," "representations," or "impressions." Therefore, he must also pretend to believe that the order that we think we see in nature (such as causality) is in fact only the order in our minds. We mistakenly identify our subjective impressions with objective reality. It is an absurd vision, but Ruse endorses it (or, again, pretends to endorse it) in its Humean form (pp. 182-86). Ruse exposes himself to the same solipsistic idealism that plagued Hume. And like Hume he tries to cure himself by fleeing to the "world of common sense," that strange world in which people believe they can know something about reality. At this point Ruse's teaching becomes as incoherent as Hume's. He cannot enter the world of common sense as a skeptical alien. He cannot live in that world while still believing it's all an illusion.
To live in the world of common sense, to be a true "naturalist," one must recognize the Aristotelian distinction between that which is apprehended in the mind and that by which it is apprehended. Ideas are not the objects of apprehension; rather, ideas are that by which we apprehend objects. Ideas are the conceptual vehicles by which we reflect on things in the world. This simple thought supports the realism that is commonsensical because it is metaphysical.
One cannot be a metaphysical skeptic and a common-sense realist at the same time, for once one has affirmed metaphysical skepticism, one cannot speak about the natural reality of either the mind or morality. Hume's skepticism requires him to say that a mind is nothing but "a heap of impressions." Yet how could a heap of impressions have all the capacities and needs that Hume attributes to the mind? Why should these impressions be heaped in one way rather than another? When Hume speaks of the natural passions of the mind, he has to speak of it as Aristotle would-a substance with natural attributes-in contradiction to metaphysical skepticism.
Similarly, in his account of morality, Hume has to assume a universal human nature. At the beginning of An Enquiry Concerning the Principles of Morals, he contends that moral distinctions depend upon "some internal sense or feeling, which nature has made universal in the whole species." Here he cannot speak of human beings as a random heap of impressions. Rather, he must speak of them as members of a natural species endowed with natural inclinations, which is to speak the language of Aristotelian (and common-sense) realism. After all, a consistent adherence to Humean skepticism would make it impossible to say anything about human nature. Humean beings would not be human beings at all.
Ruse is wrong, therefore, in linking Darwinian biology and Hume's philosophy. The Darwinian biologist must believe that human beings have a genetically distinct nature. To assume that they are only accidental heaps of impressions would be the most radical rejection of biological science. The problem is not peculiar to Hume's thinking. Metaphysical skepticism in any form must deny that there is any natural order to things, which denies the possibility not only of biology but of any science.
Metaphysical skepticism would also deny the reality of human nature as the foundation of morality. Insofar as morality is a biological adaptation, Ruse believes, we must regard it not as "something objective, in the sense of having an authority and existence of its own, independent of human beings," but rather as "subjective, being a function of human nature, and reducing ultimately to feelings and sentiments" (p. 252). Ruse is using the concept of subjectivity in a special sense. In speaking of the subjectivity of morality as being a part of human nature, he wants to take a middle way between "traditional objectivism" and "traditional subjectivism" (pp. 215-17). Morality has no objective reference to any reality that would be eternal and independent of human beings (such as God's law or Plato's ideas). But neither is morality subjective in the sense of being merely a matter of personal choice or arbitrary feelings. Morality rests on a sense of obligation binding on all human beings. "Killing Jews because they are Jews is absolutely, objectively wrong. Period" (p. 215).
Humans share a common moral understanding. This universality is guaranteed by the shared genetic background of every member of Homo sapiens. The differences between us are far outweighed by the similarities. We (virtually) all have hands, eyes, ears, noses, and the same ultimate awareness. That is part of being human. There is, therefore, absolutely nothing arbitrary about morality, considered from the human perspective. I, like you, have forty-six chromosomes. I, like you, have a shared moral sense. People who do not have forty-six chromosomes are considered abnormal, and (probably) sick. People who do not have our moral sense are considered abnormal, and (probably) sick. (p. 255)
It would seem that Ruse would agree with Abraham Lincoln's argument that slavery is absolutely wrong and contrary to our natural moral sense, because it means that some human beings are treated as if they were not human. Our shared humanity as members of the same species is an objective fact that cannot be denied without absurdity.
Why then doesn't Ruse regard this grounding of morality in a universal human nature as sufficient to secure the objectivity of morality? His reasoning is that although human nature is not just a matter of personal whim, neither is it eternally fixed. Human beings arose from an evolutionary process in the past and could be altered in the future either by natural selection or by genetic engineering. Consequently, human nature is contingent. Although slavery contradicts human nature as we now know it, Ruse might say, we could have evolved so that, like some species of ants, some of us would have been genetically designed for slavery. And in the future, we might be able through biotechnology to alter our nature to produce a caste society based on genetic differences. Aldous Huxley foresaw this in his Brave New World.
We might now have visions of Nietzschean projects for genetic manipulation in the service of a transvaluation of all values. But Ruse hesitates. "Morality is a part of nature, and . . . an effective adaptation. Why should we forego morality any more than we should put out our eyes?" (p. 253). To which the Nietzschean nihilist might respond, If our eyes deceive us, why not put them out? Ruse has no good answer as long as he accepts Hume's metaphysical skepticism, which denies the very possibility of seeing anything as it truly is.
If Ruse were to embrace common-sense realism as metaphysical realism, we would have an answer for the nihilist. That we have eyes might be an accident of evolution. And someday there might be a universe without us or any other sighted beings to look upon it. But for now we have eyes, and even if now we see as through a glass darkly, we see something of what the world is like. Our sight may have originated as a tool for survival and reproduction, yet now it is more than that. To see is to understand, and to understand is for us desirable for its own sake. To see is not only to live but to live well, to live in a manner proper to our nature. To see whatever there is to see is to be fully awake and thus fully alive. And everywhere we look we see intelligible order. We all understand Darwin's amazement when he looked at the adaptation of parts in an orchid and declared: "I never saw anything so beautiful."
Since we live in and through our bodies, our sight as well as all of our vital capacities will decay and die. That is the way with all things that depend on body. But doesn't our looking with comprehension and wonder upon the world, our looking for the enduring patterns in all things, intimate some participation, even if momentary, in the eternal order? How else could we explain the intellectual passion of Darwin in his quest to look upon the principle governing all life-orchids as well as barnacles, frogs as well as men?
Ruse would say that although Darwin's brain was designed by natural selection only to promote his biological fitness, he was able to use it for scientific understanding as well. "We get the tools through organic evolution," Ruse explains. But "what we produce has a meaning of its own, transcending biology, as we push our tools of understanding to produce ever better pictures of the world" (p. 206). Does this imply that the mind (as the capacity for thought) is not simply identical to the brain (as the organic product of evolution)? Is the brain the necessary but not sufficient condition for the mind? Dare we suggest that to speak of human thought as "transcending biology" reminds some of us of the old-fashioned idea that human beings have souls?
In any case, it would be disastrous for the defenders of Darwin to accept Ruse's Humean interpretation of Darwinian biology. The popularity of Creation science depends not on the specious arguments for its scientific validity, but on the belief that Darwinism promotes a nihilistic assault on reason and morality. Ruse confirms that belief by insisting that to become a Darwinian one must deny that one can ever know objectively what is true or right. To put science in opposition to common sense provokes a natural (and sensible) animosity among common people, who lack the talent of clever people for believing nonsensical ideas. And insofar as science itself is ultimately a refinement of common sense, a scientific denial of common experience would be intellectual suicide. We cannot take Darwin seriously if we have no reason to take anything seriously. | <urn:uuid:63548200-88d9-496e-8f75-bd2d245eada3> | CC-MAIN-2021-21 | https://claremontreviewofbooks.com/darwin-hume-and-nihilism/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989637.86/warc/CC-MAIN-20210518125638-20210518155638-00096.warc.gz | en | 0.963541 | 4,582 | 3.203125 | 3 |
Rupe, J. and L. Sconyers. 2008. Soybean Rust.
The Plant Health Instructor. DOI: 10.1094/PHI-I-2008-0401-01
Phakopsora pachyrhizi and Phakopsora meibomiae
Soybean and kudzu are the most important hosts, but there are over 90 other hosts known for P. pachyrhizi. Some of the hosts are listed in Table 1.
University of Arkansas
University of Georgia
Soybeans infected and not infected with Asian soybean rust, caused by
Phakopsora pachyrhizi, in a fungicide trial in Attapulgus, GA, 2006. (Photo by R. C. Kemerait, Jr.)
Soybean rust caused by
P. pachyrhizi has been a serious disease in Asia for many decades. It appeared in Africa in 1997, and in the Americas in 2001. Before it was first found in the continental USA in late 2004, probably brought in by a hurricane, it was considered such a threat that it was listed as a possible weapon of bioterrorism. Soybean rust cannot overwinter in areas with freezing temperatures, but it can spread by wind rapidly over such large distances, its development can be so explosive, and it can cause such rapid loss of leaves that it is now one of the most feared diseases in the world's soybean-growing areas.
Symptoms and Signs
The first symptoms of soybean rust caused by
Phakopsora pachyrhizi begin as very small brown or brick-red spots on leaves (Figure 2). Symptoms caused by
P. meibomiae are similar to those of
P. pachyrhizi, but this lesson will focus on
P. pachyrhizi because most of the research and observations have been made with this species. In the field, these spots usually begin in the lower canopy at or after flowering, although seedlings can be infected under certain circumstances. Often the first lesions appear toward the base of the leaflet near the petiole and leaf veins. This part of the leaflet probably retains dew longer, making conditions more favorable for infection. Lesions remain small (2-5 mm in diameter), but increase in number as the disease progresses. Pustules (Figure 3), called uredinia, form in these lesions, mostly on the lower leaf surface, and they can produce many urediniospores. The raised pustules can be seen with the unaided eye, especially when sporulating (Figure 4). Even though the lesions are small, each lesion often has several pustules (uredinia) (Figure 5). Lesions can be completely covered in urediniospores when the pustules are active (Figure 6). Soybean rust urediniospores are pale yellow-brown to colorless, with an echinulate (short spines) surface ornamentation (Figures 7 and 8). This coloration is different from many other rust pathogens whose spores are often reddish-brown (rust colored). Germination of
P. pachyrhizi urediniospores occurs through an equatorial (central) pore, producing a germ tube that ends in an appressorium, which the fungus uses to penetrate the host directly or through a stoma (Figure 9).
As more and more lesions form on a leaflet, the affected area begins to yellow, and eventually the leaflet falls from the plant (Figure 1). While soybean rust usually begins in the lower canopy, it quickly progresses up the plant until all of the leaves have some level of disease. Severely diseased plants may become completely defoliated. The loss of effective leaf tissue results in yield reductions from both fewer and smaller seed. Yield losses as high as 30 to 80% have been reported, but the amount of loss depends on when the disease begins and how rapidly it progresses. Besides leaves, soybean rust can also appear on petioles, stems, and even cotyledons, but most rust lesions occur on leaves.
Lesions may be either tan (Figure 10) or red-brown (Figure 11). Tan lesions have many pustules that produce numerous urediniospores. Red-brown lesions, thought to be a moderate resistance reaction, have only a few pustules that produce only a few urediniospores. As will be discussed in the Disease Management section, this lesion type depends on the strain of the pathogen, and may appear on the same leaf with tan lesions, or tan lesions may turn red-brown with age. Symptoms and signs on other hosts, such as kudzu, are similar, although lesion size may differ.
As pustules age, they may turn black (Figure 12). This is caused by the formation of a layer of teliospores in the pustules, turning pustules from uredinia into telia (Figure 13 and 14). Teliospores have two functions: survival of the fungus in the absence of a living host (overseasoning) and sexual reproduction. The thick walls of the teliospores protect the fungus from the environment and attack by other organisms. In rusts, the teliospores germinate forming a basidium and four basidiospores during which sexual recombination occurs. Germination of
P. pachyrhizi teliospores has been observed only in the laboratory and does not seem to make a significant contribution to the perpetuation of this disease in the field.
There are two closely related fungi that cause rust on soybean:
Phakopsora pachyrhizi, sometimes referred to as the Asian or Australasian soybean rust pathogen, but which now also occurs in the western hemisphere, and
P. meibomiae, the so-called New World soybean rust pathogen, which is found only in the western hemisphere. Except for a few minor characteristics, the two fungi appear morphologically identical, but
P. pachyrhizi is much more aggressive on soybean than
P. meibomiae. To date,
P. meibomiae has not been documented to cause significant yield losses in Central and South America. The two species can be distinguished by using DNA analysis protocols. Like other rusts, the soybean rust pathogens are obligate parasites that require a living host to grow and reproduce. They can survive away from the host as urediniospores for only a few days under natural conditions.
Both soybean rust pathogens, to the best of our knowledge, produce only two types of spores: urediniospores and teliospores (Figure 15). This contrasts with other rusts, which can have up to five spores stages (for example, wheat stem rust). For soybean rust, like most rusts, the uredinial stage is the repeating stage. This means that urediniospores can infect the same host on which they were produced (soybean) during the same season. Epidemics can develop quickly from only a few pustules because spore-producing pustules are produced in as little as 7 to 10 days after infection, and each pustule can produce hundreds of urediniospores.
Teliospores are produced in old lesions, but they do not appear to germinate in nature, and no alternate host, nor aecia or spermogonia are known. Without germination of teliospores, sexual reproduction cannot take place. Lack of sexual reproduction should limit variability of the rust fungus, but nevertheless there is substantial variability in
P. pachyrhizi with respect to virulence. This has limited the use of single genes for resistance in soybean, because in a short time new isolates of the pathogen arise that overcome the resistance gene. It is not known how this variability originates in
P. pachyrhizi. Wheat stripe rust,
Puccinia striiformis, has a similar life cycle as
P. pachyrhizi with no functioning telial stage and therefore no sexual reproduction, but has many races. It may be that each resistance gene is so specific that a single mutation in the right gene of the fungus allows it to be virulent on hosts with the new resistance gene.
Soybean rust epidemics begin with the arrival of airborne inoculum (urediniospores). This pathogen is unique among rusts because it has many alternative hosts (Table 1), which may serve as sources of inoculum.
Alternative hosts are other plants that can become infected with the same pathogen, but are not required to complete the pathogen's life cycle. Alternative hosts are not to be confused with
alternate host, which is a plant other than the principal host, that is needed for a pathogen to complete its life cycle. In frost-free areas, such as South America, Central America, the Caribbean basin, southern Texas, and Florida, the inoculum source could be nearby on volunteer soybean plants, kudzu, or some other alternative host. In areas that experience frost, such as the Midwestern United States, inoculum must be blown in from over-wintering sources that may be hundreds of miles away. Re-introduction of obligate pathogens into a distant region occurs with several other diseases, such as wheat stem rust and downy mildews, e.g. blue mold on tobacco. Because spores of
P. pachyrhizi are sensitive to ultraviolet radiation, long distance movement of these rust spores probably occurs in storm systems where clouds protect the spores from the sun.
Table 1. Known hosts of Asian soybean rust caused by
Phakopsora pachyrhizi. Information courtesy of Kent Smith, USDA/ARS.
|Bean, Common, Dry (field, kidney, navy, pinto) *||
Phaseolus vulgaris var.
|Bean, Common, Succulent (garden, green, snap, and wax) *||
Phaseolus vulgaris var.
|Bean, Fava or Broadbean||
|Bean, Lablab or Hyacinth *||
|Bean, Lima *||
Phaseolus lunatus var.
|Bean, Mung *||
|Bean, Scarlet Runner *||
|Bean, Winged or Goa||
|Bean, Yam *||
Pachyrhizus ahipa, P. erosus|
|Blackeyed Pea, Cowpea or Yardlong Bean *||
|Clover; Alyce or Oneleaf ||
|Clover, Crimson ||
Crotalaria anagyroides, C. spectabilis|
|Florida Beggarweed* ||
Pueraria montana var.
Kummerowia striata, K. stipulaceae|
Astragalus cicer, A. glycyphyllos|
|Pea, garden and field||
|Peatree or Colorado River Hemp (Sesbania)||
|Pigeon Pea *||
|Soybean (including edamame) *||
|Urd or Black Gram *||
|Wild Soybean *||
Vicia villosa subsp.
|Yellow Sweet Clover||
|* Includes field observations of infection, in addition to infection resulting from artificial inoculation|
Once viable spores have landed on the leaf surface of a suitable host, infection and subsequent epidemic development are dependent on environmental conditions. Generally, infection occurs when leaves are wet and temperatures are between 8°C and 28°C, with an optimum of 16°C to 28°C. At 25°C, some infection occurs in as little as 6 hours of leaf wetness, but 12 hours are optimal. After infection, lesions and pustules with urediniospores can appear within 7 or 8 days, and the next infection cycle is set to begin. This short life cycle means that, under the right conditions, soybean rust epidemics can quickly build up from almost undetectable levels to very high levels. Soybean rust epidemics can progress from below detectable levels to defoliation within a month. Epidemics may seem to progress even faster than that, because early infections occur in the lower canopy and are hard to find. Besides the environment, plant age affects soybean rust epidemics. Usually, rust lesions are not found on soybean until flowering, unless there are high inoculum levels early in the season. This may be due to greater susceptibility of plants to rust as the host enters the reproductive stages, it may be because in lower parts of the canopy spores are more protected from UV radiation, or it may be because conditions in the canopy become more humid as the canopy closes. In any event, lesions can form at any growth stage, but major increases in disease do not occur until after flowering.
There are three basic management tactics that can play a role in reducing soybean rust epidemics: fungicides, genetic resistance, and cultural practices. At present, fungicides are the only highly effective tactic (Figure 16), but long-term management will probably depend more on resistance, in combination with fungicides and changes in cultural practices.
At present, the most effective means of managing soybean rust is the use of fungicides (Figure 1). However, to be effective, selecting the right fungicide and applying it at the right time are crucial. Several fungicides are registered in the US for soybean rust control, and most can be classified into three groups: chloronitriles, strobilurins, and triazoles (Figure 17). Chlorothalonil is the one chloronitrile fungicide registered for soybean rust control. Its protectant mode of action affects many biochemical pathways in the pathogen, but it is not taken up by the plant, not even by the cuticle. As a result, it is more subject to weathering than the strobilurins or the triazoles and complete coverage of the leaf surface is critical. To be effective, chlorothalonil may need to be reapplied several times if new growth or weathering occurs.
Strobilurin fungicides are modeled after a natural antifungal compound, strobilurin, produced by certain mushrooms. Strobilurins (also known as QoI fungicides) inhibit mitochondrial respiration in the pathogen. Strobilurins are typically absorbed by the cuticle, and act as protectant fungicides (http://admin.apsnet.org/edcenter/advanced/topics/Pages/StrobilurinFungicides.aspx). A protectant fungicide prevents infections from taking place, but it has little effect on disease development once infection has occurred (Figure 17). Therefore, to be effective, protectants like the strobilurins must be applied before infection occurs. Depending on the rate applied, strobilurins are effective for up to 2 weeks after an application, but they will not protect newly developing leaves. Strobilurins control a broad range of soybean pathogens.
Triazoles inhibit sterol production, which disrupts cell membrane function in the pathogen. Triazoles are absorbed and translocated upward in the plant. While they generally do not prevent infection, the triazoles can kill the fungus in the plant and prevent pustules and spores from forming (Figure 17). The extent to which these chemicals are translocated depends on the triazole, but all of them move up the plant into new growth to one degree or another. Still, systemicity of triazoles in plants is incomplete and does not approach the level of systemicity associated with certain herbicides or insecticides. Triazoles are effective for 3 or 4 weeks after application and give some protection to new growth. While highly effective against rust, the triazoles are not as effective as the strobilurins against other soybean pathogens. Some fungicide products (premixes) contain both a triazole and a strobilurin. The premixes provide protection against a broader range of pathogens and reduce the possibility of pathogens developing resistance to either product.
The number of applications required for disease control depends on the compounds used, when the rust epidemic starts, and the favorability of the weather conditions. Even with triazoles, which are effective for the longest period of time, two applications are often needed to control soybean rust. In some locations in Brazil, high levels of inoculum early in the season result in rust epidemics starting well before flowering, thus forcing growers to make as many as five fungicide applications in order to control the disease. Such early disease onset and the early need for fungicide application are unlikely in most of the US. However, rust could start as early as flowering (R1) and require an additional spray before harvest. It is generally felt that, once the plants reach the R6 growth stage (when seeds have filled the pod), most of the yield has been achieved and controlling rust beyond that point is not economical. One concern with multiple applications of the same fungicide is the development of fungicide resistant pathogen strains. While fungicide resistance in
P. pachyrhizi has not been reported, other fungal pathogens may be affected and growers should try not to spray the same fungicide consecutively. Fungicide labels may restrict the number of times a particular compound or class of compounds can be applied within a season to reduce the possibility of resistance developing.
The key to effective control of soybean rust with fungicides is application timing. This is especially important in areas of the US where the soybean rust pathogen must be reintroduced each year. The introduction or reintroduction will probably occur at different times in different years or not at all in some years. All of the fungicides, even the systemic triazoles, are most effective when applied just before the rust epidemic starts in the field. From tests in South America, if disease incidence reaches 10% in the lower canopy before the first application, fungicides will not completely control soybean rust, and some yield loss will result if weather conditions are favorable. Such low levels of disease are difficult to detect so growers need an early warning system that predicts the onset of disease early enough so that they have time to apply fungicides to all of their fields. Application decisions, equipment, and technique can all greatly impact the level of rust control achieved.
At present, the most reliable early detection method is the use of "sentinel plots." These are small plots (primarily soybean, but kudzu or another susceptible host also may be used) planted several weeks before the commercial crop, and often use early maturing cultivars. Both the early planting and the early maturity of the cultivars results in the sentinel plots flowering 1 to 3 weeks before the commercial crop. Since soybean rust usually develops after flowering, the disease can be observed in these sentinel plots a week or two before being found in adjacent commercial fields. This early warning gives growers in the area time to apply a protective fungicide treatment. Sentinel plots have been established throughout the soybean and dry bean production areas of the US. Information from these plots is uploaded weekly into a USDA website (www.sbrusa.net) where maps are generated showing rust activity in the country (Figure 18). This site also includes state and national commentaries, disease forecasts, and other pertinent information.
Besides the sentinel plot findings, extension specialists in each state also include state-specific commentary on soybean rust and the need for control measures. In addition, rust information from all of the state plant diagnostic clinics is networked together, and new finds of soybean rust are included on the site. Information from this USDA website can be used by growers and scientists to see where rust is active and to determine if their area is threatened by the disease. In addition to the USDA website, information is also available on many state Cooperative Extension Service's websites and on several agricultural industry websites. Information about soybean rust in Argentina can be found at
http://www.sinavimo.gov.ar/ and in Brazil at
http://www.cnpso.embrapa.br/alerta/. A partial list of websites can be found in the Selected References section of this lesson.
Several experimental early warning and disease forecasting systems are under development. These models relate a variety of weather, crop and disease conditions to spore movement, spore deposition, and infection. Some of the factors included in these models are sources of inoculum, wind direction and speed, temperature, humidity, leaf wetness, sunlight intensity, and crop developmental stage. These models are currently being used to indicate where and when scouting efforts should be intensified.
Another method of early detection of soybean rust is spore trapping, in which two strategies are being assessed. One traps windblown spores on glass slides coated with petroleum jelly (Figure 19). The spores are examined microscopically, and the presence and number of soybean rust-like spores noted. At this time, microscopic examination can only identify spores that resemble the soybean rust pathogen because it is not currently possible to identify
P. pachyrhizi with certainty by simply examining the urediniospores. More conclusive identification of urediniospores of
P. pachyrhizi is being developed by using labeled antibodies and polymerase chain reaction (PCR) protocols.
The other spore trapping approach involves collecting and filtering rainwater and then uses PCR to determine the presence of
P. pachyrhizi on the filters (Figure 20). It is thought that long-distance spread of urediniospores occurs when storms pick up the spores and then deposit them in rainwater at distant locations. Because this technique uses species-specific molecular markers, positive findings are thought to be more reliable. In 2005 and 2006, both air and rain sampling found
P. pachyrhizi or
P. pachyrhizi-like spores over a wide area, far from where soybean rust was active. While neither approach can determine if the spores arrived alive, they do indicate that this pathogen has the potential to spread widely and quickly.
Soybean plants respond to infection by
P. pachyrhizi by producing either tan, red-brown, or no lesions at all. Tan lesions produce many pustules with many spores (Figure 10). Red-brown lesions produce a few pustules with limited spore production, and no pustules or spores are produced where no lesions are formed (Figure 11). It is thought that these responses represent susceptible, moderate, or highly resistant reactions, respectively. High levels of resistance are usually associated with one or a few dominant genes. There are four known dominant genes for resistance to soybean rust,
Rpp4. While these dominant genes confer high levels of resistance and are relatively easy to incorporate into new soybean cultivars, they are not effective against all races of
P. pachyrhizi. Deployment of varieties with new resistance genes is usually followed in a few years by the emergence of races of
P. pachyrhizi that are virulent on them. This high degree of variability in the soybean rust pathogen is common in many rusts [see wheat stem rust] and requires the frequent discovery and incorporation of new sources of resistance. Currently, isolates of
P. pachyrhizi exist that are virulent on each of the four known genes for resistance.
Another approach is the use of moderate resistance. Moderate resistance is usually conferred by a number of genes, each contributing a little to the overall resistance of the cultivar. This type of resistance often is effective against all races of a pathogen, but it is more difficult to incorporate into cultivars and does allow some disease and yield loss. Moderately resistant cultivars have been developed in Asia, but adapted varieties with this type of resistance are not yet available in the US or South America. Ultimately, moderate resistance may be used in combination with cultural practices and fungicides when needed.
There are several cultural practices that may help manage soybean rust. In most areas of the US where rust must be introduced each year for an epidemic to occur, changing planting and harvest dates may avoid disease. Planting early with an early maturing cultivar may avoid the rust until the crop has either been harvested or is so far along that the disease will have little impact on yield. Planting dates may also be delayed so that the vulnerable reproductive period occurs during dry conditions that do not favor rust. In areas where the weather is marginal for rust development, wider row spacing along with lower plant populations may hasten canopy drying, thus reducing the dew period enough to prevent or at least slow disease development. It may also allow better fungicide penetration into the canopy, increasing the effectiveness of chemical control. Research is needed to confirm this. However, because the more open canopy provides less weed suppression, weed problems may be more severe with this strategy, and this method is unlikely to affect rust significantly if weather conditions are very favorable for the disease. Adjusting soil fertility, particularly potassium and phosphorus levels, may help increase disease resistance, but there is little research in this area yet. While it is unlikely that cultural control measures alone will be enough to control soybean rust, they may increase the effectiveness of host resistance or fungicide applications.
Soybean rust is one of the most important soybean diseases worldwide. Soybean, a major crop both in the US and the world, is high in vegetable oil and protein (approximately 20 and 40%, respectively) and provides 57% of the vegetable oil consumed worldwide and 68% of the vegetable protein. According to the American Soybean Association, the US produced 38% of the world soybean crop in 2006, worth over $19 billion, with Brazil and Argentina producing 24 and 19% of the world crop, respectively. Because of the lack of plant resistance, the explosive nature of the disease, and the high potential yield losses (30 to 80%), soybean rust has long been viewed as a serious threat to soybean production in both North and South America. The threat of soybean rust was so serious that
Phakopsora pachyrhizi was included on a list of 'select agents' in the 2002 USA Bioterrorism Act, along with other biological agents such as those that cause anthrax and hemorrhagic fever. Select agents are pathogens of humans, animals, or plants that have the potential to be used as weapons of terrorism.
Soybean rust caused by
P. pachyrhizi was first reported in Japan in 1902 and was limited to Asia and Australia until 1997 when it was found in Uganda. From Uganda it spread to Zimbabwe (1998) and then to South Africa (2001). In 2001, soybean rust was found in Paraguay. Since most of the world's soybean production occurs in North and South America, the introduction of rust into Paraguay posed a significant threat. Soybean rust was reported in Brazil and northern Argentina in 2002. By 2003, soybean rust was occurring in most soybean producing areas in Brazil as well as Bolivia. In the summer of 2004, rust was reported in Columbia, and in November of that year it was found for the first time in the continental US in Louisiana and then, within a short time, in 8 other southern states (Figure 21). Soybean rust probably entered the continental US from Columbia with hurricane Ivan, which made landfall in September 2004.
Figure 22 illustrates the estimated spore load carried by hurricane Ivan and the geographic distribution of spore deposits. Since then,
P. pachyrhizi appears to have established itself permanently on kudzu in Florida, and the area where it is found during the growing season has increased. In 2005, soybean rust was active primarily in the southeastern US (Figure 23), but some late finds were found on kudzu as far north as Kentucky and North Carolina. Soybean rust was found in Mexico in the early spring of 2006 and in an isolated field near Brownsville, TX.. Throughout the first half of the 2006 growing season, soybean rust was confined to the southeastern US, but it was not very active even there because of the unusually hot, dry weather. However, as rainfall increased, so did soybean rust, especially in Louisiana and along the southeast coast of the US. By the end of the season, soybean rust was found along the Mississippi River into southern Illinois and Indiana and as far north as West Lafayette, IN (Figure 24). In 2007, drought in the southeastern US limited soybean rust development throughout most of the season, but high rainfall in Louisiana, Texas, Oklahoma, and Kansas favored rust in these areas (Figure 25). Rust appeared for the first time as far north as Iowa in 2007. Although these late-season infections did not cause yield loss, it did show that under the right conditions, soybean rust can spread quickly and is a threat to the major soybean growing areas of the US.
Note: The first introduction of Asian soybean rust caused by
P. pachyrhizi in the US occurred in Hawaii in 1994, but this did not impact the major soybean growing areas in the continental US. Also, soybean rust was found in Puerto Rico in the mid-1970's, but later analysis showed that this rust was caused by
P. meibomiae and not
The introduction of Asian soybean rust into the continental US sparked a nationwide effort by research and extension scientists to prepare for rust epidemics. These efforts include a network of sentinel plots that track the occurrence of soybean rust nationally on a weekly basis, a coordinated effort to label fungicides for soybean rust, training of "first detectors" on how to recognize soybean rust, and other extension and research activities. As a result, this disease has sparked immense media interest. Some news headlines follow.
"Soybean Checkoff Builds Defense Against Rust"
"RUST BELT CINCHES UP"
"Sen. Feingold urges action against soybean rust"
The Asian soybean rust work that has been conducted in the United States to date would not have been possible without a concerted and cooperative effort from Land Grant Universities, the U.S. Department of Agriculture, State Soybean Check-Off Boards, and Industry.
APS Press Release. 2004. Plant Pathologists Offer Soybean Rust Identification and Management Tips.
Bromfield, K.R. 1984. Soybean Rust. Monograph 11. The American Phytopathological Society, St. Paul, MN.
Dorrance, A.E., M.A. Draper, and D.E. Hershman, Editors. 2007 (revised). Using Foliar Fungicides to Manage Soybean Rust:
Dunphy, J., D. Holshouser, D. Howle, P. Jost, B. Kemerait, S. Koenning, J. Mueller, P. Phipps, S. Rideout, L. Sconyers, E. Stromberg, P. Wiatrak, and A. Wood. 2006. Managing Soybean Rust in the Mid-Atlantic Region:
Hernández, J.R. 2004. Systematic Botany & Mycology Laboratory, ARS, USDA. Invasive Fungi. Asian soybean rust. Retrieved October 2, 2007, from
Miles, M.R., R.D. Frederick, and G.L. Hartman. 2003. Soybean Rust: Is the U.S. Soybean Crop At Risk?
Sinclair, J.B., and G.L. Hartman. 1999. Soybean Rust. Pages 25-26. In Compendium of Soybean Diseases, Fourth Edition. Eds. G.L. Hartman, J.B. Sinclair, and J.C. Rupe, APS Press, St. Paul, MN.
Sconyers, L.A., R.C. Kemerait, J. Brock, D.V. Phillips, P.H. Jost, E.J. Sikora, E. Gutierrez-Estrada, J.D. Mueller, J.J. Marois, D.L. Wright, and C.L. Harmon. 2006. Asian Soybean Rust Development in 2005: A Perspective from the Southeastern United States.
Aerobiology at Penn State:
State and Regional-Public
Extension Disaster Education Network:
North Central Plant Diagnostic Network:
Plant Health Initiative:
Syngenta Crop Protection. | <urn:uuid:741fab7e-2eca-4c28-89f5-f052281e0a49> | CC-MAIN-2021-21 | https://www.apsnet.org/edcenter/disandpath/fungalbasidio/pdlessons/Pages/SoybeanRust.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00374.warc.gz | en | 0.917064 | 6,796 | 2.953125 | 3 |
Michael A Rizzotti
When I switched my major from economics to theology at Loyola College in the fall of 1970, the province of Québec was in the midst of political turmoil. During what is now known as the October Crisis, British Trade Commissioner James Cross and Québec Labor Minister Pierre Laporte were both kidnapped by the Front de Libération du Québec (F.L.Q.). James Cross was later released, but Pierre Laporte was found strangled in the trunk of an abandoned car on the eastern outskirts of Montréal. The Crisis prompted the Liberal Government of Canada to adopt the War Measures Act, and the army was sent into the province.
The circumstances that led to this dramatic turn of events can be traced back to the British invasion of the small but growing French colony of Nouvelle-France in 1760. The Conquest –la Conquête– was to be the beginning of a people’s ongoing struggle for survival.
In order to maintain peace in the newly conquered colony, the English undertook a policy of laissez faire toward the Catholic Church. As the new spiritual leader, the Church promoted in the minds of the people a distinct vision of its own identity and destiny. Looking back, hardly any political party could have inspired such a collective will to overcome the unforeseeable obstacles of history.
From the time I first left Italy to immigrate to Montréal, I witnessed enormous changes in the people of Québec during the sixties. The Catholic Church was omnipresent when we first arrived, and had been for at least three centuries. All aspects of French Canadian life were imprinted with the Church’s authority.
In the early nineteen-sixties, two major events were to change the Church’s hold over the people: Vatican II in Rome, and the emergence of the Quiet Revolution –La Révolution Tranquille– in Québec. In a matter of years, the Church’s power rapidly eroded. In less than a decade, the priests and nuns who had dominated schools and hospitals were replaced by lay people. The Church was losing an increasing number of its believers. Those who lost their faith embraced the growing nationalist fervor, and as the Québécois progressively abandoned the Church, they joined the ranks of the emerging political quest for independence.
It is this quest that is the subject of this chapter. We will try to explain how a desire for spiritual salvation was transformed into a movement for political liberation. As Claude Lévi-Strauss observed, nothing in today's society is more mythical than political ideology. He wrote:
But what gives the myth an operational value is that the specific pattern described is timeless; it explains the present and the past as well as the future. This can be made clear through a comparison between myth and what appears to have largely replaced it in modern societies, namely, politics.1
It is this quest for independence that is the focus of a message of mythical proportions.
Ironically, myth and history appear today to conflict in meaning as much as in function. Both are stories, yes, but each relates to different aspects of the events recounted, and both are considered true stories by those who relate their content. Yet myth is primarily concerned with accounts of origins taking place in a primordial time, a so-called time beyond the realm of history. History, on the other hand, is a chronological compendium of historical data.
One can best differentiate myth from history as two distinct forms of language. Foremost, history is the realm of the historian and his work, whereas the narrative of myth reaches out to all men, women, and children regardless of class, position, and age. All are captivated by myth. Everybody is enchanted by the mythical stories that have been generated by different cultures.
Myth is concise, symbolic, meaningful, and efficient. Its stories relate to events and heroes beyond the ordinary human sphere. These stories are concerned with god(s), super-heroes, and their heroic deeds. What separates myth from history is its description of a special class of beings and their activities. They deal mostly with the powers that rule the world: wherein God or the gods are metaphors for the unfathomable powers -subliminal and inconspicuous hierarchies- that rule the world. For the most part, these stories have an enduring quality that reflects the intrinsic and significant aspects of a mentality derived from the different cultures they emerged from.
Myth relates how a new reality came into being, how a new world was created. It describes the actions of the super-heroes or the god(s) in their creative endeavor. Why are certain things forbidden? What legitimates a particular authority? Why do people suffer and die? To sum it up, myth decodes the meaningful events of the world. These events unfolded in a time beyond history; i.e., in illo tempore.2 Thus, this ethereal dimension in time and space is the primary gap that separates myth from history: a fuzzy boundary setting the sacred/supernatural apart from the profane/ordinary world.
History is foremost an exhaustive and detailed account of the significant events of the past. With the scientific application of historiography, history has been stripped of any mythical content. This was not the case, however, with the history books of several decades ago: older history books were filled with heroic embellishments that have nothing to do with historical facts. The interpretation of the events surrounding General Custer's battle at Little Bighorn, for instance, has varied tremendously over time. Some of the earlier versions were, to say the least, mythical, and particularly unfavorable toward the aboriginal people.
The above comparison between myth and history is well illustrated by the discovery of Nouvelle-France (New-France). According to Mircea Eliade, myth is essentially an account of the events at the origin of a new reality, founded and created by civilizing heroes or gods at the beginning of time. The discovery of New-France, for example, has been inscribed in history as the legitimate origin of a new national reality. The new beginnings inaugurate the grounds of mythical significance. The ancestral heroes are the founders of a new national identity at the beginning of a new chapter of history. The founders' identities are celebrated as heroic and are set apart from the mass of historical events. In the U.S., for instance, Columbus Day is a national holiday.3 The national event celebrates the hero as the prototype of a new cultural and national reality. The pioneer is famed not so much as a person but as a symbol of a new cultural identity. As history shows, because of Amerigo Vespucci, the New World became known as America on maps as early as 1507.
Christopher Columbus discovered America in 1492
Jacques Cartier discovered New-France in 1534
These national heroes were the first to inaugurate a new historical and national reality. They were elevated above ordinary human beings and other historical characters. As a result, society commemorates these super-heroes by erecting monuments in their honor. These monuments consecrate the significant part they played in the foundation and creation of a new national entity and identity.4
There is an inherent contradiction in the concept of the discovery. How could the New World be discovered when it was already inhabited by native cultures? To validate the Christian discovery, these natives had to be dismissed as having no cultural and moral value of their own. Being labeled heathen and pagan justified their need for civilization. The discovery was therefore strictly a European colonialist imposition upon the native cultures, used to justify the taking of the biggest piece of free real estate ever claimed. Today, the historical value given to the discovery is debatable, since it is more mythical than anything else. But it shows how the mythical process serves as a propaganda tool for the justification of any form of colonialism and imperialism.
The chronicle of the origin of a new reality has an important mythological significance in history, yet the struggle for the nation’s identity is also essential.5
sacred vs profane
the colonialists vs the natives
the Christians vs the pagans
The opposition establishes the sacredness of the colonial endeavor, especially with respect to the belief in a mission to civilize and convert the savage heathen who represented an obstacle to the development of the new nation. We have typified elsewhere the Zuni as the heathen reality to be converted. As a profane reality, they were seen as an obstacle to the development of the New World.
Christian civilization vs the heathen
British civilization vs the pagan
French civilization vs the savage
New-France would evolve dramatically from the time of its foundation. Its historical discovery allowed the consecration of its origin as a legitimate nation, regardless of the fate of the aboriginal cultures who lived in their ancestral lands.
The discovery of New-France that fills the first pages of history books of that nation was to be undermined by a tragic turn of events. In 1760, the colony was conquered by the British army and abandoned by France. In the process, the conquerors set their own political rules while recognizing the authority of the Catholic Church so as to appease the population.
The defeat and the abrupt change in political allegiance left a deep scar in the collective memory of the French people. The result was to imprint ambivalent feelings of being a nation of colonized colonialists, and to mark a Lord-victim attitude with regard to their history and their fate. The people were in political exile in their own land. The French, who were originally the Lords and colonialists in the New World, had themselves become victims of the colonialism imposed by the British. This turn of events would have enduring effects on the development of their destiny and history. It set off the beginning of a people's struggle for survival.
The British conquest of New-France also reinstated the old rivalry between England and France and exported to North America the ancestral antagonism between Reformed Church/Protestantism and Catholicism that had endured in Europe for several centuries.
The political struggle that emerged because of the conquest clearly outlined two distinct and rival cultural entities.
English vs French
Reformed/Protestants vs Catholics
Abandoned by France, the people congregated under the leadership of the Catholic Church. From then on the French mentality would be shaped in a Catholic mold. With its newfound authority, the Church became preoccupied with the redemption of its people. The hierarchy promoted obedience to the Church as the only visible sign of salvation: extra ecclesiam nulla salus; i.e., there is no salvation outside the Church. The Church encouraged students to shun the evils of business and commerce and to embrace the liberal professions: law, medicine, and the priesthood. The clerics preached to the population the benefits of agriculture as a privileged way of salvation. They urged women to marry young and have numerous children.
Meanwhile, by the end of the 18th century, signs of the Industrial Revolution were visible all over England. The Kingdom was in rapid transition from an agrarian to an industrial society. The roots of the cultural and economic development of capitalism had Protestant ethical overtones: individual responsibility, freedom, industry, and success were believed to be visible signs of salvation. Max Weber described this ethic in terms of a “secular asceticism”.6 This spirit of capitalism would soon spread to all the British colonies of North America.
Suddenly, Canada became a battleground for two rival cultures, two languages, and two religions originating from two rival European colonial powers. On the one hand, we have the French culture, led by the Catholic Church, whose authority lay in the hierarchy and in the assembly of believers as a visible sign of its invested power, described in terms of collective asceticism. This belief implied faithful obedience to the principles of the Church as the only way toward salvation.
On the other hand, we have the English culture influenced by the Protestant ethic, described in terms of secular asceticism. The ethic favored individual initiative, industry (hard work), responsibility, and financial success as a sign of election.
Hence, two cultures and two visions of the world inspired an antagonism that set the two collective entities against each other. Each lived in its own world of sacred beliefs, opposing the other as a profane reality.
French Catholics vs English Reformed/Protestants
collective asceticism vs secular asceticism
other-worldly vs this-worldly
Not until the first half of the 19th century did the French-Canadian people begin to challenge the political rules set by the English and the Church.
During 1837-38, a movement emerged that questioned the authority of the Church and the political advantage of the English. A growing number of people from the French middle class, as well as intellectuals, expressed their unhappiness with their share of political power. Louis-Joseph Papineau, the leader of the Parti Canadien, succeeded in rallying a majority of French people against the Catholic Church and the English. The nationalist outburst was brief: in 1838 the English crushed an armed insurrection and drove the leader and his followers into exile.
As a result, the people were left in a political limbo. In time, the French-Canadians rallied back to the Church for guidance. The majority of the people who were tempted by the political solutions proposed by the nationalists returned to the Church’s promise of collective salvation. Redemption would not be won through political means, but through obedience to the Church and through faith.
By the end of the 19th century, the rapid changes brought by industrialization and urbanization began to undermine the Church's control over the faithful. Priests preached to the people to have large families in order to overcome the English by sheer numbers.7 The policy of la revanche des berceaux –the revenge of the cradle– worked. As the population grew rapidly, people left the farm for the city. The cities were unable to absorb the increasing number of newcomers, and because of the high level of urban unemployment many people emigrated to the U.S. In order to limit the exodus, the Catholic hierarchy pioneered the development of agricultural lands in the northern parts of Quebec. These policies were devised to keep the people away from the evils of industrial cities controlled by the English. But despite the courage and endurance of the settlers, the harsh climate and poor economic returns failed to keep the people on their farms.
Urbanization was seen by the clerical elite as a threat to their authority. They had complete control over the farmer, who lived in relative autonomy and isolation on his land. Not so for the people living in the cities, who were being hired by English industrialists and traders.
The rapid industrial development, which was out of the Church's control, was perceived as threatening the integrity of its flock. The economic power of the English was seen as an incursion into clerical jurisdiction, especially in light of the overwhelming presence of the Anglo-Saxon culture of Canada and the U.S.
Even though the French-Canadians renewed their allegiance to the Church in the years following the rebellious outburst, their vision of salvation underwent some fundamental changes. Out of the defeat arose a new kind of collective mysticism, more patriotic in tone. A national messianism began to take shape.8
Between the end of the 19th century and the early 20th, a new form of collective mysticism with messianic overtones emerged among the clerical elite. Mgr. Laflèche and, later and to a lesser extent, Canon Lionel Groulx prophesied a messianic role for the French Catholic people of North America. They proclaimed that the French-Canadians were destined to be the chosen people of God. They exhorted the population to obedience to the Church in return for a glorious call to the promised land. Mgr. Laflèche compared the plight of the French-Canadian people to that of Israel. For him, “American France…is nothing other than the New Israel of God since it is the heir of the Old France and therefore the heir to the promises made to the Church, and the promise made before that to Israel.”9
As we have seen already, colonialism has broad and sometimes ill effects on the cultures it is imposed upon. Extensive ethnological studies show that when cultures are oppressed by a foreign power, they instigate movements of messianic salvation, some with revolutionary goals.10 In some cases the revolt takes the guise of a religious movement but ends in violent outbursts. The Conquest, and later the defeat of the Rebellion of 1837-38, inhibited the “normal” evolution of the national identity. The strong sense of religious conviction inspired by the Church led the people to shift their desire for national freedom into a mystic vision upheld as a national messianism.
As a result, the ideological boundaries that usually exist between what is believed to be strictly nationalistic and religious fade. National aspirations become intertwined with deep expressions of collective mysticism. The messianic movement described above reinforces the distinct calling of its people and polarizes even further the gap between the French and the English mold of cultural differences and divisions.
collective asceticism vs secular asceticism
French language vs English language
Catholics vs Reformed/Protestants
farmers vs merchants
labor vs industrialist
At this point, it is crucial to stress the importance of the dynamic of opposition in the development of a national identity. The antagonism separates and reinforces the cultural differences and identities on both sides of the dynamic. As we have explained already, the stronger the opposition, the greater the belief in being set apart and of sacred identity.
Although the Catholic Church imposed on its believers a stoic acceptance of the political reality of British rule, it nevertheless fought any form of assimilation. While the Church was preaching passive submission to English rule, it maintained a strong sense of cultural identity. Since the Conquest of 1760, the Church had promoted among its faithful the urgency of its collective survival. Under its guidance the people were kept together by two things: la langue et la foi; i.e., the French language and the Catholic faith. Both were instruments of social unity and a barrier against foreign intrusion. They became the two main vehicles for social integration and the two major components of contemporary nationalism.
Language and a desire for emancipation have been vital forces behind the renewal of nationalism that began in the nineteen-sixties. As the nationalist movement began to spread, Quebec society underwent rapid cultural changes. The Quebec people perceived themselves as other and apart from the rest of Canada. It is this perceived sense of distinctness that allowed the separatists to make political headway among le peuple québécois.
As the spirit of renewal and openness swept Vatican II, Quebec society as a whole was undergoing its own Révolution Tranquille –Quiet Revolution. In less than a decade, the power of the Church eroded while political changes spread throughout society. The educational system, formerly the stronghold of the Church, was rapidly secularized. The medical system, once under the control of the clerical hierarchy, was nationalized. Little by little, Quebec society became more secular. Secularization was undertaken so swiftly that it appeared as if the people wanted to be rid of the heavy moral burden the Church had imposed on them over the previous two centuries.
Simultaneously, from the late fifties and throughout the sixties, television took center stage in a majority of homes. People indiscriminately plugged into the power of its message. TV began to shatter the mold of the insular mind as it opened a window to the outside world. Inadvertently, this medium began to challenge the old religious and cultural models by the power of its images. Its mass appeal precipitated even further the secularization of society. The images presented on TV eventually supplanted the ethical models preached by the Church. The Chapel was no longer the center for the preaching of the Word.
Until the 1960s, business signs in Montreal were predominantly in English, reflecting Anglo-Saxon economic control over the city. This revealed the disproportionate supremacy of the minority over the French majority. Things would rapidly change.
As the desire for emancipation grew, a new wave of radical nationalism arose. The new breed of nationalists demanded more control of their political and economical destinies. They felt, with reason, that their language and culture were threatened by the overwhelming Anglo-Saxon presence in North-America.
An alarming decrease in the French birthrate and a dramatic increase in the immigration of people who would rather learn English sparked fears of assimilation. Quebec, the only bastion of French language and culture in America, was threatened. In the late sixties and early seventies, radical movements like the F.L.Q. –Front de Libération du Québec– undertook to promote social awareness of such threats. The radical movement advocated complete political control over the province's destiny. Among its demands was the separation of Quebec from the rest of Canada. To show that they were serious, its members planted bombs in the mailboxes of Montreal's affluent English district, the mailbox being a symbol of the Federal Government.
From the more radical Rassemblement pour l'indépendance nationale (R.I.N.) emerged a moderate indépendantiste party under the leadership of René Lévesque, a former Liberal provincial cabinet minister.11 The movement appealed to the masses as it revived memories of broken dreams and shattered hopes. The promise of independence rang out as a clear message of liberation. To implement these goals, the Parti Québécois (P.Q.) proposed the option of “sovereignty-association” with the rest of Canada.
Quebec vs Canada
Parti Québécois vs Federal Parties of Canada
French vs English
The idea of independence rekindled memories of lost aspirations. It captured the hearts of the people who longed to transcend their past. It allowed them to hail their own future. As such, the movement inspired what the more radical nationalist detractors derisively called “the religion of René”.12
To promote the idea of independence, the P.Q. used metaphors like “paradise” and warned against the “old demons” and “abortionists” that opposed its goal.13 People close to René Lévesque were called the “evangelists”; one of his closest ministers was even described as “the disciple that René Lévesque loved”. These quasi-messianic references consecrated even further the cause in which they believed. The leader himself became the embodiment of a sacred mission of quasi-religious proportions.14 The collective passion among the members grew vivid and intense as the nationalists became spirited by their crusade. The quest for independence became more and more mythical in meaning and function as the movement grew popular among an ever greater segment of the population.
The historical development of nationalism outlines the desire to be distinct, prompting opposition to whoever challenged this assumption. The dynamic opposition to the other cultural entity reinforced the Quebecers' sense of conviction in their own separate identity. What existed outside the periphery of the linguistic and religious boundaries –la langue et la foi– was considered a threat to the social makeup. As we have already explained, the stronger the antagonism toward the outer cultural reality, the greater the inner identity. This opposition first began with the profane reality of the heathen, an obstacle to colonization, and was eventually transposed into the struggle against the English adversary.15
The French language became the main bond among the people. It also became a communication barrier against les anglais. Religion, on the other hand, further consolidated the conviction of being set apart and of having a distinct identity as Catholics. The mythical quest for independence became the noetic integrator of the Quebecers. These thematic symbols captured the core of the people's historical experience. The quest originated from a legitimate desire to recreate a golden age, a Paradise Lost, if you will, that history had denied them. Independence became the rallying cry of that legitimate desire.
It is one of history’s paradox that as soon as the secularization took hold in Quebec, nationalistic concerns arose. What was unique about the people of Quebec prior to the nineteen sixties was the strength of their separate religious identity as well as their language. The province was the only bastion of French Catholicism in North America. The ensuing spiritual vacuum that came as a result of people leaving the Church propelled the faithful quest to be distinct in a secularized world. As a consequence, the collective mentality was politicized. Yet the advent of the political and cultural emancipation of French society also increased the danger of assimilation into the greater North American melting pot. As a remedy, a dose of nationalism was embodied by the quest for independence.
As we have tried to show above, the mythical aspect of history thrives in the minds of the people who are deeply affected by its significance. The quest for independence embodies the collective spirit of the people in search of their own integrity and identity.
1. Claude Levi-Strauss, Structural Anthropology, New York, Basic Books Inc., 1963, 209.
2. Mircea Eliade, Myth and Reality, New York, Harper & Row, 1963.
3. Although it was soon found out that Columbus did not find his way to India, the inhabitants he met on the continent are still referred to by the wrongful appellation of “Indians”.
4. My work on the inauguration of monuments shows that the fine line between historical figures and mythical heroes disappears at the dedication; L'Interprétation Religieuse de l'Origine Mythique de la Nationalité, Montreal, UQAM, 1978. More on the subject in the next chapter.
5. The connection between nationalism and the principle of opposition was first proposed by Maurice Lemire, Les Grands Thèmes Nationalistes du Roman Historique Canadien-Français, Québec, PUQ, 1970.
6. Of course, when Max Weber talks about capitalism it is in terms of the “spirit” of capitalism, which implies an ethical and spiritual dimension to it. Not to be confused with the capitalistic anomalies of greed, speculation, and corruption we have witnessed. Max Weber, The Protestant Ethic and the Spirit of Capitalism, New York, Scribner, 1958.
7. From a mere sixty thousand French-Canadians in 1760, their number grew to six million in 2000.
8. Gabriel Dussault, L’Eglise A-t-Elle “Oublie” ses Promesses?, in, Relations, 386, 1973, 264-267.
9. G. Dussault, Ibid. 266.
10. See reference on messianism and bibliography, p.78.
11. Under the leadership of Liberal Prime Minister Jean Lesage.
12. See Peter Desbarats’, René, Toronto, Seal Books, 1977, 192.
13. Political Pamphlet, Quand Nous Serons Vraiment Chez-Nous.
14. Paul Tillich, Christianity and the Encounter of the World Religions, New York, Columbia University Press, 1963.
15. Ironically, at the time of this writing, Quebec, with only a quarter of the country's population, turns out 40% of Canada's business school graduates. In 1988, the province yielded half of the 50 fastest-growing publicly held companies in the nation. It is characteristic of antagonistic acculturation for cultures to finally embrace wholeheartedly the cultural principles they opposed at the outset. See George Devereux on “antagonistic acculturation” in Ethnopsychanalyse Complémentariste, Paris, Flammarion, 1972, 201-231.