by David A. Lindsey

Mineral deposits at Spor Mountain, in the Thomas Range of western Utah, are well-known examples of the association of fluorine with lithophile metal mineralization in a volcanic environment. In addition to fluorspar, the Spor Mountain district contains the world's largest economic deposits of beryllium and has produced uranium in the past. This slide show summarizes the geologic setting and history of volcanism and mineralization at Spor Mountain, discusses major controls of mineralization, and describes processes of mineralization and alteration that may have formed the mineral deposits at Spor Mountain. The discovery and economic significance of the Spor Mountain beryllium deposits are discussed by Griffitts (1964) and Williams (1963). Mining methods are discussed by Davis (1984).

SLIDE 1 (above or left) is an aerial view looking north at the eastern part of (from left to right) Spor Mountain, The Dell, and most of the Thomas Range in 1978. The excavations in the left foreground, within the southernmost part of Spor Mountain, include the Bell Hill fluorspar mine. The Yellow Chief open pit uranium mine is visible in the center of the photo. The cliffs beyond are flows of Topaz Mountain Rhyolite of Miocene age. Beyond the cliffs is the Dugway Range. The principal area of beryllium mining is out of view, to the lower left of the photo. More aerial views (SLIDES 2-9) include photos of beryllium mines.

SLIDE 10 (left) shows the location of Spor Mountain, other ranges, and beryllium occurrences in western Utah. The center of mineralization was at Spor Mountain, situated along the ring fracture of the Thomas caldera, on the west side of the Thomas Range. The Thomas caldera was one of at least three volcanic subsidence structures formed during Oligocene time in an east-west trending belt of Tertiary igneous rocks and mineral deposits, called the "beryllium belt of western Utah" or the "Deep Creek-Tintic belt." In addition to fluorspar, beryllium, and uranium at Spor Mountain, the belt contains other beryllium occurrences and deposits or occurrences of copper, base metals, and gold scattered between the Deep Creek Mountains on the west and the East Tintic Mountains (not shown) on the east. Pronounced aeromagnetic anomalies, in part reflecting igneous stocks and thick accumulations of volcanic rocks in calderas, accompany the mineral belt. The regional setting of the belt is described by Cohenour (1963), Hilpert and Roberts (1964), and Shawe (1972). The map in slide 10 shows three calderas (in blue) identified by Shawe (1972) and a fourth (shown in dark green) hypothesized by Lindsey (1982), as well as locations of some beryllium occurrences in the Deep Creek-Tintic belt.

FLUORSPAR: Spor Mountain was first mined for fluorspar in 1943; most mines are in carbonate rock formations on Spor Mountain. Mines are typically small adits, shafts, and irregular workings that follow pipe-like ore bodies and veins. Most pipes are mineralized breccias located along faults or fault intersections. Some contain rhyolite and are evidently plugs or volcanic vents. The geology of the fluorspar deposits is described by Staatz and Osterwald (1959) and Staatz and Carr (1964). SLIDE 11 (left) shows surface workings at the Bell Hill mine in 1979. The view is to the west. In the photo, fluorspar occurs in breccia pipes that join below; fluorspar veins like the prominent one in the center of the photo, as much as 2 ft across, cut across the pipes.
URANIUM: Uranium in trace amounts is widespread in the Spor Mountain district, but uranium was mined in commercial amounts only at the Yellow Chief mine, located in The Dell, east of Spor Mountain. Tabular lenses of ore were mined by open pit until 1962. Ore occurs in tuffaceous sandstone that has been correlated with the tuff that hosts beryllium deposits nearby. The geology of the Yellow Chief mine was first discussed by Bowyer (1963) and Staatz and Carr (1964) and was revised by Lindsey (1978) after remapping. SLIDE 12 (above) of the Yellow Chief pit was taken in 1976, about 14 years after mining ceased. The view of the north face shows bentonite (white) overlying tuffaceous sandstone with two lenses of uranium ore, located by the arrows. Beds in the pit dip west.

BERYLLIUM: Beryllium at Spor Mountain has been mined by Brush Wellman since about 1970 and remains the major commercial resource of this metal in the United States. Ore is mined from linear open pits that follow the strike of the tilted ore-bearing tuff. Deposits are mined to shallow depths (very approximately, 30-50 m); depth is limited by the cost of stripping hard rhyolite caprock. Most published information on the Spor Mountain beryllium ore comes from the deposit at the Roadside mine. Descriptions of the deposits at various stages of development have been given by Staatz and Griffitts (1961), Williams (1963), Griffitts (1964), Shawe (1968), and Lindsey and others (1973). SLIDE 13 (left), of the Roadside pit, was taken in 1970. The view is to the south, with the Fish Springs Range in the background. The picture shows a pit face of rhyolite caprock on the right; the caprock has been stripped to expose ore in the bottom of the pit, in the upper part of the beryllium tuff. Other mines and prospects (SLIDES 14-15) are located around the periphery of Spor Mountain, on the east side of Fish Springs Flat and in The Dell (see "Location and regional setting," above).

Igneous rocks and mineral deposits of the beryllium belt formed during three stages of volcanism: 1) rhyodacitic and quartz latitic eruption of flows, breccias, and tuffs, ending in initial eruption of ashflow tuff and caldera collapse; 2) rhyolitic to quartz latitic ashflow tuff eruption and continued cauldron subsidence; and 3) post-caldera eruption of rhyolite and basalt flows and domes. Each stage is characterized by a distinctive composition of igneous rocks, mode of volcanic eruption, tectonic structures, and types of ore deposits. The most recent discussion of geologic history is by Lindsey (1982); detailed geologic maps are by Staatz and Carr (1964) and Lindsey (1979). SLIDE 16 (table above or left) summarizes the geologic history of volcanism and mineralization in the Deep Creek-Tintic belt. SLIDE 17 (geologic map below) shows rocks and structures of the Thomas Range and northern Dugway Range. Both the table and map were prepared in 1980. The unit identified on the map as the Oligocene "Needles Range Formation" has since been determined to be of local origin and is no longer considered to correlate with the Needles Range.

The first stage of volcanism was characterized by eruption of flows, breccias, and tuffs of rhyodacitic (SLIDE 18) to quartz latitic composition about 42-39 Ma (million years ago). Early volcanism culminated with eruption of the Eocene Mt. Laird Tuff (SLIDE 19) from the southwestern part of the Thomas caldera, about 39 Ma. The Mt. Laird eruption initiated subsidence of the Thomas caldera.
The west side of the caldera is marked by a topographic wall and a scarp of the Joy fault (SLIDE 20), which extends northward past the east side of Spor Mountain. The caldera margin is also marked by the confinement of younger, thick intracaldera ashflow tuff east of the Joy fault, and by caldera-wall landslide deposits (SLIDES 21-22) at Spor Mountain. A center of igneous intrusion, explosive eruption of breccia pipes, and copper, manganese, and gold mineralization developed in the Drum Mountains, immediately south of the caldera margin (Nutt and others, 1991).

The second stage of volcanism was characterized by eruption of rhyolitic ashflows at 38 and 32 Ma. These tuffs (the Oligocene and Eocene Joy Tuff (SLIDES 23-25) and the Oligocene Dell Tuff) largely filled the Thomas and adjacent calderas as subsidence continued. The source of some of the tuff is believed to be a cauldron beneath Dugway Valley. Possible vent breccias crop out in the southeastern part of the Thomas Range. Landslide megabreccia on tuff in the northern Drum Mountains may mark the west side of the Dugway Valley cauldron. Mineralization during the second stage was sparse or absent.

The third stage of volcanism began at 21 Ma with eruption of the Miocene Spor Mountain Formation. The first eruptions deposited the stratified beryllium tuff member and formed a complex of rhyolite domes, flows, and plugs (the porphyritic rhyolite member) in the vicinity of Spor Mountain. Between 21 and 7 Ma, the Spor Mountain Formation and all older formations were cut and tilted by basin-range faulting, which broke all of western Utah into fault blocks of varying sizes. At Spor Mountain, deposits of fluorspar, beryllium, and uranium formed during and after eruption of the Spor Mountain Formation. At 6-8 Ma, extensive tuffs, flows, and domes of the Miocene Topaz Mountain Rhyolite (SLIDES 26-34) were erupted in the Thomas Range and the Keg Mountain area (SLIDES 35-40), located east of the Thomas Range. Only weak mineralization accompanied eruption of the Topaz Mountain Rhyolite, and no mineralization at all is known from the Keg Mountains. Basaltic rocks, also characteristic of the third stage of volcanism, are not represented in the vicinity of Spor Mountain, but were erupted at Fumarole Butte (SLIDE 41), about 20 miles southeast of Spor Mountain. A small analog of the mineralized rhyolite at Spor Mountain is found at the Honeycomb Hills (SLIDES 42-45), about 20 miles west of Spor Mountain.

Controls of mineralization are defined by 1) magma chemistry, 2) favorable host rocks, and 3) structure. The association of fluorine and lithophile-metal (primarily Be, Li, and U) mineralization with alkali rhyolite volcanism and basin-range extensional faulting indicates that extensional tectonism may be a fundamental cause of mineralization. In addition to this general control, both stratigraphic and structural features provided paths for mineralizing fluids. Controls of mineralization are summarized by Lindsey (1977) and updated by Lindsey (1982).

MAGMA CHEMISTRY: The three stages of volcanism are marked by distinct chemical compositions of major oxides and trace elements. Rocks of the first stage, of rhyodacitic and quartz latitic composition, contain the largest trace concentrations of base metals such as lead, zinc, and copper. Rocks of the middle stage, mostly rhyolite, contain the smallest concentrations of all trace elements.
Alkali rhyolite of the third, post-caldera stage contains the largest trace concentrations of lithophile elements such as beryllium, lithium, thorium, and uranium. Trace element concentrations of these rocks are interpreted as primary, indicative of retention in erupted magma. Mineralizing fluids either interacted directly with fluorine- and lithophile-rich magma or acquired these elements secondarily by leaching eruptive products. The low content of trace elements, especially lithophile elements, in caldera-forming rhyolite tuffs may reflect volatilization and eruption into the atmosphere. SLIDE 46 (above or right) is a total alkali-silica diagram of volcanic rocks in the Thomas Range and northern Drum Mountains. It illustrates the compositional differences among the three stages of volcanism. Plots of trace element content are given by Lindsey (1982).

FAVORABLE HOST ROCK: The host-rock control of beryllium and uranium deposits at Spor Mountain is precise: every important deposit of beryllium and the only economic deposit of uranium occur in the beryllium tuff member of the Spor Mountain Formation. Beryllium is mined from the top of the beryllium tuff member, immediately beneath the mostly unmineralized porphyritic rhyolite member. The beryllium tuff is a favorable host for beryllium ore because 1) it is adjacent to faults and rhyolite vents where mineralizing fluids could enter the tuff, 2) it is a porous, reactive conduit for mineralizing fluids, including both hydrothermal and ground waters, and 3) it contains carbonate clasts, which reacted with fluorine-rich fluids to precipitate fluorite and beryllium.

SLIDE 47 (above or right) shows one of the few natural outcrops of the beryllium tuff remaining in 1969, south of Spor Mountain. Other natural exposures are found in The Dell. The few natural outcrops may be atypical in that they do not contain abundant clasts of carbonate rock. Tuff with carbonate clasts (SLIDES 48-49) is well exposed in mines and prospect pits on the southwest side of Spor Mountain. Carbonate-clast tuff formed by explosive eruption of the Spor Mountain Rhyolite through carbonate rocks of Paleozoic age. SLIDE 50 (left) shows the porous, reactive beryllium tuff, initially composed of volcanic glass, zeolite, and abundant fragments of carbonate rock. The specimen, from the Monitor pit, contains abundant angular fragments of carbonate rock in tuff and represents the first stage of alteration of dolomite fragments to calcite; the matrix is still glassy. SLIDE 51 (right) shows the beryllium tuff after mineralization. Mineralized tuff is composed of a matrix of clay and potassium feldspar, and the dolomite fragments have been replaced by fluorite, clay, chalcedonic quartz, and manganese oxide. The specimen is mineralized tuff from the Roadside pit. Mineralized tuff specimens (SLIDES 52-54) contain a variety of nodules formed by replacement of rock fragments.

STRUCTURE: SLIDE 55 (left) shows some details of the structural geology of Spor Mountain (see slide 18 for explanation of colors and symbols). Both eruption of alkali rhyolite and the flow of mineralizing fluids were influenced by basin-range faults. Vents (shown by asterisks on the map) for the Spor Mountain Formation (and the younger Topaz Mountain Rhyolite) were located along faults and fault intersections. On Spor Mountain and in The Dell, rhyolite plugs and domes mark vents for the Spor Mountain Formation.
A small dome, located at a fault intersection on the south slope of Spor Mountain, contains a fringe of mineralized pumiceous rhyolite and tuff. Fluorspar pipes on Spor Mountain also occur along faults and at fault intersections, indicating that they, too, are fault-controlled. Some faults existed prior to eruption of rhyolite, but displacement of the Spor Mountain Formation and, locally, even the younger Topaz Mountain Rhyolite indicates repeated basin-range faulting. The fault system that separates Spor Mountain from The Dell follows the margin of the Thomas caldera; it was reactivated during basin-range faulting. At the Claybank beryllium prospect and the Bell Hill fluorspar mine, breccia and wallrock in the reactivated margin fault were mineralized.

Fluorspar, beryllium, and uranium ore differ markedly in occurrence, mineral association, and alteration minerals. Fluorspar is described in detail by Staatz and Osterwald (1959) and Staatz and Carr (1964); uranium ore, by Bowyer (1963) and Staatz and Carr (1964); and beryllium ore, by Staatz and Griffitts (1961), Griffitts and Rader (1963), Shawe (1968), and Lindsey and others (1973). Only beryllium and uranium ore are discussed here. SLIDE 56 (below) shows the distribution of alteration minerals and beryllium in a section through the Roadside beryllium ore body.

BERYLLIUM: At most places, beryllium ore is concentrated in the upper part of the beryllium tuff member, as typified by the Roadside ore body. Feldspathic and montmorillonitic (smectite) clay alteration zones, including lithium-bearing trioctahedral smectite, closely follow and enclose beryllium ore (>1,000 ppm Be). In ore, carbonate clasts have been largely replaced by fluorite that contains submicroscopic bertrandite. Below ore, clasts that have been altered to calcite persist downward and are accompanied by anomalous amounts of lithium (probably in associated smectite). In the lowest part of the beryllium tuff, unmineralized dolomite clasts predominate. Mineralized nodules (commonly called "beryllium nodules," SLIDES 57-62) are locally abundant in beryllium ore; they represent altered clasts of carbonate rock. Other clasts, of quartzite, limestone, and volcanic rocks, are little altered. Carbonate clasts show the alteration sequence dolomite-calcite-chalcedonic quartz/opal-fluorite. The matrix of mineralized tuff consists of smectite (both dioctahedral and trioctahedral varieties), opal, potassium feldspar, and minor kaolinite. Glassy and zeolitic tuff was probably altered in the sequence smectite-sericite-potassium feldspar. Beryllium (bertrandite)-bearing matrix is clay-rich but commonly also feldspathic. Feldspathic alteration is best detected by X-ray diffraction and by overgrowths of clear potassium feldspar on volcaniclastic crystals. Photomicrographs (SLIDES 63-65) show typical matrix of glass and alteration products. Mineralized tuff also contains pore fillings and veinlets of opal (SLIDE 66), manganese oxide minerals, and yellow secondary uranium minerals, including weeksite (SLIDE 67) and beta-uranophane.

URANIUM: Anomalous amounts of uranium are broadly associated with beryllium ore but, at the Roadside ore body, the principal uranium anomaly (100-200 ppm) underlies beryllium ore. The major thorium anomaly (100-150 ppm), in contrast to uranium, follows beryllium ore. Below the beryllium ore, thorium levels decline to crustal background values, and the ratio of thorium to uranium (>4) rises to crustal background.
The cross-section in slide 56 shows the distribution of alteration minerals and beryllium values in cuttings from three exploration drillholes in the Roadside ore body. These sections are discussed in detail by Lindsey (1977, 1982). The distribution of uranium and thorium (SLIDES 68-69) is shown in additional cross sections. SLIDE 70 (left) summarizes analyses of drill cuttings from many mineralized zones in the beryllium tuff member. Anomalous levels of beryllium and uranium occur separately. Likewise, uranium and beryllium ore bodies at Spor Mountain are separate. At the Yellow Chief mine, uranium ore does not have the characteristic clay and fluorite alteration of beryllium ore; only background trace levels of beryllium occur in Yellow Chief uranium ore.

Mineralization processes involving both hydrothermal and meteoric fluids have been proposed to explain the origin of the fluorspar, beryllium, and uranium deposits at Spor Mountain. Hydrothermal mineralization has been discussed by Staatz and Griffitts (1961), Shawe (1968), and Lindsey and others (1973). Mineralization by meteoric waters has been discussed by Burt and Sheridan (1981) for fluorspar, beryllium, and uranium, and by Lindsey (1981) for uranium only. The occurrence of beryllium and uranium in fluorspar indicates coprecipitation of fluorite, bertrandite, and uranium (the uranium probably dispersed in the fluorite lattice; no uranium mineral has been identified in fluorite). The proposed mechanism for precipitation is declining fluorine activity in cooling hydrothermal fluids. At low to moderately elevated temperatures, beryllium-fluoride and uranium-fluoride complexes are stable, but the complexes dissociate in the presence of calcium at temperatures common near the earth's surface.

Abundant secondary minerals of manganese and uranium suggest the likely remobilization of these elements by meteoric water. Uranium in secondary minerals (uranophane and, less commonly, weeksite) was concentrated separately from beryllium and fluorite in the beryllium tuff member. Under near-surface temperatures and oxidizing conditions, uranium may have been mobilized as uranyl carbonate complexes. No reducing material, such as pyrite or organic matter, has been identified, and no uraninite has been found. Uranium is interpreted to have precipitated as uranophane and other oxidized minerals.

The relative role of ascending hydrothermal fluids versus shallow meteoric (ground) water in forming the fluorspar and beryllium deposits remains an unresolved question. Burt and Sheridan (1981) discuss evidence for deposition of fluorspar and beryllium by meteoric water, perhaps heated locally by cooling rhyolite lava. Most other investigators have assumed, or argued for, a concealed pluton that either contributed hydrothermal fluids directly or heated meteoric waters that leached fluorine and lithophile metals from magma and rock. All agree that the uranium ore at the Yellow Chief was deposited by meteoric water.

REFERENCES

Bowyer, B., 1963, Yellow Chief uranium mine, Juab County, Utah, in Sharp, B. J., and Williams, N. C., eds., Beryllium and uranium mineralization in western Juab County, Utah: Utah Geological Society Guidebook to the Geology of Utah, no. 17, p. 15-22.

Burt, D. M., and Sheridan, M. F., 1981, Model for the formation of uranium/lithophile element deposits in fluorine-rich volcanic rocks, in Goodell, P. C., and Waters, A. C., eds., Uranium in volcanic and volcaniclastic rocks: AAPG Studies in Geology No. 13, p. 99-109.

Christiansen, E. H., Sheridan, M. F., and Burt, D. M., 1986, The geology and geochemistry of Cenozoic topaz rhyolites from the western United States: Geological Society of America Special Paper 205, 82 p.

Christiansen, E. H., Bikun, J. V., Sheridan, M. F., and Burt, D. M., 1984, Geochemical evolution of topaz rhyolites from the Thomas Range and Spor Mountain, Utah: American Mineralogist, v. 69, no. 3/4, p. 223-236.

Cohenour, R. E., 1963, The beryllium belt of western Utah, in Sharp, B. J., and Williams, N. C., eds., Beryllium and uranium mineralization in western Juab County, Utah: Utah Geological Society Guidebook to the Geology of Utah, no. 17, p. 4-7.

Davis, L. J., 1984, Beryllium deposits in the Spor Mountain area, Juab County, Utah, in Kerns, G. J., and Kerns, R. L., Jr., eds., Geology of northwest Utah, southern Idaho and northeast Nevada: 1984 Field Conference, Utah Geological Association Publication 13, p. 173-183.

Galyardt, G. L., and Rush, F. E., 1981, Geologic map of the Crater Springs known geothermal resources area and vicinity, Juab and Millard Counties, Utah: U.S. Geological Survey Miscellaneous Investigations Series Map I-1297, scale 1:24,000.

Griffitts, W. R., 1964, Beryllium, in Mineral and water resources of Utah: Utah Geological and Mineralogical Survey Bulletin 73, p. 71-75.

Griffitts, W. R., and Rader, L. F., Jr., 1963, Beryllium and fluorine in mineralized tuff, Spor Mountain, Juab County, Utah: U.S. Geological Survey Professional Paper 476-B, p. B16-B17.

Hilpert, L. S., and Roberts, R. J., 1964, Economic geology, in Mineral and water resources of Utah: Utah Geological and Mineralogical Survey Bulletin 73, p. 28-34.

Lindsey, D. A., 1975a, Mineralization halos and diagenesis in water-laid tuff of the Thomas Range, Utah: U.S. Geological Survey Professional Paper 818-B, 59 p.

Lindsey, D. A., 1975b, The effect of sedimentation and diagenesis on trace element composition of water-laid tuff in the Keg Mountain area, Utah: U.S. Geological Survey Professional Paper 818-C, 35 p.

Lindsey, D. A., 1977, Epithermal beryllium deposits in water-laid tuff, western Utah: Economic Geology, v. 72, no. 2, p. 219-232.

Lindsey, D. A., 1978, Geology of the Yellow Chief mine, Thomas Range, Juab County, Utah, in Shawe, D. R., ed., Guidebook to mineral deposits of the Great Basin: Nevada Bureau of Mines and Geology Report 32, p. 65-68.

Lindsey, D. A., 1979, Geologic map and cross-sections of Tertiary rocks in the Thomas Range and northern Drum Mountains, Juab County, Utah: U.S. Geological Survey Miscellaneous Investigations Map I-1176, scale 1:62,500.

Lindsey, D. A., 1981, Volcanism and uranium mineralization at Spor Mountain, Utah, in Goodell, P. C., and Waters, A. C., eds., Uranium in volcanic and volcaniclastic rocks: AAPG Studies in Geology No. 13, p. 89-98.

Lindsey, D. A., 1982, Tertiary volcanic rocks and uranium in the Thomas Range and northern Drum Mountains, Juab County, Utah: U.S. Geological Survey Professional Paper 1221, 71 p.

Lindsey, D. A., Ganow, H., and Mountjoy, W., 1973, Hydrothermal alteration associated with beryllium deposits at Spor Mountain, Utah: U.S. Geological Survey Professional Paper 818-A, 20 p.

Ludwig, K. R., Lindsey, D. A., Zielinski, R. A., and Simmons, K. R., 1980, U-Pb ages of uraniferous opals and implications for the history of beryllium, fluorine, and uranium mineralization at Spor Mountain, Utah: Earth and Planetary Science Letters, v. 46, no. 2, p. 221-232.

McAnulty, W. N., and Levinson, A. A., 1964, Rare alkali and beryllium mineralization in volcanic tuffs, Honeycomb Hills, Juab County, Utah: Economic Geology, v. 59, no. 5, p. 768-774.

Nutt, C. J., Thorman, C. H., Zimbelman, D. R., and Gloyn, R. W., 1991, Geologic setting and trace-element geochemistry of the Detroit mining district and Drum gold mine, Drum Mountains, west-central Utah, in Raines, G. L., Lisle, R. E., Schafer, R. W., and Wilkinson, W. H., eds., Geology and ore deposits of the Great Basin: Geological Society of Nevada and U.S. Geological Survey, Reno, Nevada, April 1-5, 1990, Proceedings, v. 1, p. 491-509.

Shawe, D. R., 1968, Geology of the Spor Mountain beryllium district, Utah, in Ridge, J. D., ed., Ore deposits of the United States, 1933-1967 (Graton-Sales volume): New York, American Institute of Mining, Metallurgical, and Petroleum Engineers, v. 2, pt. 8, p. 1149-1161.

Shawe, D. R., 1972, Reconnaissance geology and mineral potential of the Thomas, Keg, and Desert calderas, central Juab County, Utah: U.S. Geological Survey Professional Paper 800-B, p. B67-B77.

Shubat, M. A., and Snee, L. W., 1992, High-precision 40Ar/39Ar geochronology, volcanic stratigraphy, and mineral deposits of Keg Mountain, west-central Utah, in Thorman, C. H., ed., Application of structural geology to mineral and energy resources of the central and western United States: U.S. Geological Survey Bulletin 2012, p. G1-G16.

Staatz, M. H., and Carr, W. J., 1964, Geology and mineral deposits of the Thomas and Dugway Ranges, Juab and Tooele Counties, Utah: U.S. Geological Survey Professional Paper 415, 188 p.

Staatz, M. H., and Griffitts, W. R., 1961, Beryllium-bearing tuff in the Thomas Range, Juab County, Utah: Economic Geology, v. 56, no. 5, p. 946-950.

Staatz, M. H., and Osterwald, F. W., 1959, Geology of the Thomas Range fluorspar district, Juab County, Utah: U.S. Geological Survey Bulletin 1069, 97 p.

Williams, N. C., 1963, Beryllium deposits, Spor Mountain, Utah, in Sharp, B. J., and Williams, N. C., eds., Beryllium and uranium mineralization in western Juab County, Utah: Utah Geological Society Guidebook to the Geology of Utah, no. 17, p. 36-59.

Zielinski, R. A., Lindsey, D. A., and Rosholt, J. N., 1980, The distribution and mobility of uranium in glassy and zeolitized tuff, Keg Mountain area, Utah, U.S.A.: Chemical Geology, v. 29, no. 1, p. 139-162.
Administrative data are readily available, inexpensive, computer readable, and cover large populations. Despite coding irregularities and limited clinical details, administrative data supplemented by tools such as the Agency for Healthcare Research and Quality (AHRQ) patient safety indicators (PSIs) could serve as a screen for potential patient safety problems that merit further investigation, offer valuable insights into the adverse impacts and risks of medical errors, and, to some extent, provide benchmarks for tracking progress in patient safety efforts at local, state, or national levels.

Keywords: patient safety research; patient safety indicators; error reporting

The first and most critical obstacle in the patient safety campaign is the lack of a system that can reliably identify and report medical errors.1 Such a system is a prerequisite to study the magnitude of the problem, to identify risks and correlated factors, to find solutions, and to examine the effectiveness of any intervention aimed at reducing medical errors.

Medical records have so far been the primary source for researching medical errors. Over 90% of the original studies reviewed by the 1999 Institute of Medicine (IOM) report involve medical record abstractions.1 This source contains rich clinical details that allow identification of various medical injuries and near misses and analysis of the circumstances and causes of errors. A significant limitation is that medical records are mostly in paper format, or in electronic formats that are not readily usable for research. Transforming medical records into research data is resource intensive and requires exceptional knowledge and skills in medical context and research. As a result, patient safety research with medical records is usually limited in scope and statistical power. Alternative systems for safety research include mandatory and voluntary reports of medical errors, drug safety surveillance, nosocomial infection surveillance, and medical malpractice data. All of these systems have limitations and/or access difficulties. For example, about 20 US states mandate reporting of serious adverse events such as unanticipated death and brain or spinal cord damage, but no published study has ever used the data, probably because they are strictly guarded from the public and researchers.2

Administrative data are a viable source, and their potential in patient safety research is increasingly recognized. They are readily available, inexpensive, computer readable, typically continuous, and cover large populations. In the early 1970s administrative data were used to reveal startling small area variations in health care and practice patterns.3 In the 1980s many researchers started using the data for outcomes research.4 Since the early 1990s researchers have been exploring the potential of administrative data in assessing quality and patient safety. Notable examples are the complication screening programs (CSP) by Iezzoni and colleagues5 and the Agency for Healthcare Research and Quality (AHRQ) quality indicators.6 In 2002 AHRQ developed and released the patient safety indicators (PSIs),7 a tool specifically designed for screening administrative data for patient safety events and medical errors. This development opened a new stage and new opportunities for patient safety research using administrative data.
This paper provides a critical review of the progress in administrative data based patient safety research, with a focus on the PSIs and an initial analysis of applying the PSIs to hospital discharges in a sample of general hospitals in the US. The merits and limitations of claims based systems are reviewed and the potential applications and future challenges discussed.

ADMINISTRATIVE DATA BASED PATIENT SAFETY RESEARCH

We conducted an extensive literature review aimed at identifying all empirical research in patient safety or medical errors that used administrative data. Our review started with the IOM report,1 the review performed by the University of California at San Francisco-Stanford Evidence-Based Practice Center under contract with AHRQ,8 Iezzoni's review of administrative data based research on quality of care,9 and our previous research.10,11 We then carried out a systematic search of Medline and AHRQ grant databases from 1966 to 2002 using the following search algorithm: ((patient safety OR medical error* OR medical-errors* OR adverse event* OR complication* OR iatrogenic* OR nosocomial) AND (administrative data* OR insurance-claim-review* OR claims data OR ICD-9-CM)).

Use of administrative data in quality and safety assessment

Administrative data, also called claims data, are by-products of administering and reimbursing healthcare services. Government payers (such as Medicare, Medicaid, and Veterans Affairs) and private insurance companies regularly maintain a large amount of administrative data concentrated primarily on acute hospital admissions and, increasingly, on outpatient care, nursing homes, home care, and hospice programs. The core data elements of an administrative data system are admission date, discharge date and status, primary and varying numbers of secondary International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnoses, procedures, and external causes of injury, and some demographic variables. These data are often available as compiled research databases from federal agencies, state health departments, health plans, and private data institutions. For example, the AHRQ Healthcare Cost and Utilization Project (HCUP), a partnership of the federal government and over 30 participating states, compiles uniform hospital discharge records for research purposes (see http://www.ahrq.gov/data/hcup/ for more details).

Overall, there has been limited use of administrative data in quality and safety research. Roos and Brazauskas in 1990 proposed screening claims data for adverse events to guide subsequent medical record reviews to determine whether a quality problem existed.12 Leatherman et al in 1991 described a quality screening and management program developed at the United Healthcare Corporation that used claims data to explore incidence rates, adverse events, and other outcomes measures.13 Riley et al in 1993 used ICD-9-CM codes in Medicare claims to identify readmission to hospital for adverse events following selected procedures.14 The work of Iezzoni and colleagues on the CSP in the early 1990s,5,15,16 supported by AHRQ, was the first systematic exploration of the value of administrative data in quality and patient safety research. The CSP relied on ICD-9-CM codes to identify 27 potentially preventable in-hospital complications such as postoperative pneumonia, hemorrhage, medication incidents, and wound infection.
Iezzoni et al16 found that patients with complications were significantly older, more likely to have comorbid conditions, more likely to die, and had higher charges and longer lengths of stay than other patients. They also found that hospital complication rates generally were correlated across clinical areas, but not correlated with hospital mortality rates.15 Higher relative rates of complications were associated with larger hospitals, availability of major teaching facilities, and provision of open heart surgery, as well as with coding more diagnoses per case.15 Such findings, along with findings from other studies,5,17–19 suggested that the CSP had certain clinical validity for research use, but also cast some doubt on the usefulness of the CSP as a tool for provider level quality assessment.

In the mid 1990s AHRQ developed a set of administrative data based quality indicators as a companion tool for HCUP, named the HCUP QIs.6 The 33 original QIs included several measures of avoidable adverse events and complications. Given the substantial nationwide interest in quality of care and a lack of quality assessment tools, the QIs were soon used in empirical studies. For example, Needleman and colleagues20 at Harvard University and Kovner and colleagues21 at AHRQ used some of these QIs to assess the association between inadequate nurse staffing and high rates of complications in hospitalized patients.

The IOM's call to develop medical error reporting systems three years ago1 prompted renewed interest and vigor in developing a tool specifically designed for patient safety research that takes advantage of the large volume of existing claims data. Researchers at AHRQ10 compiled a list of potential administrative data measures from the CSP, the HCUP QIs, other published works, and a hand search of ICD-9-CM codes for complications, adverse events, medical negligence, and iatrogenic conditions. For each potential indicator, ICD-9-CM code inclusion and exclusion criteria were created to identify appropriate risk pools and to minimize ambiguity as to whether an event was a true error or an unpreventable complication. This list of potential codes was grouped into 13 measures based on clinical cohesiveness of the codes. Analyses using this algorithm revealed significant safety incidences and associated adverse patient outcomes in both hospitalized adults10 and children.11 Realizing the potential value of administrative data based measures to screen for patient safety events, AHRQ contracted with the Evidence-Based Practice Center (EPC) at the University of California San Francisco (UCSF) and Stanford University to further expand, test, and refine these measures and to improve the evidence behind their use with extensive literature reviews and broad clinical consensus panels. The final product of this joint effort is the AHRQ patient safety indicators (PSIs).

AHRQ PATIENT SAFETY INDICATORS (PSIs)

In developing the PSIs, the UCSF-Stanford team proceeded in several steps:

- they reviewed the literature to develop a list of candidate indicators in addition to the initial PSIs developed at AHRQ and collected information about their performance;

- they formed several panels of clinician experts to solicit their judgment of clinical sensibility and to suggest revisions to the candidate indicators;

- they consulted ICD-9-CM coding experts to ensure that the definition of each indicator reflected the intended clinical situation;

- they conducted empirical analysis of the promising indicators using HCUP data; and

- they produced the software and documentation for public release at AHRQ.
The PSIs include 20 indicators with reasonable face and construct validity, specificity, and potential for fostering quality improvement. Seven of the PSIs are recommended as area based PSIs, to capture complications/adverse events occurring in an area as opposed to within an institution. The PSI software calculates raw rates; risk adjusted rates, derived by applying the average case mix of a baseline file that reflects a large proportion of the US hospitalized population in patient age, sex, diagnosis related groups (DRGs), and co-morbidities; and smoothed rates that dampen random fluctuations over time.7 Thirty co-morbidity categories23 are automatically generated by the software and used as risk adjusters together with variables available in most administrative data systems. Table 1 describes the definitions of the numerators, denominators, and key exclusions for the 20 PSIs, and table 2 provides the findings from applying the PSIs to the 7.45 million discharges in the HCUP Nationwide Inpatient Sample for the year 2000. Note that each PSI has a unique risk pool determined by its denominator definition and exclusion criteria. Table 3 presents unadjusted length of stay, charges, and in-hospital mortality for patients with and without PSI events. Tables 2 and 3 show substantial numbers of patient safety events with tangible impacts on patient outcomes: increased length of stay, increased likelihood of in-hospital death, and increased charges for patients experiencing a PSI event compared with those who do not. Taken together, these tables point to a significant potential role for administrative data in patient safety efforts.
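As a minimal illustration of this kind of screening, the sketch below flags discharges carrying the ICD-9-CM codes noted in the coding discussion that follows (9984, 9987, and E8710-E8719, for a foreign body left after a procedure) and computes a raw, unadjusted rate over a risk pool. The record layout and field names are hypothetical; the AHRQ PSI software itself applies far more detailed denominator definitions, exclusions, and risk adjustment.

```python
# Minimal sketch of PSI-style screening on hypothetical discharge records
# (not the AHRQ software). Codes are those cited in the text for a retained
# foreign body after a procedure.
FOREIGN_BODY_CODES = {"9984", "9987"} | {f"E871{i}" for i in range(10)}  # E8710-E8719

def flag_foreign_body(record):
    """True if any diagnosis/E code on the discharge signals a retained foreign body."""
    return any(code in FOREIGN_BODY_CODES for code in record.get("dx_codes", []))

def raw_rate(records):
    """Raw rate: flagged discharges divided by the eligible risk pool."""
    pool = [r for r in records if r.get("eligible", True)]  # exclusion criteria would go here
    if not pool:
        return 0.0
    return sum(flag_foreign_body(r) for r in pool) / len(pool)

# Example: one flagged discharge out of two eligible ones.
discharges = [
    {"dx_codes": ["5990", "9984"], "eligible": True},  # retained foreign body coded
    {"dx_codes": ["486"], "eligible": True},           # pneumonia only, not flagged
]
print(raw_rate(discharges))  # 0.5
```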
CHALLENGES OF ADMINISTRATIVE DATA BASED PATIENT SAFETY RESEARCH

Any discussion of patient safety research using administrative data should recognize some data limitations and understand how such limitations play into the analysis. In particular, we focus on how these issues relate to the PSIs and/or are addressed by the PSIs.

Problems with ICD-9-CM coding

There are many concerns about ICD-9-CM coding with regard to patient safety research. First, we can only find events for which there are corresponding ICD-9-CM codes. A small number of standard codes and E codes appear to identify medical errors. For example, ICD-9-CM codes 9984, 9987, and E8710-E8719 can be used to record a foreign body left after a procedure. A coder should in theory code both the standard ICD-9-CM code and the E code. These codes, including the E codes that are specifically designed to record injuries, in no way capture any significant percentage of the entire universe of medical errors that can occur. Secondly, there may be a substantial amount of coding error due to misunderstanding of codes, errors by physicians and coders, or miscommunication between them. An IOM study in 1977 found that agreement on the principal diagnosis between hospital reports and IOM reabstraction was only 65.2%.24 Thirdly, coding is very likely to be incomplete because of limited slots for coding secondary diagnoses and other reasons. Fourthly, assignment of ICD-9-CM codes is variable because of the absence of precise clinical definitions and context. Iezzoni and colleagues9 found that the mean number of diagnoses coded in 441 California hospitals ranged from 2.5 to 11.7, and this variation explained part of the differences between high and low mortality hospitals. Some of the variation may be driven by financial incentives, as in "DRG creep," where hospitals choose codes with higher Medicare pay schedules.25–27 Finally, diagnoses are not dated in current administrative data systems, making it difficult to determine whether a secondary diagnosis occurred before admission (a co-morbid disease) or during the stay in hospital (a complication or medical error).9,18,28 Overall, these limitations were not amenable to being proactively addressed in developing the PSIs.

Inadequate reliability and validity in identifying medical errors

Administrative data have been shown to have low sensitivity but fair specificity in identifying quality gaps. Bates and colleagues29 found that, while medical record review results in many false positives, administrative data were able to identify only half of patients with adverse events but had a fair specificity of 74%. Iezzoni and colleagues conducted several validity studies on the CSP. One study30 reported that physician reviewers confirmed CSP flagged complications in 68.4% of surgical and 27.2% of medical cases. Another study18 found that 89% of surgical cases and 84% of medical cases had their CSP trigger codes corroborated by review of the medical records. A third19 indicated that objective clinical criteria or physicians' notes supported the coded diagnosis in 70% to over 90% of most CSP flagged conditions. Focusing on specific adverse events for a specific patient population, as is built into the PSIs, improves specificity appreciably. Romano et al31 showed that specificity for postoperative complications after diskectomy can be as high as 98%. No attempts have yet been made to establish the validity and reliability of the AHRQ PSIs.

Lack of clinical details for risk analysis and risk adjustment

Lack of clinical details is a major limitation of claims data.32 Of special concern is severity of illness, which affects patient outcomes and conceivably affects the likelihood of medical errors. Analysis of outcomes and risk factors associated with medical errors is limited to variables available from administrative data. The AHRQ PSIs and other similar tools usually identify a relatively homogeneous risk pool for each PSI, which not only reduces misclassification but also alleviates variation in risk factors.5,6 Coding co-morbidities using ICD-9-CM codes represents another major effort built into the PSIs for risk adjustment.23,33–36 Iezzoni37 provided an excellent review of several claims based systems measuring severity. The performance of these systems depends substantially on complete coding of diagnoses.38

The large size of administrative data sets and the relative rarity of safety events require special consideration in statistical analysis. The sheer size of the data can give the illusion of great precision and power.39 Given the standard errors for cases with obstetric trauma without instrumentation and their risk pool (table 3), as an example, a difference of 0.014 days in length of stay in hospital between the two groups is statistically significant (p<0.05). Such differences are often of little clinical significance. Coupled with missing important confounding variables and difficulty in choosing correct statistical models that fit the data, clinically insignificant but statistically significant results could lead to biased inferences and erroneous conclusions.
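A toy calculation makes the point; the means, standard deviation, and group sizes below are hypothetical stand-ins rather than the actual table 3 values.

```python
# With groups in the millions, a clinically trivial 0.014-day difference in
# mean length of stay clears the p<0.05 bar. All inputs here are hypothetical.
import math

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    """z statistic for the difference of two independent sample means."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

z = two_sample_z(3.014, 3.000, 2.0, 2.0, 1_000_000, 1_000_000)
print(round(z, 2))  # ~4.95, well past the 1.96 cutoff for p<0.05
```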
Matched case-control analysis appears to be a method particularly applicable to administrative data based analysis, where cases are rare and controls are plentiful. Classen et al40 and Bates et al41 matched cases of adverse drug reactions (ADR) with controls without ADR on DRG, co-morbidity, severity, and demographic characteristics to estimate the excess costs, mortality, and length of stay attributable to ADR. Jensen et al42 matched cases of hospital acquired Staphylococcus aureus infections in Danish hospitals to patients with the same primary diagnosis at admission to identify risk factors among unmatched factors such as age and anemia. Bates et al43 matched patients with ADR to patients from the same hospital unit with similar pre-event length of stay to study risk factors for ADR. Matching retains only cases and controls with similar covariates. By matching cases and controls within the same hospitals, researchers can focus on patient level factors without concerns over hospital coding practices and hospital effects.
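The selection step these studies describe can be sketched in a few lines. In this minimal illustration, each case is paired with controls drawn from the same hospital and DRG; the record fields are hypothetical, and a real analysis would also match on co-morbidity, severity, and demographic characteristics, as the studies above did.

```python
# Minimal sketch of matched case-control selection on hypothetical discharge
# records: each case is paired with up to n_controls from the same hospital
# and DRG. Real studies also match on co-morbidity, severity, demographics.
import random
from collections import defaultdict

def match_controls(cases, candidates, n_controls=1, seed=0):
    """Return (case, [controls]) pairs matched on hospital and DRG."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for c in candidates:
        pools[(c["hospital"], c["drg"])].append(c)
    pairs = []
    for case in cases:
        pool = pools.get((case["hospital"], case["drg"]), [])
        pairs.append((case, rng.sample(pool, min(n_controls, len(pool)))))
    return pairs

# Example: case 1 is paired with control 2 (same hospital and DRG).
cases = [{"id": 1, "hospital": "A", "drg": 127}]
controls = [{"id": 2, "hospital": "A", "drg": 127},
            {"id": 3, "hospital": "B", "drg": 127}]
print(match_controls(cases, controls))
```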
POTENTIAL APPLICATIONS OF ADMINISTRATIVE DATA IN PATIENT SAFETY RESEARCH

Patient safety indicators as a screening tool

First and foremost, PSIs are considered indicators, not definitive measures, of patient safety concerns.10,22 As with the CSP,17 the intention of these indicators is to provide a useful screening tool to highlight areas in which quality should be investigated in greater depth. PSIs enable institutions to quickly and easily identify a manageable number of medical records for closer scrutiny. Using administrative data to screen cases for chart review has also been proposed by Roos and Brazauskas in 1990 and by Silber and colleagues more recently.12,44 For example, for cases with a foreign body left in after surgery (table 2), 7.45 million medical records would have to be reviewed to uncover 536 cases. Screening the claims with PSIs would quickly identify such rare events, and the associated medical records could then be abstracted for in-depth analysis. This approach has great potential to enhance the design of medical record based patient safety research, but it has yet to be widely adopted.

Epidemiological study in patient safety

Administrative data are valuable in epidemiological studies of the incidence, consequences, and factors associated with medical injuries. Our earlier studies10,11 and those of Romano et al22 revealed substantial incidence rates and provided some insights into the outcomes and risk factors associated with medical errors. Our ongoing analysis of the 2000 data suggests that the medical errors identified in table 2, excluding death in low mortality DRGs and failure to rescue (where patients with errors all died during hospitalization), account for a total of 2.4 million extra days in hospital, $9263 million in extra charges, and 32 591 attributable deaths in the US per year. It is also possible to identify certain risk factors such as nurse staffing.20,21 At a time when no reliable reporting system exists, applying PSIs to administrative data could reveal overall incidences and trends and provide useful benchmarks at the local, state, regional, and national levels for tracking progress. However, such use must be made with care. Coding differences across institutions,9 lack of robust risk adjustment,37,45 the relative rarity of safety events, and many other reasons make it uncertain that differences between PSI rates reflect true differences in quality.31,46 Because of these limitations, public reporting of PSI rates for institutions and regions may raise contention over technicalities rather than facilitate quality improvement. Developers of PSIs and similar administrative data based systems in general express caution with regard to the use of the indicators for public reporting at an institutional level.10,22

We have highlighted generic and specific concerns regarding administrative data and their use, in particular for patient safety research. Despite the known limitations, the lack of tools for patient safety today makes administrative data based tools like the AHRQ PSIs appealing. The AHRQ PSIs could be useful to identify potential patient safety problems that merit further investigation. With proper methodology, administrative data can provide valuable insights into the incidences, adverse impacts, and risks of medical errors. In addition, PSI rates could serve as useful monitors at local, state, and national levels and as benchmarks for tracking progress in patient safety. Further research will be needed to establish whether, and under what circumstances, these indicators are valid measures of safety related hospital performance for comparative purposes. Most important in this effort is work explicitly examining cases flagged by PSIs using chart review. Some of this work is already ongoing.

Future growth in electronic health data will make tools like the PSIs more useful. Ongoing refinement of ICD-9-CM and, eventually, ICD-10-CM should introduce more data elements, may allow clearer distinction of complications from conditions present at admission, and should increase the specificity of codes. Iezzoni17 predicted that the definition, content, and scope of administrative data would change dramatically in the near future, and that inclusion of clinical information from both clinicians and patients in administrative data would open exciting new possibilities. In 2001 AHRQ, within the US Department of Health and Human Services (HHS), launched a $50 million initiative aimed at improving patient safety, focusing primarily on medical error data and reporting systems.47 This investment will hopefully come to fruition in real time, user friendly, nationwide error reporting systems. It is conceivable that any eventual reporting system would involve triangulation between current administrative data, chart review, and self-reports in order to maximize the amount of information available with respect to medical errors. At present, the value of administrative data—with their large scale, uniformity, and regularity—should be fully harvested in our campaign against medical errors.

Pointers for future research

- Understanding the clinical sensitivity and specificity of the AHRQ PSIs.

- Understanding the interplay between administrative data and self-reports or chart abstraction for research on patient safety.

- Development of multifaceted error reporting systems which make maximal use of all data available.

Administrative data are readily available, inexpensive, and cover large populations. Tools such as the AHRQ PSIs are available to begin identifying, tracking, and improving healthcare processes in the interest of patient safety.
Researchers need to understand the issues and limitations of administrative data as they relate to studying patient safety events.

The authors of this article are responsible for its content, including any clinical or treatment recommendations. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality or the US Department of Health and Human Services. Dr Miller completed the analysis presented here while serving as Acting Director of the Center for Quality Improvement and Patient Safety at the Agency for Healthcare Research and Quality of the US Department of Health and Human Services.
Opioids, Pain Management, and Addiction

Jennifer P. Schneider, M.D., Ph.D.

Pain Practitioner, Winter 2006-2007, 16:17-24.

Although chronic pain is the most frequent cause of suffering and disability that seriously impairs the quality of life in the United States, it is still regularly undertreated. Despite the availability of potent pain medications, most prescribers are still reluctant to adequately treat chronic pain, especially pain that is not caused by cancer. Some reasons I have heard for the greater willingness to treat cancer patients are: "It doesn't matter if the patients get addicted, since their lifespan may be limited anyway." "With cancer pain I know the pain is real, whereas with back pain, headaches, chronic pelvic pain, neuropathic pain, etc. you can't see anything on labs or x-rays." "Once you start, you have to keep increasing the dose because the patients will become tolerant and will need more and more to get pain relief." Such responses bespeak a fundamental misunderstanding of chronic pain and of opioids. This article will address these misunderstandings. Its focus is on opioids, but keep in mind that treating chronic pain often requires a comprehensive approach including several non-opioid medications (acetaminophen, NSAIDs, anticonvulsants for neuropathic pain, muscle relaxants for muscle spasm, etc.) along with physical therapy, exercise, injections, and alternative approaches.

Chronic pain is pain that lasts 3 or 6 months (or some other arbitrary time period) and that has lost its usefulness. Acute pain in a particular body part is a useful signal that something has gone wrong and needs assessment, but with chronic pain there is often a disconnect between the source of the pain and the pain experience. The cause of the pain may have resolved, or the painful body part may even have been amputated. But the pain is still real. When acute pain is prolonged (e.g. by undertreatment), changes occur in the central nervous system (a phenomenon called central sensitization) such that the pain signals continue to be sent through nerve fibers to the brain, no matter what is going on at the original site of the pain (Woolf 2000). The pain signals have taken on a life of their own, much like an experienced typist who starts typing a word and finds his or her fingers completing a commonly typed word rather than the one intended, or a driver who intends to drive home by a different route than normal, but finds himself having unthinkingly turned the car onto the street he usually uses.

In chronic pain patients, nerve signals that are normally interpreted as heat or pressure may be perceived as pain (allodynia), or normally mild pain signals may be severely painful (hyperalgesia). The result is that it is hard to assess chronic pain objectively. Typically what is observed is pain behavior, so that the patient who grimaces and groans, whose face is pale, who is hyperventilating or crying, is believed to be in a lot of pain, whereas a patient who sits quietly, or who is observed laughing in the waiting room, is thought not to be in pain. Chronic pain patients, however, adjust to their condition, as does their autonomic nervous system. In reality, the best measure of chronic pain intensity is the patient's word. This is considered by JCAHO (the Joint Commission on Accreditation of Health Care Organizations) to be the gold standard of pain assessment (JCAHO, 2000).
Not believing the patient is likely to lead to exaggerated pain behaviors and can damage the practitioner-patient relationship.

The goal of chronic pain treatment

The goal of acute pain treatment is first and foremost to diagnose and treat the source of the pain, and second to provide pain relief. Chronic pain treatment, however, is different. The initial step again is diagnosis and definitive treatment. But once the patient is beyond that stage – the back has been operated on twice and the surgeon now says that additional surgery is unwarranted; the neurologist says the headaches are not due to a brain tumor but rather are a chronic recurrent problem; the patient has been patched up after the car accident but pain remains – the goals become relieving pain and improving function. Patients often believe that if only one more sophisticated test is done or one more specialist seen, the "real cause" can be determined and curative treatment instituted. Most of the time this is not so; patients need to be educated to take the focus off diagnosis and put it on improving their function. A successful outcome in chronic pain treatment is one that improves the patient's functioning. When a patient says, "I have my life back," he doesn't mean that he is still spending all day in bed, but with less pain. He means he can now go to work, walk the dog, clean the house, do yardwork, have sex, etc. That constitutes a good outcome, but getting there may require strong pain medications.

Are opioids safe?

In their position paper on pain management for geriatric patients, the American Geriatrics Society wrote that opioids are safer than NSAIDs (AGS, 2002). Unlike NSAIDs, opioids do not cause GI bleeding, do not elevate blood pressure, and have no specific organ toxicity. Their chief side effects are nausea/vomiting, sedation/respiratory depression, and constipation. The first two usually resolve with continued dosing. Constipation does not, so patients on opioids need a continual bowel program. Opioids bind to mu receptors in the gut, slowing the transit of materials through the intestinal tract. For this reason, fluids and fiber aren't sufficient; the patient needs a laxative to counteract the slowing effect of the opioid. I generally recommend a preventive regimen of daily senna plus a stool softener.

Chronic opioid administration often causes a subnormal testosterone level in males (Daniell, 2002; Rajagopal et al, 2003). This can result not only in decreased libido and erectile dysfunction but also in decreased muscle strength, less energy, and eventually in osteoporosis. All male patients on chronic opioids should have their testosterone levels checked. Unless contraindicated, consider testosterone replacement.

There is no accepted upper limit of safety for opioid analgesics. Because of genetic differences and varying pathology, patients differ enormously in the dose needed for adequate analgesia. Patients may also differ genetically in their response to a particular opioid (Galer et al, 1992), so if high doses of one opioid are not effective, consider changing to another. Opioid-induced sedation typically resolves within a few days after a dose is begun or increased, so patients need to avoid driving when sedated. Once they feel alert, it is generally safe to drive because they have adequate psychomotor functioning (Jamison et al, 2003; Sabatowski et al, 2002; Fishbain et al, 2002).

Tolerance is the need to increase the dose to get the same effect, or a decrease in effect when the same dose is continued.
Asking “Do patients get tolerant to opioids?” is asking the wrong question. The correct response is, “Tolerant to which effect?” Opioids have several effects, and tolerance to each differs. As mentioned above, tolerance to sedation and nausea is common – a desirable outcome. Tolerance to constipation is not, which is why an ongoing bowel program is necessary. Contrary to common opinion, tolerance to the pain-relieving effect of opioids is uncommon (Scimeca et al, 2000; Portenoy, 1996). Animal studies suggest that in some situations opioids cause hyperalgesia (Mercadante et al, 2003), but this is rarely observed in the clinical setting. Usually when a patient is on a dose of opioid that gives good pain relief, he or she is likely to stay on that same dose for a long time. When the patient complains of increased pain, consider the following possible reasons:
- The patient has increased her level of physical activity
- The underlying disease has worsened or a new pain problem has appeared
Increased pain after a year or two on a stable dose is not due to late development of tolerance. Assessment requires going back to basics: re-evaluate the back, or whatever region of the body has increased pain.

Understanding physical dependence versus addiction

Physical dependence is a property of various classes of drugs, including opioids and corticosteroids. Once the body has become habituated to such drugs, abrupt cessation results in a recognizable withdrawal syndrome. Full-blown withdrawal from steroids and alcohol is potentially fatal; withdrawal from opioids is uncomfortable but rarely dangerous. Some drugs of abuse are associated with a withdrawal syndrome; others (such as cocaine and marijuana) are not. Withdrawal symptoms can be avoided by tapering the drug, as every practitioner who prescribes corticosteroids knows. Physical dependence is a different phenomenon from addiction. Confusion arises because opioids can produce both physical dependence and addiction. Pain patients treated chronically with opioids often become physically dependent, but only occasionally develop de novo addiction. A prior history of drug or alcohol addiction or abuse increases the risk of addiction. Drug addiction is a disease with three elements:
- Loss of control (also called compulsive use) of a drug – the person uses more than intended, is unsuccessful in attempts to cut down, etc.
- Continuation despite significant adverse consequences – disease or injury, job loss, relationship difficulties, arrest, etc.
- Preoccupation or obsession – over obtaining, using, and recovering from the effects of the drug.
Signs of possible drug addiction in the medical setting may include:
- Repeatedly using up the drug before the next refill (but see the section on pseudoaddiction below!)
- Frequent requests for early refills; recurrent stories that the medication was lost, stolen, fell down the toilet, was eaten by the dog, etc.
- Abuse of illicit drugs
- Selling prescription drugs
- Injecting topical or oral medications
For a more detailed description of addictive disorders, look at the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; APA, 1994), but notice that the word addiction appears nowhere in this “bible” of psychiatric disorders. Instead, the word has been replaced by the term dependency, so that opioid addiction is called opioid dependency – which is not at all the same thing as physical dependency on opioids.
This is why, when discussing opioid addiction versus physical dependency, it’s crucial to make the distinction.

Does prescribing opioids for pain lead to de novo addiction?

Surprisingly, there are no solid published studies to answer how likely prescribing opioids for chronic pain is to engender iatrogenic addiction. In the U.S. about 10 percent of people are addicted to drugs, so it’s expected that this will also be true of the pain population. Clinical experience by pain specialists such as Russell Portenoy suggests that de novo addiction to opioids in patients without an addiction history is unlikely to result from long-term opioid treatment for pain (Portenoy, 2003). Ways to minimize this likelihood are to keep careful records of when refills are due, have clear-cut rules and expectations outlined in a written contract, get urine drug screens if you have any concerns, and see the patient on a regular basis (see below).

Pseudoaddiction versus addiction

In the clinical setting, undertreated patients may look like addicts, because in their efforts to obtain more pain relief they may use more than prescribed, go to more than one prescriber to obtain opioids (“doctor shopping”), or make up stories about why they need early refills. Behavior that results from undertreated pain rather than from addiction is called pseudoaddiction (Weissman and Haddox, 1989). Some prescribers do not realize, for example, that giving 100 Percocet (each containing 5 mg oxycodone) for a month may seriously undertreat a patient with significant 24/7 pain. If in doubt, the prescriber can give the patient a week’s supply of their pain medication at a dose that the patient says has worked for them, then see the patient back in a few days – prescription bottle in hand – and see what happens. In a legitimate patient who has been undertreated, the aberrant behaviors will disappear once treatment is adequate. Other aberrant drug-related behaviors (Portenoy), such as selling prescription drugs or injecting oral or topical formulations, are huge red flags for addictive disorders.

Assessment for appropriateness of opioid therapy

Patient assessment for a chronic pain problem begins with a history of the pain problem, supplemented by old records of prior assessment and treatment. Let’s assume that a patient who comes to you for pain management has chronic back pain that has been evaluated and treated surgically. She has had several local injections with transient benefit. Obtain a history of the pain problem, treatments already tried, current medications, and previous medications tried for the pain. Ask about the patient’s life before the back pain began and how the back pain has affected her functioning. What is she able to do now? What are her goals in seeking pain management? Ask about other current and past medical problems, the patient’s job history and current employment, and whether or not she lives alone. Inquire about past or present use of cigarettes, alcohol, coffee, and illicit drugs. I phrase the latter as, “Have you had any experience with recreational drugs?” A prior addiction or abuse history does not rule out opioid use, but it requires caution. The goal of prescribing pain medications is to maximize the patient’s functioning, not to minimize the dose. With this in mind, the process consists of beginning with a low dose to minimize side effects, then titrating upwards until an effective dose is reached.
The initial dose and the particular drug depend on what opioid (if any) the patient is currently taking, what experience they’ve had with various opioids, and what attitudes they have about particular drugs. When patients obtain pain relief, they are likely to increase their level of activity, which in turn means a need for an increased dose of opioid. Once the patient’s level of functioning has stabilized, so does the maintenance dose of medication. In general, short-acting opioids should not be used as the mainstay of chronic pain treatment. They require repeated dosing during the day, keeping the patient focused on his or her pain; they produce up-and-down blood levels, which can result in periods of mood alteration alternating with increased pain; they do not last long enough at night to provide sustained sleep; and they are usually formulated in combination with acetaminophen (sometimes aspirin), which is toxic in high doses. Sustained-release opioids, on the other hand, provide smooth blood levels with sustained pain relief and allow better sleep at night. The plan is to maintain the patient on an effective dose of a long-acting opioid (methadone) or sustained-release preparation (morphine, oxycodone, oxymorphone, or transdermal fentanyl), and to supplement with a small quantity of an immediate-release preparation for breakthrough pain (hydrocodone in Vicodin, oxycodone in Percocet, etc.). Recognize that chronic pain is not uniform throughout the day or week. At times the patient may have increased pain because of increased physical activity, weather changes, end-of-dose failure, or increased anxiety or depression. (Extensive medical literature supports the finding that pain and depression each worsen the other; when both are present, both need to be treated.) The patient is told to take the sustained-release opioid on a timed basis, and the immediate-release preparation only as needed. Patients who take opioid analgesics need to be informed consumers. The practitioner’s responsibility is to educate patients about physical dependence, addiction, constipation, preventing diversion, etc. Patients need to understand what is expected of them. A written opioid agreement, signed by the patient, spells out the physician’s expectations. The patient agrees to assist in obtaining old medical records, to obtain opioids from only one prescriber, to get the prescription filled at only one pharmacy, to make no change in dosage without prior discussion with the physician, to obtain any consultations the physician recommends, not to use illegal drugs, and to agree to urine drug screens. The patient also gives the prescriber permission to discuss the patient with pharmacists and other relevant practitioners. The patient understands that early refills will not be given (except for a good reason). Part of appropriate assessment for opioid treatment is to determine the level of structure the patient needs. Anyone who has chronic pain deserves treatment, but some people need more structure than others. If a patient cannot reliably manage their own medications, a plan for someone else to manage them must be arranged. If a problem becomes evident in the course of treatment, the structure may need to be intensified.
Some examples from my practice where opioids were prescribed only when a family member agreed to hold and dispense the medications:
- a 75-year-old woman with dementia who couldn’t remember if she’d taken her medication
- a 20-year-old youth with bipolar illness who has episodes of hypomania during which he misuses medications, alcohol, etc.
- a 45-year-old man with a head injury who can’t remember things from day to day
Another situation in which a patient cannot be relied on to take his opioids responsibly is the person with an active drug addiction. The only way such a person can be considered for opioid management is if he or she is receiving ongoing treatment for the drug or alcohol addiction. A position paper of the American Academy of Pain Medicine and American Pain Society states, “Experience has shown that known addicts can benefit from the carefully supervised judicious use of opioids for the treatment of pain due to cancer, surgery, or recurrent painful illnesses.” (AAPM/APS, 1994) Patients with the two concurrent diseases of pain and addiction would benefit from referral to an addiction specialist. Patients with an addiction history will benefit from occasional urine drug screens and ongoing involvement in a recovery program such as AA or NA. Former addicts who have family and community support and who are involved in addiction recovery activities can do well with opioid treatment (Dunbar and Katz, 1996). Chronic pain patients need to be seen fairly often – I see stable patients once every two months, but more often initially or if something changes. At each visit the “4 As” (Passik and Weinreb, 2000) are assessed and documented, as is a fifth A, affect – how the patient feels.
- Analgesia – “On a scale of zero to ten, how much pain do you have today?”
- Activities of daily living – How often and how long do you walk the dog, etc.
- Adverse effects – How’s the constipation? Any sedation? Etc.
- Aberrant behaviors – Document that the patient wants an early refill because she’s going on vacation, or has more pain, etc. Anything out of the usual pattern.
An important difference between addicts and pain patients who are benefiting from opioid treatment is that drug use secondary to addiction tends to constrict the person’s life: they become increasingly focused on the drug while the rest of their life suffers. In contrast, appropriate pain treatment expands the person’s life and lets them function better in their daily life. Talk with patients about their original goals when they started treatment and how close they are to those goals. Opioids are the strongest available analgesics, and many patients can benefit from using them. Practitioners who prescribe opioids need to be knowledgeable about these drugs, to believe patients unless there is reason not to, and to strive for a balance between adequate pain treatment and prevention of misuse. An excellent guide to the rational use of opioids in treatment of chronic pain was recently published by Gourlay et al (2005). Guidelines for opioid prescribing can also be obtained from the following websites:
American Pain Society
Federation of State Medical Boards of the United States
Pain and Policy Studies Group, University of Wisconsin Comprehensive Cancer Center

References
- Woolf CJ, Salter MW. Neuronal plasticity: increasing the gain in pain. Science 2000;288:1765-1768.
- JCAHO. Pain assessment and management: An organizational approach. Oakbrook Terrace, IL: JCAHO, 2000.
- American Geriatrics Society Panel on Persistent Pain in Older Persons.
The management of persistent pain in older persons. Journal of the American Geriatrics Society 2002;50:S205-224.
- Daniell HW. Hypogonadism in men consuming sustained-action oral opioids. J Pain 2002;3:377-384.
- Rajagopal A, Vassilopoulou-Sellin R, Palmer JL et al. Hypogonadism and sexual dysfunction in male cancer survivors receiving chronic opioid therapy. J Pain Symptom Manage 2003;26:1055-1061.
- Galer BS, Coyle N, Pasternak GW, Portenoy RK. Individual variability in the response to different opioids: report of five cases. Pain 1992;49:87-91.
- Jamison RN, Schein JR, Vallow S et al. Neuropsychological effects of long-term opioid use in chronic pain patients. J Pain Symptom Manage 2003;26:913-921.
- Sabatowski R, Schwalen S, Rettig K et al. Driving ability under long-term treatment with transdermal fentanyl. J Pain Symptom Manage 2002;25:38-47.
- Fishbain DA, Cutler RG, Rosomoff HL, Rosomoff RJ. Are opioid-dependent/tolerant patients impaired in driving-related skills? A structured evidence-based review. J Pain Palliat Care Pharmacother 2002;16:9-28.
- Scimeca MM, Savage SR, Portenoy RK, Lowinson J. Treatment of pain in methadone-maintained patients. Mt Sinai Journal of Medicine 2000;67(5-6):412-422.
- Portenoy RK. Using opioids for chronic nonmalignant pain: current thinking. Internal Medicine 1996;17(suppl):S25-S31.
- Mercadante S, Ferrera P, Villari P. Hyperalgesia: an emerging iatrogenic syndrome. J Pain Symptom Manage 2003;26:769-775.
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 4th Edition. Washington, DC: APA, 1994.
- Portenoy RK. www.deadiversion.usdoj.gov/pubs/pressrel/newsrel_102301.pdf Accessed 9-18-06.
- Weissman DE, Haddox JD. Opioid pseudoaddiction – an iatrogenic syndrome. Pain 1989;36:363-366.
- AAPM/APS. The use of opioids for the treatment of chronic pain. Chicago, 1994.
- Dunbar SA, Katz NP. Chronic opioid therapy for nonmalignant pain in patients with a history of substance abuse: report of 20 cases. J Pain Symptom Manage 1996;11:163-171.
- Passik SD, Weinreb HJ. Managing chronic nonmalignant pain: overcoming obstacles to the use of opioids. Adv Ther 2000;17:70-83.
- Gourlay DL, Heit HA, Almahrezi A. Universal precautions in pain medicine: a rational approach to the treatment of chronic pain. Pain Medicine 2005;6:107-112.
How to read a medieval astronomical calendar

Updated: May 17, 2020

They're everywhere. Calendars with some – or lots of – astronomical content appear in medical almanacs, in Christian prayer books, psalters and books of hours. And, of course, they are a common feature of astronomy books. But what do all those columns of letters and numbers mean? This post explains how to make sense of all the information. Sometimes it's complex, sometimes surprisingly simple. We'll work through a particularly detailed example, seeing briefly what each part was for. At the end I'll give you some tips to help you figure out any calendar you might be struggling with. Do please comment with any questions! (We're focusing on the medieval western Christian calendar here, but we must be aware that Latin astronomy was built on pre-Christian roots, and was influenced by the work of Jewish and Muslim astronomers. Calendars took different forms in different cultures.) Here's our example - not much space wasted!

The core of the calendar

The contents of calendars varied widely, as each astronomer copied or computed the particular information that would be most useful. However, all calendars in Latin Christendom were built around a necessary core. The core of the calendar above is the five columns under the word Aprilis (April). Here they are in close-up. From left to right, they are:
1. The Golden Number (there are always gaps in this column)
2. The ferial letter (a.k.a. dominical letter)
3. The day number. This counts down to...
4. Nones (or Ides, or Kalends)
5. The name of a saint or other feast celebrated on a certain day
But wait a moment! Probably the most noticeable thing about this close-up picture is the big space in the middle. What should be there is a giant KL. It stands for Kalends, the first day of the month. Don't be surprised that it's missing! One important thing to note about calendars is that they were living objects, put together over time by users with changing needs. Sometimes bits got left out by scribes/illuminators in a hurry. Sometimes later users added to them, corrected them or customised them in some other way. Here's another example. Our eye is immediately drawn to the striking gold KL marking the first day of April. Underneath it you have the same five columns, but this calendar, from the 1270s, uses Roman numerals. (There are Arabic numerals on the left - I explain them in this post. But they are a later addition - more customisation!) What you won't see here, and don't find very often in medieval calendars, is a column of days numbered 1 to 30. Instead, the month was split into three unequal chunks. (The system was inherited from the Romans, based on the calendar reformed by Julius Caesar, which is why we call it the Julian Calendar.) In the middle of the month was the Ides, on either the 13th or 15th day. (You can find a month-by-month list in this post, which explains the cultural context of these calendars.) The days after Ides count down to the next Kalends. The last day of April was 2 Kalends May, i.e. the second day – counting inclusively – before the Kalends. The day before Ides was 2 Ides, the day before that was 3 Ides... and so on up to 9 Ides, which was called Nones, from the Latin word for "nine". Since that's 9 days before Ides, it was always on the 5th or 7th of the month. From there back to Kalends, the first day of the month, it's just a few more days to count backwards.
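If it helps to see that countdown as an algorithm, here is a minimal sketch in Python. It isn't from any medieval source: the function name roman_day is my own invention, and it assumes the standard rule that the Ides falls on the 15th in March, May, July and October and on the 13th in every other month, with the Nones always eight days earlier.

```python
import calendar

def roman_day(month: int, day: int, year: int = 2020) -> str:
    """Express a modern month/day as a Julian-calendar countdown."""
    ides = 15 if month in (3, 5, 7, 10) else 13   # March, May, July, October
    nones = ides - 8                              # always the 5th or the 7th
    last = calendar.monthrange(year, month)[1]    # days in this month
    if day == 1:
        return "Kalends"
    if day < nones:
        return f"{nones - day + 1} Nones"         # inclusive countdown
    if day == nones:
        return "Nones"
    if day < ides:
        return f"{ides - day + 1} Ides"
    if day == ides:
        return "Ides"
    return f"{last - day + 2} Kalends (of the next month)"

print(roman_day(4, 30))   # '2 Kalends (of the next month)', as above
print(roman_day(4, 2))    # '4 Nones'
```

The + 2 in the last line is the inclusive counting at work: both the day itself and the Kalends it counts towards are included.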
That accounts for the countdown of numbers, and the fat capital N (for Nones), Id' (Ides) and Kl (Kalends) just to the right. To the left of that we have a column of letters A to G known as the ferial letters. In calendars designed to last for many years, these are a clever way of keeping track of the day of the week. In any year you would only have to know what letter Sunday was (the dominical letter). If you know that all the Sundays are C that year, then all the Mondays will be D, all the Tuesdays E, and so on. (You just have to remember to shift them if it's a leap year.) The column to the right of the repeating N, Id' and Kl is usually the widest: it's the feast days. The list in the Coldingham Breviary above is fairly sparse. In this April page, from the celebrated Très Riches Heures (c.1412-16) of Jean, Duke of Berry, there's something for almost every day. In fact the only reason there are any gaps at all is because the scribe didn't quite get around to filling in the REALLY important feasts (St George and Mark the Evangelist) in gold. That's why the letter A's in the column of ferial letters are also missing. Now, in the calendar at the top of this post we have a narrow, and curious, column of feast days. It goes like this, with one syllable for each day: Jun-gi-tur Am-bro-si-us A-pri-li Guth-si-bi Ti-bur Ac-ci-pit Al-ple-gum-que Ge-or Mar-cum-que Vi-ta-lem. Thirty syllables for the 30 days of April. This is a mnemonic – part of a popular scheme named Cisio Janus (which are the first two words of the verse for January). It was a popular way of learning the order of the major feast days, from Ambrose on the 4th to Vitalis on the 28th. It was easily adapted to locally specific feasts – you'll notice, for example, that Guthlac, who lived in the kingdom of Mercia around 700, does not appear on the Très Riches Heures calendar above, since he was not venerated outside England. The final part of the core is the Golden Number. This is a column with numbers 1 to 19 written next to certain letters. They may appear to be randomly distributed, but look closely and you'll see that 16 is eleven days before 15, which is eleven days before 14. And 5 is eleven days before 4, which is eleven days before 3. There are some irregularities, but this is the basic pattern. It tells you the official date of the new moon in any year. It works because 12 lunar months of 29½ days add up to 354 days, which is 11 days shorter than a normal solar year. Thus April's new moon next year will be (at least officially) 11 days earlier than it was this year. The cycle repeats itself after 19 years, because 19 solar years = 235 lunar months. All you have to know is the Golden Number for this year. That was vitally important because the date of Easter, and all the movable feasts preceding and following it, depend on the date of the late March/early April full moon, on the 15th day after the new moon. To work out the Golden Number, just divide the year by 19, and add 1 to the remainder. For example, 1392 was golden number 6, because 1392 ÷ 19 = 73 remainder 5, and 5 + 1 = 6. Of course, Easter itself has to fall on a Sunday, and that's where it helps to know your dominical letter. The system wasn't perfect, and attempts were made to improve it throughout the Middle Ages. We can see one attempted update in the Très Riches Heures calendar above: a column on the far right headed "new golden number", with its numbers three or four days earlier than the standard list on the far left of the page.
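That divide-by-19 rule translates directly into code. A minimal sketch (the function name golden_number is mine, not the post's):

```python
def golden_number(year: int) -> int:
    # 19 solar years = 235 lunar months, so the official pattern of new
    # moons repeats every 19 years; the remainder locates the year within
    # that cycle, and 1 is added because the cycle counts from 1, not 0.
    return year % 19 + 1

print(golden_number(1392))   # 6, matching the worked example above
print(golden_number(1387))   # 1 -- year 1 of John Somer's cycles, see below
```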
That "new golden number" proposal was drawn up by two French astronomers in 1345. Such reform proposals appeared regularly from the creative astronomers of the later Middle Ages; they culminated, of course, in the sixteenth-century Gregorian calendar reform.

Astronomy and geometry

All the above was very standard. Here's where it gets a bit more complicated, and more variable. Let's take a closer look at this calendar. Here it is again (and click here to see it online in high resolution): Medieval astronomers had their own opinions about what they thought would be useful in a calendar. They also liked to tweak and improve earlier calendars, so the format rarely stayed static for long. This calendar contains a version of the Kalendarium drawn up by a Franciscan friar named John Somer in 1380. It was valid for four 19-year cycles, from 1387 to 1462. But there is also some bonus information Somer didn't include – not least that Cisio Janus mnemonic. Here is what each column does:

6. Ascendens media nocte. The ascendant at midnight. This tells us what degree of the ecliptic is rising above the horizon at midnight. (The ecliptic is the Sun's yearly path through the stars, passing the well-known zodiac constellations.) So let's say it's 2 April – 4 Nones in the calendar. If the Sun, which is then at the 22nd degree of Aries, is directly underneath us at midnight, the sign of Sagittarius – 19° of Sagittarius, to be precise – will be rising above the horizon. The data is given as degrees within each zodiac sign. The sign is noted at the top of the column, as well as where it changes – you'll see it switches from Sagittarius to Capricorn halfway through the month. Note that this depends on the angle between your horizon and the axis of celestial rotation – or the Earth's axis, if you want to be all heliocentric about it. That angle is equivalent to your latitude, which is why above column 19 the scribe has noted that we are at the latitude of the University of Oxford (assumed to be 51° 50' in the Middle Ages).

7. Medietatis [sic] noctis. Half the length of the night. Since the time from midnight to sunrise is one half of the night, this gives you the time of sunrise on any day. It is presented in a column of minutes, with each new hour marked in red. (This was a very common way to save space.) So you can see that from 3 Ides April to 2 Id' – that is, from the 11th to the 12th – the time of sunrise goes from 5h 0m to 4h 58m. Yes, that Y-shaped number is a 5 (not a 4!), the X with a closed top is a 4, and the upside-down V is a 7. You can see in the left-hand column here that the numbers go 24, 25, 25, 26, 27.

8. Quantitas lure planetarum nocturne. Length of a planetary hour (a.k.a. "unequal hour") at night. (The word ure here, rather than the Latin hora, comes from Norman French.) According to the system of unequal hours, there were always 12 hours between sunrise and sunset, and another 12 between sunset and the next sunrise. Thus in the summer each daytime hour was long and each nighttime hour was short, and vice versa in winter. I don't have space in this post to consider the history of this system, but suffice it to say that it made a lot of sense when lives were shaped by the changing seasons and availability of daylight. Here the hour-lengths are given in degrees and minutes, rather than hours and minutes.
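Those column 8 values can be reproduced with a little arithmetic. Here's a rough sketch of my own (not John Somer's actual procedure), assuming only what we've already seen: column 7 holds half the length of the night, each half of the day is divided into 12 unequal hours, and hours are converted to degrees at 15° per equal hour.

```python
def planetary_hours(half_night_h: int, half_night_m: int):
    """Length of one daytime and one nighttime unequal hour, in degrees."""
    night = 2 * (half_night_h + half_night_m / 60)  # whole night, in equal hours
    day = 24 - night
    deg_day = day * 15 / 12      # 12 unequal hours per daytime; 15 deg per hour
    deg_night = night * 15 / 12  # likewise for the night
    return deg_day, deg_night

# Sunrise at 5h 0m (column 7) means a 10-hour night and a 14-hour day:
day_deg, night_deg = planetary_hours(5, 0)
print(night_deg, day_deg)    # 12.5 and 17.5, i.e. 12 deg 30' and 17 deg 30'
print(day_deg + night_deg)   # always 30.0
```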
One equal hour corresponds to 15 degrees (since 360 ÷ 24 = 15), and in the same way if you add one daytime and one nighttime planetary hour you will always get 30 degrees, because they always balance each other out.

9. Medium celi ad instans ortus solis. Midheaven at the moment of sunrise. This gives the degree of the ecliptic culminating [at its highest point, in the south] when the Sun rises. Conceptually it's very similar to column 6. Just like column 6, the data is given as degrees within a zodiac sign. This one is in Capricorn for the whole month.

10. Verus locus solis. True place of the Sun. This gives the Sun's longitude on the ecliptic. Do you know your star sign? Your star sign is the 30-degree segment of longitude that the Sun was in on the day of your birth. The Sun travels round the entire zodiac in the course of a year – 360 degrees in 365 days, so a little less than a degree a day. Solar longitude was an absolutely fundamental piece of astronomical information, which is why almost all astrolabes have a calendar on the back to find the Sun's position for any day of the year. It's also fundamental to timekeeping. Again, the data is given in degrees within each zodiac sign, and here we see the Sun move from Aries into Taurus on the 12th of April.

11. (2 columns) Ascendens signorum in circulo directo cuiusmodi est meridianus in omni regione. The ascension of signs on the direct circle, for any latitude or longitude. This is the right ascension: an arc of the celestial equator. It's measured from the equinox to another point on the equator corresponding to a certain longitude. In this case it's the Sun's longitude, in the previous column. It's laid out with separate columns of degrees and minutes. This is really useful for all kinds of astronomical calculations, which is why a standalone table of right ascensions is a common feature of almanacs. In fact it's unusual to find it in a calendar, as its values aren't really tied to days of the year – John Somer didn't include it in the calendar that this one is based on. But it was so important that it's not surprising another astronomer chose to include it here.

Now we come to a set of columns with rather different information, and with writing in three different directions! At the top it says "mas. d'. igneum orientale. Sol. Saturnus. Jupiter." At the bottom we have corresponding information: "Venus. Mars. Luna. Fe. Nm. terreum. meridionale." What these abbreviated notes tell us are the astrological qualities of the triplicities for the two signs this month. As we've seen, at the start of April the Sun is in Aries, and by the end it is in Taurus. The triplicities were groups of three signs which share certain qualities. Aries, like Leo and Sagittarius, was thought to be masculine, diurnal, fiery and eastern. Its lords by day were the Sun, Saturn and Jupiter. Taurus (and Virgo and Capricorn) were feminine, nocturnal, earthy and southern. Their lords were Venus, Mars and the Moon. These theories have ancient roots. In late medieval Europe, astronomers were hugely influenced by the encyclopedic astrological works of Muslim thinkers. Perhaps the most significant was Abu Ma‘shar, whom Latin astronomers knew as Albumasar. Another – and the probable source for the information in this calendar – was al-Qabisi, known as Alcabitius. Columns 12, 13, 14, 15 and 16 have astrological data based on al-Qabisi's theories:

12. The domus – house or domicile – was the planet strongest in each zodiac sign.
This was Mars in Aries, and Venus in Taurus (as we see halfway through the month). But this column also shows the exaltations and dejections (or fallings): individual degrees where certain planets were particularly strong or weak. Here we learn, for instance, that Saturn has a dejection at Aries 21°, and the Moon has an exaltation at Taurus 3°. We also see that Taurus is the odium – that is, the detriment – of Mars. As one adaptation of al-Qabisi, written in English at the end of the 14th century, explained: "And there is difference bitwixe fallinge and descendinge and bitwixe detriment or harmynge; for detriment is in the opposite of the hous, fallinge sothly in opposite of exaltation." (Cambridge, Trinity College MS O.5.26, f. 4r) Thus, because Scorpio was a house of Mars, so Taurus, opposite it, was its detriment.

13. Each sign had five termini – terms. These were segments of anything between 2 and 10 degrees, each dedicated to a particular planet.

14. A facies – face – was a ten-degree segment, based on the decans of Egyptian astrology. Like the planetary hours, these cycled in a strict order, inwards from Saturn to the Moon, before beginning again at Saturn, the outermost planet. Thus, as we see here, the three faces for the end of Aries and the beginning of Taurus were Venus, Mercury, the Moon.

15. Each sign was also divided into masculine and feminine segments. So, as we see here, the sex of the last 8 degrees of Aries and first 8 of Taurus was masculine, followed by 7 feminine degrees, and so on...

16. Finally, as the English translator of al-Qabisi explained, there are the qualities: "In evereche of these signes there be degrees that beth seide [are called] lucidi or bright, and degrees that beth seide tenebrosi or darke, and degrees that beth seide fumosi or fumous and degrees that beth cleped [called] vacui or voide." (MS O.5.26, f. 7v)

17. There's one more column that goes with the five we've just mentioned. We find it in the half-empty column next to the narrow mnemonic of saints' days. Here some degrees are labelled as degrees of diminished fortune or "pits" (puteus), degrees of chronic illness (azamana or azemena), and degrees of increasing fortune (augmentans fortunam). For all of these different features of particular segments or signs or individual degrees, al-Qabisi drew up a table, making them easy to locate. Here's what it looks like in the Middle English version produced around the same time as our calendar. John Somer's Kalendarium doesn't give us any of this information, and his contemporary Nicholas of Lynn only included some of it. The maker of this copy managed to cram in an awesome quantity of astrological data.

Some more astronomy

We're going to skip across a little now, and look at the seven columns on the left-hand side of the right-hand page (the recto). You already know most of what you need to know to understand these.

18. Altitudo solis meridiana. The Sun's meridian altitude – its height above the horizon at midday each day. Notice that degrees (black) and minutes (red) are given in the same column.

19. (2 columns) Ascensiones signorum ad latitudinem universitatis Oxonie. Ascensions of signs at the latitude of the University of Oxford. These oblique ascensions, as they're known, are the counterpart to the right ascensions in column 11. In the diagram above, the oblique ascension is ET – that is, the arc of the equator rising above the horizon with a given arc of the ecliptic.
Since the turning of the celestial equator – 360 degrees in 24 hours – is how we measure time, the oblique ascensions tell us the time it takes for a particular sign, or the Sun itself on the ecliptic, to rise. This is very useful information, but it is specific to each latitude. Like the right ascensions, it wasn't usually included in calendars, and was more commonly laid out in a separate table.

20. Ascendens in media die ibidem. The ascendant at midday. Just as column 6 tells us the ascendant at midnight, this tells us what degree of the ecliptic will be rising above the horizon at local noon each day.

21. Medietatis diei. Half the length of the day. Just as column 7 gave us the time of sunrise, this gives us the time of sunset. And of course the two columns always add up to 12 hours – in the first line of the table it's 5h18m + 6h42m.

22. Quantitas lure planetarum diurne. Length of a daytime unequal hour, in degrees and minutes. This is the exact counterpart of column 8. And, as I mentioned above, if you add what's in the two columns for any line, you always get 30 degrees: 13° 14' + 16° 46' in the first line.

23. Medium celi ad instans occasus solis. Just as column 9 gave us midheaven at the moment of sunrise, this is midheaven at the moment of sunset.

Conjunctions and eclipses

Nearly there now! The rest of the calendar is taken up with what is the most famous component of astronomical calendars: the dates and times of conjunctions, or new moons, and details of the eclipses that sometimes occurred at conjunction or opposition (full moon). This is the result of the increasing emphasis on the Moon in 13th- and 14th-century medicine, as well as weather forecasting and agriculture. By the way, the single word that means conjunction or opposition is "syzygy". You don't need to know that, but it might come in useful in Scrabble one day. Whereas all the information we've covered so far is basically permanent – that is, it only changes very slowly, or not at all – the information in columns 24 to 27 is only good for the specific years the calendar was made for. This one covers four 19-year cycles, from 1387 to 1462. The Franciscan friar John Somer drew it up in 1380, at the request of Joan, mother of King Richard II. Why did he pick 1387 to start it from? Because that was year 1 in a cycle of golden numbers. So, in the four columns, the first covers 1387 to 1405, the second is 1406-24, the third is 1425-43, and the fourth is 1444-62. At the top, under the heading Coniunctiones equate super meridiem Oxonie (conjunctions calculated on the meridian of Oxford), we have cycles 1 to 4. Each one has three columns: ciclus (which here, confusingly, means the year within that cycle), hora (hour) and minuta (minute). These are true syzygies – using the true rather than mean longitudes of the Sun and Moon – an important development in the 14th century. Let that sink in for a moment. John Somer has given us every new moon for 76 years, calculated to the nearest minute. To take one example, we know that 1387 was year 1 of the golden number cycle. So to find the day and time of the new moon in April 1387, we scan down the first column until we find the "1" – there it is on 13 KL May, the 19th of April. The time of new moon was 04:03. Read across and you'll see that 19 years later it was at 04:40 on the same day. One effect of such precision is to show off just how out-of-date the original golden numbers had become.
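(Before we come to that: the table lookup itself is easy to mimic. A hypothetical helper of my own, not anything in Somer, that turns a year into the right cycle-column and year-row of the conjunction tables:

```python
def somer_lookup(year: int):
    """Which cycle-column (1-4) and year-row (1-19) to read for a given year."""
    if not 1387 <= year <= 1462:
        raise ValueError("outside the calendar's span of 1387-1462")
    offset = year - 1387
    cycle = offset // 19 + 1         # columns: 1387-1405, 1406-24, 1425-43, 1444-62
    year_in_cycle = offset % 19 + 1  # equals the golden number, since 1387 is year 1
    return cycle, year_in_cycle

print(somer_lookup(1387))   # (1, 1): April's new moon on the 19th at 04:03
print(somer_lookup(1406))   # (2, 1): 19 years on, 04:40 on the same day
```

Because Somer started from a year with golden number 1, the row you scan for is simply the golden number of the year you're after.)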
And indeed, you only have to look back across the calendar to see that golden number 1, which is supposed to predict the date of the new moon, is not on the 19th at all - it's two days later. It's easy to see how such precise calendars drew attention to the deficiencies of the ecclesiastical calendar. John Somer himself wrote a tract, now alas lost, entitled Castigations of the Calendar. The final component of the calendar was even more advanced. Columns 28 and 29 give the precise timings and magnitudes of each solar and lunar eclipse: solar on the left, lunar on the right. As an example, let's take the solar eclipse of 1436, in the bottom right-hand corner of the left-hand (verso) page. Here it is close-up. What it tells us is that in year 12 of the cycle, on the 16th day of the month, the eclipse would begin at 5:33:44 (yes, that's to the nearest second). The time of greatest eclipse would be at 5:48:09, and the end of the eclipse would be at 6:02:33. Over in the far-left margin we see a little drawing with the numbers 0;16,56. Those represent the proportion of the Sun that would be eclipsed, as a fraction of 12. The number above is a bit less than 17 sixtieths of one-twelfth - that is, only a tiny sliver of the Sun would be eclipsed. In other calendars we sometimes find this information displayed more graphically. Here it is in a 15th-century copy of John Somer's calendar, in a physician's folding almanac. The caption above, in English, gives us the date and time of the eclipse. The numbers around the picture, starting from the upper-left quadrant, denote: the magnitude of the eclipse (notice that the copyist has written 9;16,56 instead of 0;16,56 – we all make mistakes); the time elapsed from the start to the moment of greatest eclipse; half the duration of total eclipse – in this case, zero; and the total duration from start to finish – here 0:28:49, which you will see is the difference between 5:33:44 and 6:02:33. And that's it! Pretty impressive, eh?

Tips and tricks

If all that seems a bit overwhelming, there are some ways to make it easier to figure out a calendar. First of all, it's pretty rare to have so much information on two pages! The core information I started with is the most common. So:
- If you have a column with numbers 1 to 19 and some gaps, those will be the golden numbers.
- A column of letters A to G will be the ferial/dominical letters.
- Pictures of circles with segments cut out of them often indicate eclipses, and the numbers next to them represent the magnitude of those eclipses.
Be careful! The noun meridies and adjective meridionalis can refer to a geographical longitude, to the direction south, or to something crossing the meridian (when it is in the south). Likewise, try not to get confused between media nocte (midnight), medietas noctis (half the length of the night), and medium celi (midheaven, i.e. the meridian). An astrolabe can be useful to check the calendar data, if you know how to use one (there are a few simulators online). But don't forget that compilers and copyists sometimes made mistakes, just as we do! Good luck! Don't forget to comment if you have any questions, or things you'd like to see in future posts. You can read a bit more about such calendars, and lots of other subjects in medieval science, in my forthcoming book, The Light Ages. All the manuscripts pictured above are available to view online. See if you can figure them out!
Cambridge, Trinity College MS O.5.26: Middle English al-Qabisi
London, British Library Harley MS 321: calendar by John Somer and friend(s)
London, British Library Harley MS 937: physician's folding almanac
London, British Library Harley MS 4664: Coldingham Breviary

Some important books and articles on these calendars, especially in the context of 14th-century England:
Sigmund Eisner (ed.), The Kalendarium of Nicholas of Lynn (Athens, GA: University of Georgia Press, 1980)
Linne R. Mooney (ed.), The Kalendarium of John Somer (Athens, GA: University of Georgia Press, 1998)
John D. North, Chaucer's Universe (Oxford: Clarendon Press, 1988)
C. Philipp E. Nothaft, Scandalous Error: Calendar Reform and Calendrical Astronomy in Medieval Europe (Oxford: Oxford University Press, 2018)
C. Philipp E. Nothaft, “The Astronomical Data in the Très Riches Heures and Their Fourteenth-Century Source,” Journal for the History of Astronomy 46 (2015): 113–29.
Interesting Facts About San Francisco's Colorful Past

I grew up playing and riding my bike with friends in San Francisco's Golden Gate Park. It never occurred to me then to wonder how long the park had been there, or how it came to be in the first place. As an adult, however, I became curious about the park's beginnings. It's a fascinating story! Shortly after 1866, the people of San Francisco began clamoring for a large public park. It was at about that time that New York had begun work on Central Park, and Boston was planning its beautiful Public Garden. In order to keep up the city's civic pride, San Francisco decided that it, too, should have a park of its own. The first problem the city faced was where to place the new park, especially when it was discovered that the best land had already been appropriated. In 1868, the mayor ordered the board of supervisors to make a survey of available sites. They recommended certain areas west of Divisadero Street toward the beach -- a total of 1,013 acres known as the Outside Lands. Unfortunately, there had long been dispute over ownership of this land, which had been part of the pueblo of San Francisco ceded to the government under the old Spanish and Mexican grants. In the late 1860s, the land was held by squatters and outlaws, who were in no hurry to vacate. After a great deal of bickering, the city finally raised enough money to purchase the land for $810,595. Ownership of this property, however, was only the beginning of the problems the city would have to face in order to build its new park. In 1871, the area was just an arid, windswept tract of shifting sand dunes. Many experts believed that trees could not be grown on the site, which had less water and more sand than any other section of the city. Still, while San Franciscans jeered, the grounds were surveyed and the landscape plan designed by William Hammond Hall, who also undertook the first control of the sand. Small boys were hired to go out into the hills and collect seed from the wild lupine, and this, together with beach grass, was sown to provide anchorage. Whenever a plant took root, seedlings of cypress, pine and gum trees were planted on the spot. Meanwhile, thousands of tons of soil were being hauled from nearby hills and mixed with the sand to give it ballast. Humus and peat, straw, grass cuttings and tons of manure were spaded into it. Little by little, fertile soil was made and the tawny dunes began to take on tints of green. It was a true miracle of sorts; grass and shrubs and trees grew where none had ever grown before. And at last there came a day when San Franciscans, looking out to sea across the onetime rolling sand, were able to gaze upon hundreds of acres of lush and verdant park land. No discussion of Golden Gate Park would be complete without mentioning the park's presiding genius, John McLaren. From 1890 to his death in 1943, McLaren dedicated his life to creating a recreation ground of beauty and utility. It is largely because of this man that San Francisco's great park was preserved, nurtured and cultivated into the lovely oasis that it remains to this day. Sources: "Suddenly San Francisco, The Early Years" by Charles Lockwood, "Golden Gate, The Park of a Thousand Vistas" by Katherine Wilson, "San Francisco Secrets" by John Snyder, and "San Francisco Almanac" by Gladys Hansen.
The first of the four Cliff Houses to occupy the northwest tip of San Francisco -- at the entrance to the Golden Gate where the land ends and the Pacific crashes against the cliffs and shore -- was erected in 1863 by Charles Butler, a local real estate man. Not only did it afford a spectacular view of the Pacific Ocean, but its many visitors were greatly entertained by the antics of thousands of sea lions, otters and the famous seals for which the massive sandstone cliffs -- once a part of the mainland -- were named. Before the first Europeans arrived in the seventeen hundreds, Ocean Beach was part of one vast sand dune with not a tree in sight. In 1857, Harper's Weekly had this to say about Ocean Beach: "The voyager is impressed with the gloomy appearance of the scene before him; a multitude of low, black sand hills are partially visible over which continually sweep, like disturbed spirits, flying clouds of dense mist. Passing gradually into the strait, the scene constantly increases in interest. The surrounding hills assume a more positive form; the islands become bold and rocky, and in some parts precipitous, swelling at times into towering mountains. The strong winds and heavy fog which constantly assail the land, prevents trees and luxuriant vegetation." Despite the fog and frequent winds, this first Cliff House had everything necessary to ensure its success. Except easy access. Located at what was popularly referred to as Lands End, Ocean Beach was so far out of the city, and so difficult to reach, that it wasn't until a toll road was finished in 1864 that visitors could finally travel there in relative comfort and a great deal less time. In 1866, the proprietor of the first Cliff House was Capt. Junius G. Foster, a jovial, innovative innkeeper. People flocked in from San Francisco for good food and drink, horse racing and other recreation. The mile-and-a-quarter-long "speedway" (one of the final sections of the toll road) was constantly rolled to keep it smooth and watered to hold down the dust. Such famous men as Senator George Hearst, Leland Stanford, and Charles Crocker regularly raced their trotters on this improvised speedway. In 1868, Captain Foster tripled the size of the building by adding two wings and a long balcony to the original structure, thus making it what is now referred to as the "second" Cliff House, and providing overnight lodging for his guests. It became one of the premier attractions for all the first families of the city. But in the late 1870s, the guests began to complain that the outings to Lands End weren't as much fun as they had once been. The reason for this wasn't difficult to find: the Cliff House was now crawling with tourists. On nice afternoons, it wasn't unusual to see 1,200 teams hitched in front of the buildings. As the genteel clientele disappeared, the Cliff House began attracting more moneyed gamblers, politicians, and lobbyists, along with their assorted collection of lady friends. In the early 1880s, Adolph Sutro -- a quiet and scholarly German who made his fortune by solving the drainage and ventilation problems at the Comstock Lode -- bought the Cliff House and much of the surrounding land. In fact, at one time Sutro owned one-twelfth of the city of San Francisco! He went on to build a vast mansion, a conservatory, a park, and the largest indoor public bath complex in the world. When the "second" Cliff House burned to the ground in 1894, Adolph Sutro rebuilt, but this time on a much grander scale.
(Incidentally, this is the Cliff House pictured on the jacket of THE CLIFF HOUSE STRANGLER -- chosen, even if fifteen years ahead of its time, because it is so much more dramatic and recognizable than the actual building which stood there at that time.) This new structure was so ornate that it quickly became known as the "Gingerbread Palace". It was a grandiose and eye-catching edifice, and went on to host many of the celebrities and luminaries of the day, such as Sarah Bernhardt, Adelina Patti, and Presidents Hayes, Grant, Teddy Roosevelt and Taft. In his quest to attract more working-class families to the Cliff House, Sutro discontinued offering hotel services, and the establishment became a popular venue for dining, receptions, private lunches, galleries, gift shops and exhibits. Adolph Sutro died in 1898, and thus did not live long enough to see his beloved Cliff House bravely withstand the ravages of the 1906 earthquake and fire. Unfortunately, its good fortune was short-lived: on September 7, 1907, the Gingerbread Palace Cliff House burned completely down to the rocks. Sutro's daughter, Dr. Emma Merritt, erected the "fourth" Cliff House, but rather than creating another elaborate structure, she opted to build one of concrete and steel that would blend in with its surroundings. This Cliff House opened its doors on July 1, 1909. After the unique and expansive Sutro Baths burned down in 1966, part of its contents -- the Musee Mecanique -- moved into the Cliff House, where it still remains. The Cliff House closed once more in 1969, but reopened again in 1973 with restaurants, bars and shops. In 1977, the Golden Gate National Parks Association became the owner of the property for $3,791,000. To this day, the San Francisco Cliff House remains one of the city's most beloved and exciting landmarks, attracting millions of visitors every year from all over the world. Sources: "San Francisciana: Photographs of the Cliff House," by Marilyn Blaisdell; "San Francisco's Ocean Beach," by Kathleen Manning and Jim Dickson, Arcadia Books; "Suddenly San Francisco: The Early Years of an Instant City," by Charles Lockwood, A California Living Book.

Fifty Raucous Years Of Villainy!

The history of the Barbary Coast – named after the pirate-infested North African coastline – began with the 1849 discovery of gold at Sutter's Fort in California's Sacramento Valley. It was not until sometime in the 1860s, however, that the region bounded by Broadway, Embarcadero, Grant, and Washington came into its own, rapidly becoming notorious for its saloons, bordellos, gambling houses and crime. San Francisco's citizens knew it to be the most dangerous district in the city. It was said that within the Barbary Coast confines, more than three thousand licenses to serve liquor were issued in one year – according to these statistics, there existed a saloon here for every ninety-five persons in the entire city! Among the various gangs of toughs on the waterfront were the Barbary Coast Rangers, who terrorized shops, robbed harlots and fell upon unwary citizens who foolishly ventured into the sailortown streets at night. By day it was crowded with people of the sea; at night criminals worked from its shadows! To San Francisco goes the dubious honor of coining the word "shanghai" – the process of delivering drunk, drugged or, if all else failed, beaten sailors to crew ships waiting in the harbor. Men known as "crimps" paid "runners" from three to five dollars for each man they brought to the crimp's saloon or boarding establishment.
There, the sailors were given drinks "on the house", then beguiled by heavily painted ladies who watched for an opportunity to slip a few drops of laudanum into the man's whiskey. If a sailor became cantankerous, he was promptly silenced by a blow to the back of his head. Once he'd been rendered unconscious – and rolled for his money and valuables – he was unceremoniously dropped through trapdoors, where more men waited to deliver him to the wet pine deck of an outward-bound long-voyage ship. Many a sailor's stay in port ended up being as short as an hour, depending on the demand for ship hands. Prostitution was unarguably the Coast's primary source of income. However, profits from shanghaiing were tallied in the millions of dollars, and murders were numbered in the thousands. Almost everyone living within the district was either a harlot, killer, crooked politician, pimp, thief, runner, crimp, or fugitive. Some historians claim that throughout its fifty-some years of existence, the Barbary Coast saw every crime known to man. Most San Franciscans abhorred the depravity, crimes and excesses of the Coast. However, periodic attempts at reform met with little success, partly due to protection from crooked officials who received payoffs or shared in the profits from various saloons, brothels, boarding houses and other establishments. The Barbary Coast began its decline after the 1906 earthquake. When rebuilding started, many people protested that this area was too corrupt and lowbrow for an increasingly important city. The final blow came in 1914 when the Red Light Abatement Act – designed to close the many houses of prostitution – was passed by the state legislature. When the new law was upheld by the California Supreme Court in 1917, the area's main attraction and source of revenue was cut off, and the once raucous Barbary Coast faded into history. Sources: "Historic San Francisco" by Rand Richards, "Golden Gate" by Felix Riesenberg, Jr., "San Francisco Almanac" by Gladys Hansen, and "The Great San Francisco" by Janet Bailey.

The man famously known in early San Francisco as His Imperial Majesty Emperor Norton I was born in London, England, sometime between 1814 and 1819 (the exact date remains unclear). There was little to suggest, however, that the ambitious young Joshua A. Norton, who arrived in San Francisco in 1849, would go on to become one of the most colorful and beloved figures ever to take up residence in the rapidly expanding City by the Bay! Stepping off the boat from England, Joshua Norton carried with him the majestic sum of $40,000, inherited from his father, who had died the previous year. The younger Norton enjoyed a highly successful career in the real estate market and as a merchant, so much so that by 1853 he had accumulated a fortune worth some $250,000. In an attempt to make a financial killing by cornering the market on rice imported from Peru, he bought an entire shipload, only to watch helplessly as tons of rice glutted the market, causing the price of rice to plummet. After years of litigation between Norton and his financial partners, the Supreme Court eventually dealt Norton a bitter defeat, leaving the young Englishman penniless. Norton declared bankruptcy in 1858 and left San Francisco for parts unknown. When Norton returned to San Francisco from his self-imposed exile, he came back a changed man, acting decidedly odd and exhibiting delusions of grandeur.
No one knew or remembered who he was or where he had come from, but his faultless attire and regal airs led to intense speculation and surmise. From beneath his polished silk hat fell a thick mane of black hair, and he walked with a curious flourish suggestive of the theater, his gold-headed cane held behind him. Supposedly, Norton spoke to no one, preserving the aura of mystery as he went his way unconcerned by the staring public. Every afternoon he would promenade down Montgomery Street absorbed in his own thoughts. Then on September 17, 1859, the mysterious stranger distributed letters to various newspapers in the city, proclaiming himself to be Joshua A. Norton I, Emperor of these United States. Occasionally thereafter, he would add "Protector of Mexico" to this title, in the belief that America's neighbors to the south were in dire need of leadership and guidance. With these startling declarations, Norton began his unprecedented and remarkable 21-year "reign" over America. It is not known how the citizens of San Francisco initially felt about their new monarch, but they apparently soon got used to him, for he was often seen walking the streets of the city, dressed in his regal -- although frequently a bit worn -- alternating blue and grey uniform, to show his support for both the Union and the Confederacy, his beaver hat with its colored feathers, his saber at his side and gnarled cane and wiry umbrella in hand. When his uniform was worn out, the San Francisco Board of Supervisors, with a great deal of ceremony, presented him with another, for which he sent them a note of thanks and a patent of nobility in perpetuity for each supervisor. As Norton I, Emperor of the United States of America, he lived at a boarding house on Commercial Street, and was registered as "Emperor, living at 624 Commercial St." in a census done August 1, 1870. He resided there for seventeen years, insisting on paying his rent by the day instead of by the week. He was fed for free by some of San Francisco's finest restaurants, which he graciously allowed to put up signs that said: "By Appointment to His Emperor, Joshua Norton I." He had a standing ticket, together with his two dogs, Bummer and Lazarus, at any play or concert in the city's theatres. He was given a bicycle by the city as his means of royal transport, he was allowed to review the police to check that they performed their duty, and a special chair was reserved for him at each precinct. He marched at the head of the annual police parade and reviewed the cadets at the University of California. In order to pay his bills he issued paper notes, mostly in 50-cent denominations, though some $5 and $10 notes exist. Today they are worth far more than their face value (if they can be found). In accordance with his self-appointed role of emperor, Norton issued numerous decrees on matters of state. Deeming that he had assumed power, he saw no further need for a legislature, and on October 12, 1859, he issued a decree that formally "dissolved" the United States Congress. Not surprisingly, Norton's "orders" had no effect on Congress, which continued its activities unperturbed. Norton issued further "decrees" in 1860 that purported to dissolve the republic and to forbid the assembly of any members of the Congress. These, like all of Norton's decrees, passed unnoticed by the government in Washington, and by the nation at large.
Refusing to be discouraged, Norton persisted in his battle against the elected leaders of America throughout his "reign", though it appears that he eventually, if somewhat grudgingly, accepted that Congress would continue to exist without his permission. His days consisted of inspecting the streets of San Francisco in an elaborate blue uniform with tarnished gold-plated epaulets, given to him by officers of the United States Army post at the Presidio of San Francisco, and wearing a beaver hat decorated with a peacock feather and a rosette. Frequently he enhanced this regal posture with a cane or umbrella. During his ministrations Norton would examine the condition of the sidewalks and cable cars, the state of repair of public property, and the appearance of police officers, and attend to the needs of his subjects as they arose. He would frequently give lengthy philosophical expositions on a variety of topics to anyone within earshot. It was during one of his "Imperial inspections" that Norton is reputed to have performed one of his most famous acts. During the 1860s and 1870s there were a number of anti-Chinese demonstrations in the poorer districts of San Francisco, and ugly and fatal riots broke out on several occasions. During one such incident, Norton is alleged to have positioned himself between the rioters and their Chinese targets, and with a bowed head began to recite the Lord's Prayer repeatedly. Shamed, the rioters dispersed without incident. The Emperor had two dogs, strays who apparently recognized a kindred spirit in the peculiar little man and immediately adopted him. Lazarus and Bummer became Norton's constant companions and followers, and most of the contemporary cartoons of the emperor showed him walking his dogs. Tragedy struck, however, when, in October 1863, Lazarus was run over and killed by a fire truck. A public funeral was held, and many prominent people turned up to console the Emperor. Bummer continued to beg for scraps at his master's feet until the 10th of November 1865, when he, too, shuffled off this mortal coil. Mark Twain wrote the epitaph for the noble canine, saying that he'd died "full of years, and honor, and disease, and fleas." As for the Emperor, he lived out his remaining years in his little room at 624 Commercial Street, continuing to oversee his domain during his daily walks. On January 8, 1880, tragedy once again struck San Francisco. On that sad day, Norton I, "Dei Gratia" Emperor of the United States and Protector of Mexico, was promoted to glory on California Street, on his way to a lecture at the Academy of Natural Sciences, two blocks away. The cause of death was apoplexy. In his pockets were found some telegrams, a coin purse, a two-and-a-half-dollar gold piece, three dollars in silver, an 1828 French franc, and a few of his own bonds. When reporters ransacked the Emperor's tiny apartment, they discovered that all he left behind in the world was his collection of walking sticks, his tasseled saber, news clippings, his correspondence with Queen Victoria and Lincoln, and 1,098,235 shares of stock in a worthless gold mine. The Morning Call ran the headline: "Norton the First, by the grace of God Emperor of the United States and Protector of Mexico, departed this life." On the 10th of January 1880 Emperor Norton was buried in the Masonic Cemetery. Wealthy citizens of San Francisco paid for the coffin and burial expenses.
The funeral cortege was two miles long, and an estimated 10,000 to 30,000 people turned out for the funeral. It is reported by some that his burial was marked by a total eclipse of the sun. On June 30, 1934, his grave was moved to Woodlawn Cemetery by the citizens of San Francisco. On January 7, 1980, San Francisco marked the 100th anniversary of the death of its only Emperor with lunch-hour ceremonies at Market and Montgomery streets. Sources: "Joshua A. Norton", Wikipedia; "San Francisco Kaleidoscope" by Samuel Dickson; "The Fantastic City" by Amelia Ransome Neville; and excerpts from The Emperor Norton website.

The first settlers of San Francisco's Tangrenbu, or "Port of the People of Tang", landed in the city as early as 1847. Since then, the area has gone on to become one of North America's oldest, largest and most historic Chinatowns. Located in downtown San Francisco, Old Chinatown was roughly six blocks long, from California to Broadway, and two blocks wide, from Kearny to Stockton. China at this time was undergoing a period of tremendous upheaval, and during the late 1840s and early 1850s, many Chinese eagerly seized the opportunity to seek their fortunes in California, which was generally referred to as "Gum San", or "Land of the Golden Mountain". By the end of 1851, there were an estimated 4,000 Chinese in San Francisco; by the following year their numbers had increased to 25,000. Early Chinatown was notable for its lack of women. Between 1848 and 1854, only 16 out of 45,000 Chinese immigrants were women. The main reason for this was that the Chinese were not immigrants in the classic sense: they did not come to San Francisco with the intention of settling permanently, but only to work and save enough money to return to China. Because respectable wives were expected to tend the home fires, and also because few men could afford the additional money to bring their wives with them, the great majority of San Francisco's Chinese female population in the nineteenth century were prostitutes. Some came expressly to ply their trade, while others were kidnapped, tricked into signing false marriage contracts, or lured by promises of rich husbands in the new country. Bought for $100 to $300 in China, slave girls (some as young as 6 or 7) were sold for $300 to $600 in the United States. The structure of this early Chinatown depended upon groupings by kinship, geographical region, and other self-defining institutions. Such arrangements allowed the Chinese to tend their own house, gave them comfort during the long exile from home, and helped keep the majority of the poor dependent on those "companies" who had early on assumed control. Gradually, the Six Companies became the governing body of Chinatown, with complete authority over all Chinese activities. The infamous "tongs" were originally merely associations of groups with common interests, but soon they were taken over by the criminal element in the community. Their sordid history revolved around gangsters, hatchet men and extortionists. During the seventies, when the Chinese made up between 70 and 80 percent of the work force, they were a constant source of controversy. As they established a reputation for industry and hard work, a rural anti-Chinese movement formed and quickly gained strength. In July 1877, crowds of mainly unemployed white laborers gathered in sandlot rallies throughout San Francisco.
White Protestant "manifest destiny" arrogance translated into a nativist attack on the Chinese, who not only worked harder and longer hours than many of their white counterparts but frequently commanded less pay. "The Chinese Must Go!" cried Dennis Kearney, the fiery orator of the Workingmen's Party, who was quoted by local newspapers as saying, "Judge Lynch is the only judge we want." Violence against the Chinese mounted throughout the 1870s and 1880s, with bands of angry young men sweeping through Chinatown, committing random murders and setting fire to Chinese businesses and makeshift dwellings. Whereas San Francisco's Tangrenbu was originally a refueling station for Chinese scattered about the region, it became more and more a segregated ghetto that kept the Chinese in one area, and whites out. Despite this segregation, by the turn of the century Chinese made up practically the entire labor force working the canneries and constituted a large part of the manpower in the laundries, the garment industry, cigar, match, boot, and broom factories, as well as the fishing and fish-packing industries. Although many San Franciscans continued their harassment, they observed the Chinese immigrants' qualities of loyalty, obedience, and tireless endeavor, as well as their capacity to persevere in the face of the most overwhelming obstacles. Also, the Chinese love of gambling and games proved equally appealing to their Caucasian neighbors. Chinatown gambling dens became major nineteenth-century tourist attractions. As the Western community gradually responded to Chinese ways, the Chinese slowly began to settle in, no longer viewing themselves entirely as sojourners but allowing for the possibility of permanence. Early on the morning of April 18, 1906, one of the most devastating earthquakes in American history rattled and shook the City by the Bay. What little remained of San Francisco's Old Chinatown was inevitably claimed by a fire which raged through the city for four days and nights. Before midnight of that terror-filled first day, 10,000 Chinese had fled the Quarter. On the second day, anything that had escaped the earlier flames was destroyed as the fire fanned back over the skeleton of Chinatown yet again. Two white Americans, Aitken and Hilton, wrote: "By the fourth day the Quarter was a blackened ruin. The bright lanterns, the little grated windows, the balconies that whispered of romance, the flaring dragons, were gone. Gone, too, the ill-smelling fish markets and cellar shops, the bazaars, the gambling dens, the places where opium was smoked in guarded secrecy. Everything that had made the little foreign section a tradition throughout the world had disappeared." Slowly, however, the Chinese drifted back to Dupont Gai and its smoking rubble. They stubbornly shrugged off the demands that they move to the periphery of the city. Fine, handsome buildings of Oriental design, many with pagoda-like roofs, were designed and built along what was coming to be called Grant Avenue. Apartments and hotels sprang up as the Chinese crowded back into the Quarter, and the population began to curve upward again until it would reach over 36,000 in 1960. For all its dark alleys, there is nothing very sinister about modern Chinatown. Only on foggy nights, when veils of sea mist obscure Spofford Alley and Waverly Place, does the Chinese Quarter assume something of an air of its former mystery and an evocation of its turbulent past.
Sources: "The Hatchet Men" by Richard H. Dillon, "The Chinese in San Francisco" by Laverne Mau Dicker, "San Francisco's Old Chinatown" text by John Kuo Wei Tchen, and “Old San Francisco, the Biography of a City, by Doris Muscatine. In 1872, Andrew S. Hallidie, a Scots-descended immigrant, managed the California Wire Rope and Cable Company on Market Street in San Francisco. For a long time he pondered the difficulties of conquering the many steep grades of this “City of Hills”. He was also concerned about the ill-treatment horses sometimes received from drivers using whips to urge them up the city’s steeper hills. He realized that if he could come up with a transportation system by using a cable traction system, he could move people, heavy goods and other prohibitably large loads up even the steepest of San Francisco’s hills. Hallidie’s basic invention, adapted from engineer Benjamin H. Brook’s cable plan, combined a grip that would function efficiently without damaging the traveling cable, with a slotted run adapted to the irregularities of the San Francisco terrain. Undeterred by ridicule and skepticism, on August 2, 1873 at 4:00 a.m., the first trial run of Hallidie’s “dummy” was down the Clay Street hill between Jones and Kearny Streets, a distance of 2,880 feet. Later the same day, the dummy with a car attached, made another round trip, this time with a large crowd in attendance. This new public transportation cost five cents a ride, and eventually it was able to reach any part of the city, opening whole new areas to development. In their heyday, as many as eight different cable car lines, extending 112 miles, sent cars up Telegraph, Russian and Nob hills, out to the Presidio, to Golden Gate Park, and even to the Cliff House at Lands End. In 1947, the cable car almost was phased out by authorities in the name of “progress”. The outcry from San Franciscans was such that after a long political struggle, ending in 1955, with only a few miles of track left, they were saved from oblivion. The cable cars received their official seal of approval in 1964 when they were declared a National Historic Landmark. Sources: “Historic San Francisco” by Rand Richards, Heritage House; “Old San Francisco, the Biography of a City, by Doris Muscatine, Putnam; “San Francisco Almanac,” by Gladys Hansen, Chronicle Books.
Population Growth and Migration

The objectives of this unit are to invite you to examine the causes and consequences of rapid population growth after 1700.

Miles, Steven B. Chinese Diasporas: A Social History of Global Migration. New Approaches to Asian History. Cambridge: Cambridge University Press, 2020. doi:10.1017/9781316841211. Chapters 1 ("Early Modern Patterns, 1500-1740") and 2 ("Migration in the Prosperous Age, 1740-1840"), pp. 20-89. If you like, you can also read Chapter 3 ("The Age of Mass Migration, 1840-1937"), pp. 90-135.

In order to give meaning to the past, historians often divide the historical record into different periods. A conventional method of periodizing history involves identifying significant dividing lines or "turning points": specific junctures in time which appear to mark a break from the past and the start of something new. Note that, by its very nature, this method of periodization emphasizes change and downplays the significance of continuities over time. In addition, historical turning points are typically identified with reference to major political events like revolutions, wars, elections, the death or assassination of political leaders, and the like. In the case of modern Chinese history, this would include events and dates such as 1644, a watershed year marking the end of the Ming dynasty, the Manchu conquest of China, and the establishment of the Qing, China's last imperial dynasty; and 1839, the year of the Opium War, when Britain forced China's opening to western trade and diplomacy. The list would naturally also include the Revolution of 1911, which overthrew the Qing dynasty and established the Republic of China, and 1949, the year of the Chinese Communist Party's victory over Nationalist Government forces and the establishment of the People's Republic of China. Likewise, the death of Mao Zedong in September 1976 also constituted a watershed in China's modern political history, as did his former rival Deng Xiaoping's consolidation of power at the Third Plenum of the Eleventh Central Committee of the Chinese Communist Party in December 1978, which marks the beginning of the present Reform period in China. Important as these political dates and events are for enabling us to comprehend and delineate the trajectory of China's modern political transformations over the last four hundred years, historians have recently begun to question their value for illuminating the broader social, economic and cultural processes that lay behind these political transformations. Indeed, by observing only great political events or the rise and fall of "great" leaders, we may easily overlook or obscure the equally important significance and effects of longer-term social, economic and cultural trends, such as changes in the class structure, shifts in the economic base, technological innovations, or the appearance of new cultural beliefs and practices.
Recent research has shown that such longer-term trends may be unaffected, or only marginally and temporarily affected, by the kinds of political events described above—yet their impact on society and on people's lives and livelihoods may be equal to or even greater than the impact of passing political events. A good example from our contemporary era might be the long-term cumulative effects of digitalization on society, economy and culture compared to, say, periodic elections. Historians of China have recently identified a number of such long-term trends which began in the 1500s and then proceeded to significantly transform the social, cultural and economic landscape of China over the next two to four centuries. These forces included commercialization and monetization (the increased use of money), the development of foreign trade, the spread of literacy, the gradual blurring of some conventional class and status distinctions, and rapid population growth. Such was the combined impact of these trends that social and cultural historians increasingly consider them as delineating a distinctive period in Chinese history, usually termed the "late imperial period" or sometimes the "early modern" period (a term derived from European historiography), which began around 1500 and lasted until the late nineteenth or early twentieth century. In this unit, we focus our attention on what is arguably one of the most critical long-term societal trends in Chinese history over the past six centuries, namely the growth in population. We first examine the basic dimensions of this historical trend and the methods that historians have employed to study it, and then consider some of the competing theories that have been advanced to explain population growth. Finally, we will address some of the critical consequences of rapid population growth for society, economy and politics in the late imperial period.

The Scale of Population Growth in Late Imperial China and Possible Explanations

The 1400s: The Beginning of Rapid and Sustained Population Growth

As the figures below indicate, China's population remained relatively stable at around 60-80 million for approximately eight centuries from 100 CE to 900 CE. Thereafter, population increased gradually over the course of the next five centuries, from around 80 million in 900 to 110 million in 1400. Beginning around 1400, however, and especially after 1700, China's population entered a period of rapid and sustained increase, to the extent that population doubled within the course of a century from 1700 to 1800.

Year    Estimated population
180     60 million
875     80 million
1200    110 million
1400    110 million
1700    150 million
1800    300 million
1900    450 million

The year 1400 thus marked the beginning of a fundamental increase in China's population over the next five centuries. It is important for us to recognize that the phenomenon of rapid population growth was by no means unique to China; indeed, a similar population explosion also occurred in Europe around this time. The major difference lies in the relative scale of the increase. Since China had a much bigger population than Europe to begin with, the size of the increase was correspondingly larger. Using 1800 as a benchmark, China's population was approximately 300 million, compared to only 11 million in England and 40 million in Russia. Moreover, while population figures for Europe around this time are likely fairly accurate, the figures for China are more likely to be an underestimate. This was due to the different methods and motives in record keeping.
In Europe, local church parishes kept meticulous records of births, deaths and marriages (even today these parish registers remain a critical source for social historians and demographers). China did not have any equivalent of the detailed parish registers kept in Europe. Moreover, the population figures that were collected in late imperial China were gathered by local officials for purposes of taxation and corvee (compulsory labour). Since many localities sought to conceal population increases in order to evade taxes and compulsory labour, it is safe to say that the Chinese population figures cited above likely fall short of the true figure. This is especially true for the period after 1700, when the efficiency of local government began a steep decline.

Global Warming, Malthusian Theory, and Prosperity versus Poverty as Potential Drivers of Population Growth

Historians do not yet fully understand the complex reasons behind the rapid population growth in China after 1400. Some explanations emphasize the role of universal factors affecting global changes in population, while others stress the significance of conditions and circumstances specific to China. The French historian Fernand Braudel posited the existence of a global warming trend that began around 1450 and which everywhere had the effect of lengthening the growing season and thereby increasing the available food supply. However, Braudel was unable to offer any scientific evidence to support this theory; he simply hypothesized that the existence of such a trend might explain why population began to grow rapidly in many different parts of the world after the mid-1400s. Another explanation that has been cited for China's rapid population growth relates to the theory of population developed by the influential British economist and philosopher Thomas Malthus (1766-1834). In his 1798 work, An Essay on the Principle of Population, Malthus proposed a general theory of population growth and its relationship to the economy. Malthus believed that population will naturally increase up to the limits of subsistence. Net increases in a given population over time were the result of either increased fertility (an expanding birth rate) or a decrease in mortality (a falling death rate), or a combination of both. In times of peace and plenty, Malthus argued, there will be increased food supply and better nutrition, leading to more births and fewer people dying prematurely from disease and disaster. Malthus also postulated that population grows geometrically (1>2>4>8>16), while food supply grows only arithmetically (1>2>3>4>5). Hence, he claimed, periodic crises of overpopulation are inevitable, because the rate of population growth always outstrips the growth in the food supply. The only guards against overpopulation, said Malthus, were periodic checks on population growth in the form of famine, war and natural disasters. Some have suggested that China's population growth after 1400, and especially in the two centuries from 1650 to 1850, appears to fit the Malthusian pattern. Following the Manchu conquest of 1644, China entered a prolonged period of peace, prosperity and stability that lasted approximately two centuries, until the outbreak of the massive Taiping Rebellion in 1850, which claimed more than 20 million lives in less than fifteen years.
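The arithmetic behind Malthus's proposition is easy to make concrete. The short Python sketch below is illustrative only and is not part of the assigned reading: it compounds a population at the roughly 0.7 percent annual rate implied by the doubling of China's population between 1700 and 1800, grows a hypothetical food supply by a fixed yearly increment, and reports when the first curve overtakes the second. The starting index values and the size of the increment are arbitrary assumptions chosen to exhibit the pattern, not historical estimates.

```python
import math

# Annual growth rate implied by a population that doubles in 100 years
# (China, 1700-1800: roughly 150 million -> 300 million).
r = math.log(2) / 100  # about 0.0069, i.e. ~0.7% per year
print(f"implied annual growth rate: {r:.2%}")

# Arbitrary illustrative starting values (index numbers, not millions).
population = 100.0
food_supply = 150.0
food_increment = 1.0   # fixed yearly increase: Malthus's "arithmetic" growth

for year in range(1, 301):
    population *= 1 + r            # geometric (compound) growth
    food_supply += food_increment  # arithmetic (linear) growth
    if population > food_supply:
        print(f"population overtakes food supply in year {year}")
        break
```

Even at well under one percent a year, compound growth eventually overtakes any fixed yearly increment; that inevitability is the heart of the Malthusian claim. Whether China's food supply actually behaved "arithmetically" is taken up below, where Dwight Perkins's findings suggest it did not.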
During these two centuries, there were no major wars, famines or other political or natural disasters within China "proper" (that is to say, excluding the frontier regions which were the focus of expansionist Qing military campaigns). At the same time, this period also saw several important medical advances, including the discovery of a smallpox inoculation in the 15th century (more than two hundred years before the first smallpox vaccine was developed in Europe). Prolonged peace combined with critical medical advances likely resulted in a reduced death rate. Likewise, political stability and economic prosperity may have contributed to an improved food supply as well as the economic means by which to sustain larger families. Some scholars have also speculated that the unprecedented prosperity of the late imperial period contributed to an increase in fertility by enabling more families to realize the Confucian cultural ideal of a large family with many sons. The above explanation regards economic prosperity as a key factor contributing to China's rapid population growth during the late imperial period. There is also an alternative explanation, however, which emphasizes the role of poverty. This argument is based on the premise that peasant families in the late imperial period wanted children for utilitarian reasons as well as for cultural and emotional ones. The two most important utilitarian reasons for peasant households to want to increase family size, especially sons, were labour power and insurance in old age. In late imperial China, without modern-style senior citizens' homes or state-run social welfare services for the elderly, and where cultural norms stressed the importance of filial piety and respect and care for elders, parents naturally regarded sons as the best guarantee for maintaining their security and well-being in old age (not daughters, as they were expected to marry out and leave their natal family). Moreover, since infant mortality was high, peasant households would often strive to have many children, on the assumption that only a few would reach adulthood and be in a position to care for their aging parents.

How Was Population Growth Sustained?

Whatever the exact reasons for China's rapid population growth after 1400—and historians may never be able to identify these with certainty—there is also the related question of how the increase in population was sustained over time. Did the food supply manage to keep pace with population growth? On the basis of the limited evidence available, historians such as Dwight Perkins have come to the conclusion that in the six centuries between 1400 and 1900, China's food supply did expand sufficiently to keep pace with population growth—defying the Malthusian proposition described above. The increase in food supply appears to have been accomplished by two means, with roughly half the increase attributable to an expansion in the amount of cultivated acreage, and the rest the result of improvements in productivity using traditional methods. Expansion of the amount of cultivated land was made possible by the opening up of new and previously marginal lands to cultivation. The Chinese empire doubled in size during the course of the eighteenth century, as a result of expansionist military campaigns in the far west. Colonization of previously unsettled land was also made possible by the appearance, from the early 1500s, of new food crops from the Americas, particularly maize, sweet potatoes and peanuts.
Not only did these new crops thrive in previously uncultivable areas like sandy soils and hillsides, but they were also sources of energy rich in carbohydrates, which quickly made them daily staples of the poor. The other source of increased food supply, enhanced productivity, was achieved without a fundamental change in traditional production technologies. Chinese farmers relied on the intensification of existing methods of cultivation to increase yields and sustain a growing population. For instance, it is estimated that the amount of irrigated land approximately tripled between 1400 and 1900. In addition, there was a significant increase in the use of fertilizer, especially "night soil" or human waste—which was itself made possible by the increase in population. Land productivity also benefitted from the diffusion of superior seed varieties, including new forms of disease-resistant and early-ripening rice. The imperial state also took active measures to encourage the spread of superior and early-ripening seed varieties across different regions of the empire.

Population Growth and Migration

What were the economic, social and political consequences of the rapid expansion of population? Not a few historians believe that many of the most important economic, social and political changes of the late imperial period were directly or indirectly related to the demographic explosion that occurred after 1400. The economy as a whole expanded greatly. It also grew more diversified and commercialized, as farmers began to specialize production to take advantage of emerging and growing markets. Urbanization accelerated dramatically, as great commercial cities grew up and prospered by serving as hubs for rapidly expanding regional and inter-regional trade. Many Chinese people, rural as well as urban dwellers, experienced unprecedented prosperity as population growth created new wealth-making opportunities. Population growth was a major factor contributing to the buoyancy of economic life in late imperial China. In this context, population growth also led to an unprecedented surge in migration. Late imperial China was a society on the move, in which huge numbers of people from all social classes left their villages for cities, frontier regions and overseas destinations in search of livelihood and opportunity. The period 1400-1900 witnessed a number of vast internal migrations from the long-settled plains and river valleys to previously unsettled highland areas (especially the Han river highlands, the hills above the Yangzi River, and the mountain ranges bordering Hunan and Jiangxi), as well as to recently depopulated areas such as Sichuan, whose population had been devastated by the wars accompanying the Ming-Qing transition, and later, at the end of the nineteenth century, to the lower Yangzi provinces whose populations had been decimated by the Taiping and Nian rebellions. Migrants also flocked to frontier regions in western and southwestern China, Manchuria (which the Qing rulers attempted, with limited success, to preserve as an exclusive Manchu homeland until the late 1800s), and the offshore island province of Taiwan. In some cases, the Qing government actively encouraged internal migration by offering migrants material incentives in the form of free land and seeds, land tax exemptions, etc. This was the case, for example, when the dynasty sought to encourage Han Chinese settlement in strategic border areas, such as the southwest (Yunnan), the west (Xinjiang) and areas along the Great Wall.
More often, however, migration in the late imperial period was the result of private initiative. The main destinations for voluntary migrants included cities, frontier regions and overseas. Chinese cities grew rapidly after 1400, becoming home to various kinds of short-term and long-term sojourners, among whom merchants came to occupy a particularly important role. Rural residents moved in increasing numbers to cities in search of wealth-making opportunities and basic livelihoods, as well as to sparsely populated upland and peripheral regions of the empire. The result was the emergence by the eighteenth century of distinctive "frontier societies" with their own unique social, economic, cultural and political characteristics. These newly formed frontier societies departed significantly from the usual patterns found in the long-established, agriculturally based peasant communities of the North China plain and of the river valleys and lowland areas of South China. The latter were normally characterized by a high degree of ethnic (Han) homogeneity, the prominence of the Confucian scholar-gentry class, and family-centred institutions governing social and economic life. By contrast, frontier societies tended to be made up of diverse, rootless and highly mobile populations. Their members were typically drawn from multiple localities, spoke different dialects and followed different cultural practices and values. In addition, frontier societies were often also distinguished by their highly skewed sex ratios, with a preponderance of young, single males in search of economic opportunities. Family and kinship, traditionally the central organizing principles of social, economic and cultural life in Chinese peasant communities, were relatively weak institutions in frontier societies. In the absence of the usual networks of family and kin, frontier societies were more often organized around solidarities based on dialect, native place, surname, fictive kinship and other forms of voluntary association. Not surprisingly, given their rootless and transient populations, frontier societies were also fertile recruitment grounds for secret societies, popular religious sects and other popular movements, which offered mutual help, fraternity and a sense of belonging to rootless individuals who lacked the usual support networks of family and kin. Owing to their recent settlement and diverse origins, frontier societies were also characterized by the absence of the traditional local elite or gentry class consisting of degree-holders, scholars, retired officials, and established landlord families—those who traditionally exercised social and political leadership in local society. In many cases, the presence of the imperial state was also weak or even non-existent, since frontier communities were often located outside the formal jurisdiction or physical reach of local magistrates. Frontier societies were often distinguished, in other words, by the absence of firmly constituted political or moral authority. They tended to be relatively lawless communities, prone to violent group conflicts over competition for resources. Often, such conflicts were between new immigrants and indigenous peoples (such as the conflict which broke out in the eighteenth century between Han settlers and Miao aborigines in southwestern China over control of land and natural resources) or between rival groups of settlers over control of water and other natural resources.
Maintenance of irrigation, for example, was a crucial component in the economic welfare of agricultural communities, and was dependent upon a high degree of mutual trust and cooperation among community members. In most parts of rural China, local irrigation facilities were organized by local officials and managed by local elites, who ensured their physical maintenance, regulated access to water among households, mediated disputes, and so on. In the case of irrigation disputes in frontier regions, however, there was often no legal or moral authority to which complainants could turn for redress, and therefore no alternative but violent confrontation. Finally, the economies of frontier societies were often based not on permanent cultivation, as was the case in China's lowland and plains regions, but on shifting cultivation and the extraction of natural resources (timber, minerals, furs, forest produce). Owing to the low soil productivity in many frontier regions, agriculture was often based on the slash-and-burn method, whereby the existing ground cover was removed and crops planted on a temporary basis until soil depletion forced relocation to a new area, where the pattern was repeated. As a result, environmental degradation in frontier societies was often severe, involving deforestation, soil erosion, and flooding. The third major destination for migrants in the late imperial period, after cities and frontier regions, was overseas. Permanent communities of Chinese migrants, composed mainly of traders and drawn mainly from the two southeastern coastal provinces of Guangdong and Fujian, had existed in peninsular and island southeast Asia since at least the 1500s. Overseas migration from southeastern China increased dramatically in the nineteenth century as the result of a combination of "push" and "pull" factors. On the "push" side, population pressure and political crisis were major factors. Southeastern China was among the most densely populated regions of the empire. The Taiping rebellion of 1850-64, which originated in neighboring Guangxi and eventually engulfed all of south China, devastated the economies and societies of much of the region. On the "pull" side, from the early 1800s European colonial powers embarked upon a deliberate and large-scale effort to recruit cheap Chinese labour for their rapidly expanding plantation economies and mining industries around the globe. The infamous "coolie" or "pig" trade fostered an unprecedented mass migration of Chinese peasant labourers from the villages of southeastern China to Southeast Asia, the Caribbean, South America and Africa. Large numbers of Chinese labourers were also recruited to construct the transcontinental railroads in Canada and the U.S., where they laboured under harsh conditions for menial wages. While historians frequently point to the positive economic effects of population growth in the late imperial period, the rise of mass migration, both domestic and external, suggests that, at least for some sectors of the population, the conditions of daily life were becoming more, not less, difficult. Indeed, it appears that at some point, probably by the late 1700s, the benefits of rapid population growth began to be outweighed by the mounting negative effects.
Evidence for this is anecdotal, but in the late 1700s Chinese scholars and officials began commenting on the emergence of what we might today describe as a "dog-eat-dog" society, in which people competed ever more ruthlessly with one another for increasingly scarce opportunities and resources. To be sure, the huge increase in population intensified the competition for land, resources and jobs. As the population-land ratio worsened, the average size of land-holdings began to shrink. And as arable land grew increasingly scarce, it also became increasingly valued as a commodity, so that those with means acquired more of it, while those without means slid ever more easily and quickly into tenancy, debt, and landlessness. As the number of civil service positions failed to keep pace with the increase in population, competition in the imperial civil service examinations intensified and the number of frustrated unemployed and underemployed examination graduates multiplied. Over the longer term, population pressure created a growing underclass of disadvantaged and dispossessed persons. It was precisely this increasing social dislocation and rising social tension that lay behind the rising incidence of banditry and rebellion from the late eighteenth century, as well as the proliferation of secret societies and popular religious sects.

Political Consequences of Population Growth

One of the most significant long-term political consequences of rapid population growth in the late imperial period was a decline in the imperial state's capacity to govern effectively. We noted above that while population continued to grow in the late imperial period, the number of civil service positions and the size of the imperial bureaucracy remained relatively stable. Why was this the case? Today we live in an era of big government, in which the size of bureaucracy is seemingly always expanding. The situation and attitude toward government in late imperial China was decidedly different, however. In the first place, Confucian statecraft advocated a principle of minimalist government. While the state upheld a theoretical claim to jurisdiction over all areas of life with a social component, it also, at the same time, enshrined a principle of light government, which held that rulers should not weigh heavily on the backs of the people. As the ancient Daoist philosopher Laozi is reputed to have said: "Govern a big country as you would cook a small fish—don't overcook it!" The ideal was a largely self-regulating society organized around family institutions and values, in which there was little need for the state to intervene, save for collecting taxes and maintaining law and order. A second reason for the continued small size of government relative to an ever-expanding population may have been related to more generic problems of control and communication in a premodern polity: that is, the bigger government got, the more unwieldy it became. In other words, the absence of modern technological means of control and communication may have imposed a natural limit on the size of government. Perhaps the most important reason for the small size of government, however, was that dynasties in general, and the Qing in particular, feared giving too much power to the bureaucracy, which they regarded as a potential threat. The Chinese dynastic political system was characterized by a permanent tension between the throne, on the one hand, and the bureaucracy, which governed the huge empire, on the other.
The bureaucracy could easily strive to evade, deflect or ignore the demands of the throne or the central state, putting its own interests first. Imperial fear and distrust of the bureaucracy was exacerbated in the case of the Qing dynasty because the Manchus were foreign rulers, while the bureaucracy was overwhelmingly Han Chinese. As alien rulers ever vigilant to internal challenges to their security and rule, the Manchus were even more determined to keep the Han Chinese bureaucracy as small as possible, the better to control it. Meanwhile, however, the empire's population continued to swell. As a consequence, the quality and efficiency of local government declined markedly over time. We can see this clearly reflected in the increasingly heavy administrative burden which fell on Qing county magistrates during the course of the dynasty. At the beginning of the Qing dynasty, in the mid-1600s, county magistrates were responsible for governing an average county population of 100,000-150,000 persons. By the late 1800s, the same county magistrates ruled over average county populations of 350,000-450,000. Two important consequences flowed from this trend. First, as noted, the quality and efficiency of local government deteriorated significantly over the course of the Qing period. Second, as the size of government failed to keep pace with population, more and more of the functions of local government—like tax collection and policing—were perforce turned over to non-governmental local elites, who expanded their power and influence in local society at the expense of the imperial government. Both of these developments were to have important implications for China's political and social trajectory in the twentieth century.
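A back-of-the-envelope calculation makes the scale of this administrative squeeze vivid. The snippet below is purely illustrative, simply taking the midpoints of the two population ranges quoted above:

```python
# Population governed per Qing county magistrate, using the midpoints
# of the ranges cited in the text (illustrative arithmetic only).
early_qing = (100_000 + 150_000) / 2   # mid-1600s
late_qing = (350_000 + 450_000) / 2    # late 1800s

print(f"mid-1600s: about {early_qing:,.0f} persons per magistrate")
print(f"late 1800s: about {late_qing:,.0f} persons per magistrate")
print(f"burden grew roughly {late_qing / early_qing:.1f}-fold")
```

On these rough figures the burden per magistrate more than tripled over the dynasty, with no corresponding growth in the bureaucracy beneath him, which is precisely why so many local functions drifted into the hands of non-governmental elites.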
Merger of Rajput States in the Indian Union

The dawn of political awakening in Rajasthan in the late nineteenth and early twentieth century was due to a variety of factors. In short, the main factors could be listed as follows: (1) agrarian grievances and peasant uprisings; (2) the role of the middle class and professional classes; (3) the influence of Arya Samaj activities in the Rajput states; (4) the influence of activities in neighboring provinces; (5) the role of the press; and (6) the spread of education. December 1927 was a landmark in the freedom movement of India, with the establishment of the All India States People's Conference (AISPC) with the aim of introducing constitutional reforms and responsible governments. Encouraged by the success of the conference, various Praja Mandals were established in the Rajput states in the 1930s with the purpose of terminating maladministration and feudal oppression in the states and, at the same time, stressing the need for responsible governments. The Haripura Declaration of the Indian National Congress, by which the party accorded recognition to the aspirations of the people of the Rajput States, set the stage for close co-operation between the Congress and the workers of the Praja Mandals, with the twin aims of independence from British rule and constitutional reforms in the Rajput States. The Praja Mandals created an atmosphere for the establishment and consolidation of democratic institutions. While the erstwhile rulers in the states tried to come to terms with the people's movements in their respective states, events moved at a fast pace at the national level, and the speed only increased with the end of the Second World War in 1945. With the decision of the British Government to transfer power to the Indian National Congress, India became independent on 15th August 1947. The major unresolved issue was the problem of the integration of the Indian States in the Indian Union. However, with the increasing efforts of Sardar Vallabh Bhai Patel and Home Secretary Shri V. P. Menon, the Indian States decided to merge in the Indian Union. The problem of the Rajput States persisted, and was resolved in various stages with the formation of the Matsya Union (18th March, 1948), United Rajasthan (25th March, 1948), the inclusion of Udaipur in the United Rajasthan (18th April, 1948), Greater Rajasthan (30th March, 1949) and the incorporation of the Matsya Union in Greater Rajasthan (15th May, 1949). Ajmer-Merwara, which was hitherto one of the Part C States, was merged into Rajasthan in 1956. The AISPC was convinced since its early inception that the Indian states had ceased to have a meaningful existence and were surviving only due to support from the British. Nehru in 1939 had clearly hinted that the past treaties between the British and the Indian rulers had ceased to exist. It was also argued that states should be recognized on the basis of population and annual income. This was also discernible in the British attitude during the visit of Cripps in 1942. He clearly realized that rulers and states, in their then existing form, mattered little. Around the same time the Chancellor of the Chamber of Princes, the Nawab of Bhopal, was trying to ensure, with the help of the small states, that the rulers of India emerge as the third force in Indian politics. This resulted in divisions in the Chamber of Princes. The end of the Second World War saw the AISPC strengthening its efforts to strike at the powers of the rulers.
In a meeting in Srinagar in August 1945 the AISPC proposed that: (1) mass movements should be encouraged in the states to establish responsible governments; and (2) small states (by the parameters decided earlier) should either merge with large states or unite among themselves and become part of the Indian Union. The Cabinet Mission in 1946 envisaged more powers for the rulers of the Indian States, a matter which was bitterly opposed by the AISPC. During the interim Government, the Political Department continued to function under the Viceroy, and this system favored the rulers against the aspirations of the masses. The Congress was opposed to it. The AISPC was increasingly of the view that, for future negotiations about the Indian States and the Indian Union, the administrations in the states should have at least 50% elected members. While the above events were taking place, the rulers of Rajasthan were playing their own games. In 1946 Maharana Bhupal Singh of Mewar advocated the formation of a Rajasthan Union of Rajput States, which would function as a sub-federation of the Indian Union. In 1947 the celebrated constitutional expert K.M. Munshi was also invited to Mewar to draft the constitution of the Rajasthan Union. It was proposed that the major Rajput States would initially form unions with smaller states. But the efforts came to naught as feelings of mistrust persisted between the bigger and smaller states. On the other hand, the Indian Government had proposed that only those states with an annual income of 1 crore and a population of 10 lakhs could maintain independent status. Jaipur, Jodhpur, Udaipur and Bikaner qualified for this. Initially it was also proposed that Kishangarh and Sirohi States be merged with Ajmer-Merwara, but the scheme fell through because of violent opposition.

Formation of Matsya Sangh

The partition of India was marked by communal frenzy on a large scale that engulfed the entire nation. Alwar and Bharatpur were not spared these riots, and in 1948 the Indian Government took over the administration of these states as the rulers failed to maintain peace. Neighboring these states were the smaller states of Dholpur and Karauli. On the advice of the Indian Government, the four states agreed to unite to form the Matsya Sangh, a name given to this area during the days of the Mahabharat. The Sangh came into existence on 18th March, 1948. The Maharaja of Dholpur was named Rajpramukh and the Maharaja of Karauli was named Deputy Rajpramukh. Shobharam Kumawat of the Alwar Praja Mandal was elected Prime Minister of the Sangh. The next step in the integration of Rajasthan started in the Hadoti region. Kota, Jhalawar and Dungarpur wanted to set up a union of smaller states beyond the Aravalli range. Initially it was also proposed to include Malwa and certain Central Indian states in this, but that proposal did not find general acceptance. Banswara and Pratapgarh also agreed to join the new formation. Kishangarh and Sirohi also wanted to join the United Rajasthan. Ultimately nine states, viz. Banswara, Dungarpur, Pratapgarh, Kota, Bundi, Jhalawar, Kishangarh, Shahpura and Tonk, combined to form the new union. The ruler of Kota was made the Rajpramukh, whereas the rulers of Bundi and Dungarpur were made Deputy Rajpramukhs.
But the ruler of Bundi was a stickler for protocol and a respecter of past practices, and he felt that Bundi should be accorded seniority over Kota. To resolve the issue he suggested that the Maharana of Udaipur be asked to join the new formation; by virtue of his seniority and status, the Maharana would automatically be made the Rajpramukh. But the Udaipur ruler insisted that the other states should merge into Mewar. While this deadlock persisted, the Mewar Praja Mandal under Manikya Lal Verma protested that the fate of 20 lakh people could not be left to the whims of a single ruler. The Praja Mandal leaders also felt that for the all-round progress and development of the people it was better if Udaipur merged into the United Rajasthan. The United Rajasthan came into existence on 25th March, 1948, and Gokul Lal Asawa became its first Prime Minister. Shortly afterwards it was announced that the Mewar Maharana was also not averse to joining the United Rajasthan. Two factors seem to have induced this change in the thinking of the Maharana. Firstly, the Mewar Praja Mandal was largely successful in convincing the masses that progress and development were only possible if Mewar joined the United Rajasthan; furthermore, the Mewar Maharana's viewpoint was increasingly seen as a step taking Mewar backwards. Secondly, the nobles of Mewar were also trying to convince the Maharana that if Mewar continued as an independent entity, then the Maharana would have to bow to the wishes and decisions of the Praja Mandal leaders, whereas in a United Rajasthan the influence of the Mewar Praja Mandal leaders would not be so powerful. The Mewar Maharana ultimately consented to join the United Rajasthan. As per the terms of the merger, it was decided that the new Union would be called the "United States of Rajasthan". The Udaipur Maharana was made the Rajpramukh, and the capital of the Union was Udaipur, though one session every year would be held in Kota. The new Union was inaugurated by Pandit Nehru on 18th April, 1948. Now only four states – Bikaner, Jaisalmer, Jaipur and Jodhpur – were outside the Union. The fate of these states depended on the amount of pressure the Praja Mandals in these states could exert on their respective rulers. To illustrate this point: if we look at Udaipur and Kota, where the Praja Mandal movements were very powerful, we find that the rulers were quick to agree to merge into the Union, whereas in Bikaner, where the Praja Mandal was comparatively weaker, the Maharaja held out in his desire to maintain his independence. In Jodhpur the situation was different: the Lok Parishad was very powerful, but the proximity of the Pakistan border and the desire of Maharaja Hanuwant Singh to merge with Pakistan made him hesitant. The Indian Government suggested that Jaisalmer, Bikaner and Jodhpur should combine to make one centrally administered area. Under such circumstances the demands of the Lok Parishad for responsible government, etc., became rather less important. But this scheme could not be implemented, as even Sardar Patel felt that public sentiments should be respected. The rulers of these states at the same time realized that they could not retain political power in their hands for long and would have to share it with elected representatives.
Under such circumstances it was less dishonorable to lose power to elected representatives within a larger union than within an individual state. When it was clear that the Rajput states were slowly realizing that the people's wishes could no longer be ignored in matters of governance, efforts were intensified for the creation of a Greater Rajasthan. The problems being faced by Manikya Lal Verma, the newly elected Prime Minister of United Rajasthan, were a clear indication that the feudal element in the Rajput states was not easily adaptable to changed fortunes. In May, 1948 the Madhya Bharat Union (Central India Union) was formed, and even big and powerful states like Indore and Gwalior agreed to join it. This led to a demand for the creation of 'Brahad Rajasthan' (Greater Rajasthan), which would include the major Rajput States. The Socialist Party took a step in this direction by establishing a 'Rajasthan Andolan Samiti' at the all-India level. The Samiti had the blessings of socialist leaders of the stature of Jai Prakash Narayan and Ram Manohar Lohia. The Diwan of Jaipur State opposed the formation of Greater Rajasthan, arguing that it would lead to the hegemony of Rajputs in Rajputana, which was not in the interests of the Indian nation. He advocated instead that the Rajput States be divided into separate units: (1) United Rajasthan to continue to exist as it was; (2) Jaipur, Alwar and Karauli to be merged into one unit; (3) Jodhpur, Bikaner and Jaisalmer to combine to form a Western union; and (4) Bharatpur and Dholpur to be merged into the neighboring states. Sh. V. P. Menon and the Bikaner Dewan Sh. C.S. Venkatacharya felt that such a proposal would not be appreciated by the masses, who were now dreaming of a larger Rajasthan. In December 1948, on the advice of Sardar Patel, V.P. Menon started negotiations with the rulers of Jodhpur, Bikaner and Jaipur on the formation of Greater Rajasthan. After initial hesitancy, the rulers agreed to the formation of a Greater Rajasthan. The Jaisalmer administration was already in the hands of the Indian Government. On 14th January, 1949 the consent of the rulers of Jodhpur, Jaipur and Bikaner to merge their states into Rajasthan was announced, and thus finally the dream of Maharana Pratap of a Greater Rajasthan came true. Some questions immediately arose: (1) Who would be the Rajpramukh of this new Union? (2) Where would the administrative capital be located? To find solutions to these questions V.P. Menon convened a meeting of Gokul Bhai Bhatt, Manikya Lal Verma, Jai Narain Vyas and Hira Lal Shastri – all prominent leaders with a mass base. It was proposed that Jaipur Maharaja Sawai Man Singh would be appointed Rajpramukh, while, looking to the special position the Udaipur royal family enjoyed due to its glorious past, the Maharana of Udaipur would be made Maharaj Pramukh. It was also decided that two or three I.C.S. officers be appointed as Advisors in the new set-up, and that in case of a conflict between the ministry and these Advisors, the Indian Government would intervene and mediate. It was further decided, upon the advice of an expert committee, that Jaipur would be the new administrative capital, and to placate the other major cities it was also decided that some major offices would be located in them. Thus Jodhpur got the High Court, the Education Department was given to Bikaner, Udaipur got the Mining Department, and the Agriculture Department was allotted to Bharatpur. The next issue was the problem of the proposed Prime Minister of Greater Rajasthan.
Amongst the claimants were Hira Lal Shastri, the Prime Minister of Jaipur and a proven administrator, and Jai Narayan Vyas, the undisputed leader of the Lok Parishad from Jodhpur. Manikya Lal Verma removed himself from the race by stating that henceforth he would not accept any government office. Vyas and Verma suggested the name of Gokul Bhai Bhatt for the post of Prime Minister. The Government was keen to install Hira Lal Shastri in this post, but the move was opposed by the rest of the leaders. Ultimately the rest of the leaders relented, and Hira Lal Shastri was accepted as the Prime Minister of Greater Rajasthan.

Even fate and nature appeared to conspire against the formation of Hira Lal Shastri's Government. Firstly, the Jaipur ruler was seriously injured in an air crash; secondly, when Sardar Patel came to Jaipur to inaugurate Greater Rajasthan, his plane crash-landed and he could not make it in time. To compound matters further, during the inauguration Jai Narayan Vyas and Manikya Lal Verma were not accorded proper courtesy, which annoyed not only them but their supporters as well. The consequence of all this was that Shastri was denied the co-operation of both Vyas and Verma in his cabinet formation. Some important ministers in the Council of Ministers were Siddhraj Dhadda (Jaipur), Prem Narain Mathur and Bhurelal Baya (Udaipur), Phool Chand Bafna, Nar Singh Kacchwaha and Rao Raja Hanuwant Singh (Jodhpur), Raghuvar Dayal (Bikaner) and Ved Pal Tyagi (Kota). The Hira Lal Shastri ministry did not last even two years. The establishment of Greater Rajasthan sounded the death-knell of feudalism in Rajasthan.

Merger of Matsya Sangh
With the formation of Greater Rajasthan, the independent existence of the Matsya Sangh, comprising the Alwar, Bharatpur, Dholpur and Karauli states, became untenable. In Alwar and Karauli public opinion was clearly in favor of merger with Greater Rajasthan, though the position in Bharatpur and Dholpur was not so clear. Sardar Patel deputed a committee under Dr. Shankar Rao Dev to ascertain public opinion in these two states, and the committee reported that the people there also favored merger. Thus the Indian Government agreed to the merger of the Matsya Union states into Greater Rajasthan on 15th May, 1949. The popular leader of the Matsya Sangh, Shri Shoba Ram, was inducted into the Council of Ministers.

Merger of Sirohi
It had been a long-standing demand of the state of Gujarat that Mount Abu in Sirohi State be made a part of it. Much against the wishes of the people, the States Department in November, 1947 agreed to transfer Sirohi from the jurisdiction of the Rajputana Agency and bring it under the control of the Gujarat Agency. In March, 1948 the Gujarat States Agency, inclusive of the Gujarati states, was sought to be transferred to Bombay State. To avoid the transfer of Sirohi to Bombay State, the people intensified their demand for the merger of Sirohi into United Rajasthan. On the question of Sirohi, Nehru and Sardar Patel differed radically: Nehru was of the opinion that the people were justified in demanding the inclusion of Sirohi into United Rajasthan, whereas Patel was of the view that Sirohi should go to Gujarat. In 1950, Patel handed over Mount Abu and a part of Sirohi to Gujarat. This move led to widespread agitation all over Sirohi under the leadership of Gokul Bhai Bhatt.
The injustice to Sirohi was rectified in November, 1956, when Mount Abu and parts of Sirohi were restored to Rajasthan.

Merger of Ajmer
Ajmer came under the category of Part C states – those small states, like Ajmer and Delhi, which after 1947 were independent entities under a Chief Commissioner appointed by the Central Government. Ajmer had an assembly even prior to 1951; from 1947 onwards the Chief Commissioner was assisted by an Advisory Council of seven members. Congress leaders like Hari Bhau Upadhyaya, Bal Krishna Kaul, and Pandit Mukul Behari Lal Bhargava were opposed to the merger of Ajmer into Rajasthan. In the election of 1952, Hari Bhau Upadhyaya was elected Chief Minister of Ajmer. Finally, in 1956, Ajmer was merged into Rajasthan.
Why talk about a Family Disaster Plan?
- Disaster can strike quickly and without warning. It can force you to evacuate your neighborhood or confine you to your home. What would you do if basic services, such as water, gas, electricity, or telephones, were cut off? Local officials and relief workers will be on the scene after a disaster, but they cannot reach everyone right away.
- Families can and do cope with disaster by preparing in advance and working together as a team. Knowing what to do is your best protection and your responsibility. Learn more about Family Disaster Plans by contacting your local emergency management office or your local American Red Cross chapter.
- A National Weather Service (NWS) WATCH is a message indicating that conditions favor the occurrence of a certain type of hazardous weather. For example, a severe thunderstorm watch means that a severe thunderstorm is expected in the next six hours or so within an area approximately 120 to 150 miles wide and 300 to 400 miles long (36,000 to 60,000 square miles). The NWS Storm Prediction Center issues such watches. Local NWS forecast offices issue other watches (flash flood, winter weather, etc.) 12 to 36 hours in advance of a possible hazardous-weather or flooding event. A local forecast office usually covers a state or a portion of a state.
- An NWS WARNING indicates that a hazardous event is occurring or is imminent in about 30 minutes to an hour. Local NWS forecast offices issue warnings on a county-by-county basis.

Four Steps to Safety
There are four basic steps to developing a family disaster plan:

1. Find out what could happen to you. By learning what your risks may be, you can prepare for the disasters most likely to occur in your area. Learn more by contacting your local emergency management office or Red Cross chapter. Be prepared to take notes. Ask the following:
What types of disasters are most likely to happen in your community? Identify which human-caused or technological disasters can affect your region, too. Remember to consider major chemical emergencies, which can occur anywhere chemical substances are stored, manufactured, or transported. How should you prepare for each?
Does your community have a public warning system? What do your community's warning signals sound like, and what should you do when you hear them?
What about animal care after a disaster? Pets (other than service animals) are not permitted in places where food is served, according to many local health department regulations. Plan where you would take pets if you had to go to a public shelter where they are not permitted.
If you care for elderly or disabled persons, how can you help them? What might be some special needs to consider?
What are the disaster plans at your workplace, your children's school or day care center, and other places where members of your family spend time? You should be prepared wherever you may be when disaster strikes, and learn steps you can take to prevent or avoid disasters.

2. Create a Family Disaster Plan. Once you know what is possible in your area, talk about how to prepare and how to respond if a disaster occurs. Make checklists of steps you can take as you discuss this with your family. Here is how to create your Family Disaster Plan:
Meet with your family and discuss why you need to prepare for disaster.
Explain the dangers of fire, severe weather, and earthquakes to children. Plan to share responsibilities and work together as a team. Keep the plan simple enough so people can remember the important details. A disaster is a stressful situation that can create confusion. The best emergency plans are those with very few details.
Discuss the types of disasters that are most likely to happen, and explain what to do in each case. Everyone should know what to do in case family members are not together. Discussing disasters ahead of time helps reduce fear and anxiety and will help everyone know how to respond.
Pick two places to meet: right outside of your home, in case of a sudden emergency like a fire; and outside of your neighborhood, in case you can't return home or are asked to leave your neighborhood. Everyone must know the address and phone number of the meeting locations.
Develop an emergency communication plan. In case family members are separated from one another during floods or other disasters, have a plan for getting back together. Separation is a real possibility during the day, when adults are at work and children are at school. Ask an out-of-town relative or friend to be your "family contact." Your contact should live outside of your area. After a disaster, it is often easier to make a long distance call than a local call. Family members should call the contact and tell him or her where they are. Everyone must know the contact's name, address, and phone number.
Discuss what to do if authorities ask you to evacuate. Make arrangements for a place to stay with a friend or relative who lives out of town, and learn about shelter locations.
Be familiar with escape routes. Depending on the type of disaster, it may be necessary to evacuate your home. Plan several escape routes in case certain roads are blocked or closed. Remember to follow the advice of local officials during evacuation situations. They will direct you to the safest route; some roads may be blocked or may put you in further danger.
Plan how to take care of your pets. Pets (other than service animals) are not permitted to be in places where food is served, according to local health department regulations. Plan where you would take your pets if you had to go to a public shelter where they are not permitted.

3. Complete your checklists. Take the steps outlined in the checklists you made when you created your Family Disaster Plan. Remember to include the following items on your checklists:
Post emergency telephone numbers (fire, police, ambulance, etc.) by phones. You may not have time in an emergency to look up numbers.
Teach all responsible family members how and when to turn off the water, gas, and electricity at the main switches or valves. Keep tools near gas and water shut-off valves. Turn off utilities only if you suspect a leak or damaged lines, or if you are instructed to do so by authorities. If you turn the gas off, you will need a professional to turn it back on.
Paint shut-off valves with white or fluorescent paint to increase visibility. Attach a shut-off valve wrench or other special tool in a conspicuous place close to the gas and water shut-off valves.
Check if you have adequate insurance coverage. Ask your insurance agent to review your current policies to ensure that they will cover your home and belongings adequately. Homeowner's insurance does not cover flood losses. If you are a renter, your landlord's insurance does not cover your personal property; it only protects the building. Renters' insurance pays if a renter's property is damaged or stolen, and it costs less than $15 a month in most areas of the country.
Contact your agent for more information.
Install smoke alarms on each level of your home, especially near bedrooms. Smoke alarms cut your chances of dying in a home fire nearly in half. Smoke alarms sense abnormal amounts of smoke or invisible combustion gases in the air. They can detect both smoldering and flaming fires. Many areas now require hard-wired smoke alarms in new homes.
Get training from the fire department on how to use your fire extinguisher (A-B-C type), and show family members where extinguishers are kept. Different extinguishers operate in different ways. Unless responsible family members know how to use your particular model, they may not be able to use it effectively. There is no time to read directions during an emergency. Only adults should handle and use extinguishers.
Conduct a home hazard hunt. During a disaster, ordinary objects in your home can cause injury or damage. Anything that can move, fall, break, or cause a fire is a home hazard. For example, during an earthquake or a tornado, a hot water heater or a bookshelf could turn over, or a picture hanging over a couch could fall and hurt someone. Look for electrical, chemical, and fire hazards. Contact your local fire department to learn about home fire hazards. Inspect your home at least once a year and fix potential hazards.
Stock emergency supplies and assemble a Disaster Supplies Kit. (See the "Disaster Supplies Kit" section.) Keep enough supplies in your home to meet your needs for at least three days. Assemble a Disaster Supplies Kit with items you may need in case of an evacuation. Store these supplies in sturdy, clearly labeled, easy-to-carry containers, such as backpacks or duffel bags.
Keep a smaller Disaster Supplies Kit in the trunk of your car. (See the "Disaster Supplies Kit" section.) If you become stranded or are not able to return home, having these items will help you be more comfortable.
Keep a portable, battery-operated radio or television and extra batteries. Maintaining a communications link with the outside is a step that can make the difference between life and death. Make sure that all family members know where the portable, battery-operated radio or television is, and always keep a supply of extra batteries.
Consider using a NOAA Weather Radio with a tone-alert feature. NOAA Weather Radio is the best means to receive warnings from the National Weather Service. The National Weather Service continuously broadcasts updated warnings and forecasts that can be received by NOAA Weather Radios, which are sold in many stores. NOAA Weather Radio now broadcasts warnings and information for all types of hazards, both natural (such as severe weather and flooding, as well as earthquakes and volcanic activity) and technological (such as chemical releases or oil spills). Working with other federal agencies and the Federal Communications Commission's new Emergency Alert System, NOAA Weather Radio is an "all hazards" radio network, making it a single source for the most comprehensive weather and emergency information available to the public. Your National Weather Service recommends purchasing a Weather Radio that has both a battery backup and a Specific Area Message Encoding (SAME) feature, which automatically alerts you when a watch or warning is issued for your county, giving you immediate information about a life-threatening situation. The average range is 40 miles, depending on topography. The NOAA Weather Radio signal is a line-of-sight signal, which does not bore through hills or mountains.
Take a Red Cross first aid and CPR class. Have your family learn basic safety measures, such as CPR and first aid. These are critical skills, and learning them can be a fun activity for older children.
Plan home escape routes.
Determine the best escape routes from your home in preparation for a fire or other emergency that would require you to leave the house quickly. Find two ways out of each room.
Find the safe places in your home for each type of disaster. Different disasters often require different types of safe places. While basements are appropriate for tornadoes, they could be deadly in a major chemical emergency.
Make two photocopies of vital documents and keep the originals in a safe deposit box. Keep one copy in a safe place in the house, and give the second copy to an out-of-town friend or relative. Vital documents such as birth and marriage certificates, tax records, credit card and financial records, and wills and trusts can be lost during disasters.
Make a complete inventory of your home, garage, and surrounding property. The inventory can be either written or videotaped. Include information such as serial numbers, make and model numbers, physical descriptions, and price of purchases (receipts, if possible). This list could help prove the value of what you owned if your possessions are damaged or destroyed, and can help you claim deductions on taxes. Be sure to include items such as sofas, chairs, tables, beds, chests, wall units, and any other furniture too heavy to move. Do this for all items in your home, on all levels. Then store a copy of the record somewhere away from home, such as in a safe deposit box.

4. Practice and maintain your plan. Practicing your plan will help you instinctively make the appropriate response during an actual emergency. You will need to review your plan periodically, and you may need to update it.
Quiz your kids every six months so they remember what to do, meeting places, phone numbers, and safety rules.
Conduct fire and emergency evacuation drills at least twice a year. Actually drive evacuation routes so each driver will know the way. Plan alternate routes in case the main evacuation route is blocked during an actual disaster. Mark your evacuation routes on a map; keep the map in your Disaster Supplies Kit. Remember to follow the advice of local officials during evacuation situations. They will direct you to the safest route, away from roads that may be blocked or may put you in further danger.
Replace stored food and water every six months. Replacing your food and water supplies will help ensure freshness.
Use the test button to test your smoke alarms once a month. The test feature tests all electronic functions and is safer than testing with a controlled fire (matches, lighters, or cigarettes). If necessary, replace batteries immediately. Make sure children know what your smoke alarm sounds like.
If you have battery-powered smoke alarms, replace batteries at least once a year. Some agencies recommend you replace batteries when the time changes from standard to daylight saving each spring and back again in the fall. "Change your clock, change your batteries" is a positive theme that has become a common phrase. While replacing batteries this often will not hurt, available data show that batteries will last at least a year, so more frequent replacement is not necessary; also, the time does not change in Arizona, Hawaii, the eastern portion of Indiana, Puerto Rico, American Samoa, and Guam.
Replace your smoke alarms every 10 years. Smoke alarms become less sensitive over time. Replacing them every 10 years is a joint recommendation of the National Fire Protection Association and the U.S. Consumer Product Safety Commission.
Look at your fire extinguisher to ensure it is properly charged. Fire extinguishers will not work properly if they are not adequately charged. Use the gauge or test button to check for proper pressure. Follow the manufacturer's instructions for replacing or recharging fire extinguishers.
If the extinguisher is low on pressure, damaged, or corroded, replace it or have it professionally serviced.

What to Tell Children
- Tell children that a disaster is something that happens that could hurt people, cause damage, or cut off utilities such as water, telephones, or electricity. Explain to them that nature sometimes provides "too much of a good thing" – fire, rain, wind, snow. Talk about typical effects that children can relate to, such as loss of electricity, water, and telephone service.
- Give examples of several disasters that could happen in your community. Help children recognize the warning signs for the disasters that could happen in your community. Discussing disaster ahead of time reduces fear and anxiety and lets everyone know how to respond.
- Teach children how and when to call for help. Check the telephone directory for local emergency telephone numbers. If you live in a 9-1-1 service area, teach children to call 9-1-1. At home, post emergency numbers by all phones and explain when to call each number. Even very young children can be taught how and when to call for emergency assistance. If a child can't read, make an emergency telephone number chart that may help the child identify the correct number to call.
- Explain that when people know what to do and practice in advance, everyone is better able to handle emergencies. That's why you need to create a Family Disaster Plan.
- Have older children take a first aid and CPR course. These are critical skills, and learning can be a fun activity.
- Tell children that in a disaster there are many people who can help them. Talk about ways that an emergency manager, Red Cross worker, police officer, firefighter, teacher, neighbor, doctor, or utility worker might help following a disaster.
- Teach children to call your family contact in case they are separated from the family in an emergency. Help them memorize the telephone number, or write it down on a card that they can keep with them.

Remember Your Pets
- Plan how to take care of your pets. If you must evacuate, it is best to take your pets with you. However, pets (other than service animals) are not permitted in public shelters, according to many local health regulations and because of other considerations.
- Contact hotels and motels outside of your immediate area to check their policies on accepting pets and restrictions on the number, size, and species. Ask if "no pet" policies could be waived in an emergency.
- Ask friends, relatives, or others outside of the affected area whether they could shelter your animals. If you have more than one pet, they may be more comfortable if kept together, but be prepared to house them separately.
- Prepare a list of boarding facilities and veterinarians who could shelter animals in an emergency; include 24-hour phone numbers. Ask shelters if they provide emergency shelter or foster care for pets in a disaster. Animal shelters may be overburdened, so this should be your last resort.
- Keep a list of "pet friendly" places, including their phone numbers, with other disaster information and supplies. If a disaster is impending, call ahead for reservations.
- Carry pets in a sturdy carrier. Animals may feel threatened by some disasters and become frightened or try to run.
- Have identification, a collar, a leash, and proof of vaccinations for all pets. Veterinarian records may be required by some locations before they will allow you to board your pets. If your pet is lost, identification will help officials return it to you.
- Assemble a portable pet disaster supplies kit. Keep food, water, and any special pet needs in an easy-to-carry container.
- Have a current photo of your pets in case they get lost.
- As a last resort, if you absolutely must leave your pets behind, prepare an emergency pen in the home that includes a three-day supply of dry food and a large container of fresh water.

Media and Community Education Ideas
- Meet with your neighbors to plan how the neighborhood could work together after a disaster until help arrives. Working with neighbors can save lives and property. If you're a member of a neighborhood organization, such as a homeowner's association or crime watch group, introduce disaster preparedness as a new activity. Check with your local fire department to find out if it offers Community Emergency Response Team (CERT) training.
- Know your neighbors' special skills (for example, medical or technical) and consider how you could help neighbors who have special needs, such as disabled and elderly persons.
- Identify elderly and disabled people in the neighborhood. Ask them how you can help if a disaster threatens (transportation, securing the home, getting medications, etc.).
- Make plans for child care in case parents can't get home.

If you're sure you have time, and local officials have not called for an immediate evacuation but there's a chance the weather may worsen or flooding may happen, take steps to protect your home and belongings:
- Evacuate immediately if told to do so. Authorities do not ask people to leave unless they truly feel lives may be in danger. Follow their instructions.
- Listen to local radio or television and follow the instructions of local emergency officials. Local officials will provide you with appropriate advice for your particular situation.
- Wear protective clothing and sturdy shoes. Disaster areas and debris contain many hazards. The most common injury following disasters is cut feet.
- Lock your home. Others may evacuate after you or before you return. Secure your house as you normally would when leaving for extended periods.
- Use travel routes specified by local authorities. Don't use shortcuts, because certain areas may be impassable or dangerous.
- If you have only moments before leaving, grab the following items:
- First aid kit, including prescription medications, dentures, extra eyeglasses, and hearing aid batteries.
- Disaster Supplies Kit basics and Evacuation Supplies Kit. (See the "Disaster Supplies Kit" section for detailed information.)
- A change of clothes and a sleeping bag or bedroll and pillow for each household member.
- Car keys and keys to the place you may be going (a friend's or relative's home).
- Bring all pets into the house and confine them to one room, if you can. If necessary, make arrangements for your pets. Pets may try to run if they feel threatened. Keeping them inside and in one room will allow you to find them quickly if you need to leave.
- Put your Disaster Supplies Kit basics and Evacuation Supplies Kit in your vehicle, or by the door if you may be leaving on foot. In some disaster situations, such as a tsunami, it is better to leave on foot.
- Notify your family contact where you are going and when you expect to get there. Relatives and friends will be concerned about your safety. Letting someone know your travel plans will help relieve the fear and anxiety of those who care.
- Bring things indoors. Lawn furniture, trash cans, children's toys, garden equipment, clotheslines, hanging plants, and any other objects that may be blown around or swept away should be brought indoors.
- Look for potential hazards. Look for coconuts, unripened fruit, and other objects in trees around your property that could break off and fly around in strong winds. Cut them off and store them indoors until the storm is over.
If you have not already cut away dead or dying branches or limbs from trees and shrubs, leave them alone. Local collection services will not have time before the storm to pick them up.
- Turn off electricity at the main fuse or breaker, and turn off water at the main valve. Unless local officials advise otherwise, leave natural gas on, because you will need it for heating and cooking when you return home. If you turn gas off, a licensed professional is required to turn it back on, and it may take weeks for a professional to respond.
- Turn off propane gas service. Propane tanks often become damaged or dislodged in disasters.
- If strong winds are expected, cover the outside of all the windows of your home. Use shutters that are rated to provide significant protection from windblown debris, or pre-fit plywood coverings over all windows.
- If flooding is expected, consider using sandbags to keep water away from your home. It takes two people about one hour to fill 100 sandbags, giving you a wall one foot high and 20 feet long. Make sure you have enough sand, burlap or plastic bags, shovels, strong helpers, and time to place them properly.

After a Disaster
- Remain calm and patient. Staying calm and rational will help you move safely and avoid delays or accidents caused by irrational behavior. Many people will be trying to accomplish the same things you are for their family's safety. Patience will help everyone get through a difficult situation.
- Put your plan into action. Having specific steps to take will keep you working toward your family's safety.
- Listen to local radio or television for news and instructions. Local authorities will provide the most appropriate advice for your particular situation.
- Check for injuries. Give first aid and get help for seriously injured people. Taking care of yourself first will allow you to help others safely until emergency responders arrive.
- Help your neighbors who may require special assistance – infants, elderly people, and people with disabilities – and the people who care for them or for large families who may need additional help in an emergency situation.
- Wear protective clothing and sturdy shoes. Disaster areas and debris contain many hazards. The most common injury following disasters is cut feet.
- Check for damage in your home. Disasters can cause extensive damage, sometimes in places you least expect. Look carefully for any potential hazards.
- Use battery-powered lanterns or flashlights when examining buildings. Battery-powered lighting is the safest and easiest to use, and it does not present a fire hazard for the user, occupants, or building.
- Avoid using candles. Candles can easily cause fires. They are quiet and easily forgotten. They can tip over during earthquake aftershocks or in a gust of wind. Candles invite fire play by children. Far more people have died in residential fires caused by using candles after a disaster than from the direct impact of the disaster itself.
- Look for fire hazards. There may be broken or leaking gas lines, flooded electrical circuits, or submerged furnaces or electrical appliances. Fire is the most frequent hazard following floods.
- Check for gas leaks. Sniff for gas leaks, starting at the water heater. If you smell gas or suspect a leak, open a window and get outside quickly. Turn off the gas at the outside main valve if you can, and call the gas company from a neighbor's home. If you turn off the gas for any reason, it must be turned back on by a professional.
- Look for electrical system damage. If you see sparks or broken or frayed wires, or if you smell burning insulation, turn off the electricity at the main fuse box or circuit breaker.
If you have to step in water to get to the fuse box or circuit breaker, call an electrician first for advice. Electrical equipment should be checked and dried before being returned to service.
- Check for sewage and water line damage. If you suspect sewage lines are damaged, avoid using the toilets and call a plumber. If water pipes are damaged, contact the water company and avoid using water from the tap. You can obtain safe water from undamaged water heaters or by melting ice cubes.
- Clean up spills immediately. This includes medicines, bleach, gasoline, and other flammable liquids.
- Watch for loose plaster and ceilings that could fall.
- Take pictures of the damage, both of the building and its contents, for insurance claims.
- Confine or secure your pets. They may be frightened and try to run.
- Let your family contact know you have returned home and then do not use the telephone again unless it is a life-threatening emergency. Telephone lines are frequently overwhelmed in disaster situations. They need to be clear for emergency calls to get through.
- Make sure you have an adequate water supply in case service is cut off. Water is often contaminated after major disasters. An undamaged water heater may be your best source of drinking water.
- Stay away from downed power lines and report them immediately. Getting damaged utilities turned off will prevent further injury or damage. If possible, set out a flare and stay on the scene to warn others until help arrives.

For People with Disabilities
Persons with disabilities, or those who may have mobility problems (such as elderly persons), should prepare as anyone else would. In addition, they may want to consider some of the following steps:
- Create a network of relatives, friends, or co-workers to assist in an emergency. If you think you may need assistance in a disaster, discuss your disability with relatives, friends, or co-workers and ask for their help. For example, if you need help moving or require special arrangements to receive emergency messages, make a plan with friends. Make sure they know where you keep your disaster supplies. Give a key to a neighbor or friend who may be able to assist you in a disaster.
- Maintain a list of important items and store it with your emergency supplies. Give a copy to another family member and a friend or neighbor. Important items might include:
- Special equipment and supplies, for example, hearing aid batteries.
- Current prescription names and dosages.
- Names, addresses, and telephone numbers of doctors and pharmacists.
- Detailed information about the specifications of your medication regimen.
- Contact your local emergency management office now. Many local emergency management offices maintain registers of people with disabilities and special needs so they can be located and assisted quickly in a disaster.
- Wear medical alert tags or bracelets to identify your disability in case of an emergency. These may save your life if you are in need of medical attention and unable to communicate.
- Know the location and availability of more than one facility if you are dependent on a dialysis machine or other life-sustaining equipment or treatment. There may be several people requiring the same equipment, or facilities may have been affected by the disaster.
If you have a severe speech, language, or hearing disability:
- When you dial 9-1-1, tap the space bar to indicate a TDD call.
- Store a writing pad and pencils to communicate with others.
- Keep a flashlight handy to signal your whereabouts to other people and for illumination to aid in communication.
- Remind friends that you cannot completely hear warnings or emergency instructions.
Ask them to be your source of emergency information as it comes over the radio. Another option is to use a NOAA Weather Radio with a tone-alert feature connected to lights. When a watch or warning is issued for your area, the light would alert you to potential danger.
- If you have a hearing ear dog, be aware that the dog may become confused or disoriented in an emergency.
- If you have a hearing ear dog, store extra food, water, and supplies for your dog. Trained hearing ear dogs will be allowed to stay in emergency shelters with their owners. Check with local emergency management officials for more information.
If you are blind or visually impaired:
Keep extra canes well placed around the home and office, even if you use a guide dog. If you have a guide dog, be aware that the dog may become confused or disoriented in an emergency. If you have a guide dog, store extra food, water, and supplies for your dog. Trained guide dogs will be allowed to stay in emergency shelters with their owners. Check with local emergency management officials for more information.
If you use a wheelchair, show friends how to operate your wheelchair so they can move you if necessary. Make sure friends know the size of your wheelchair in case it has to be transported, and where to get a battery if needed.
Listen to the advice of local officials. People with disabilities have the same choices as other community residents about whether to evacuate their homes and where to go when an emergency threatens. Decide whether it is better to leave the area, stay with a friend, or go to a public shelter. Each of these decisions requires planning and preparation.
From: Talking About Disaster: Guide for Standard Messages, produced by the National Disaster Education Coalition, Washington, D.C., 1999.
- the nature of selfhood and the theology of the Resurrection: the inseparable personhood of soul that is embodied and of body that is ensouled
- the indestructible unity of body and soul ⇒ the link between the tree-souls of Inferno 13 and Dante’s treatment of the mythological figure of Meleager in Purgatorio 25
- Dante’s non-dualism, his insistence on the unsunderable unity of body and soul, leads to consideration of the following Dantean dialectic: on the one hand he posits the perverse non-dualism of the materialist Epicureans (for whom the soul dies with the body) and on the other he posits the perverse dualism of the suicides, who pit the self against the self (Inf. 13.72)
- metamorphosis — shape-changing, especially in classical mythology — is for Dante a key modality for thinking about selfhood (see Inferno 24, Inferno 25, and Purgatorio 25)
- intertextual games: the diminishment of the text of the Aeneid, analogous to the incremental undermining of the fictional character Virgilio

Following violence against others in their persons and in their possessions, treated in canto 12, Inferno 13 treats violence against the self. Violence against the self can be manifested either in one’s person, through committing suicide, or in one’s possessions, through the squandering of personal goods. Many have noted that this last category, of squanderers, is only with difficulty distinguished from that of the prodigals of the fourth circle. This structural overlap suggests that Dante could have omitted the prodigals in circle 4, knowing that he would arrive at the wastrels in circle 7; in this way he could have sustained for longer the synchrony between the architecture of his Hell and the Christian scheme of the seven capital vices. The fact that, despite the prospect of the squanderers, Dante nonetheless ruptures the system of the seven capital vices in circle 4 can be construed as an indicator of his commitment to the Aristotelian concept — dramatized in circle 4 and Inferno 7 — of virtue as the mean.

The travelers enter a murky wood (“bosco” in verse 2), a place that is characterized by negativity, by what it is not (note the repeated “non” at the beginning of each verse):

Non fronda verde, ma di color fosco;
non rami schietti, ma nodosi e ’nvolti;
non pomi v’eran, ma stecchi con tòsco.
(Inf. 13.4-6)

No green leaves in that forest, only black; no branches straight and smooth, but knotted, gnarled; no fruits were there, but briers bearing poison.

It turns out that the trees and bushes in this wood are the transformed souls of suicides. These souls have thus been transformed into something other than what they were — indeed, into what they were not:
- They were humans; they are now plants;
- They were forms of intellective life; now they are forms of vegetative life.
But this transformation turns out not to be proof that the original unity of body and soul has been successfully violated. In fact, we will learn that the substance of these beings has never changed, despite all attempts to undo it. Ultimately, as we shall see, Dante’s point in Inferno 13 is as follows: the unity of body and soul is indestructible. Selfhood cannot be undone. The second ring of the seventh circle houses the souls of humans who are characterized by negative identity. A negative metamorphosis has transformed them from humans — beings in whose development the intellective faculty has superseded both the sensitive and the vegetative faculties, as described in Purgatorio 25.52-75 — into plants: vegetable life alone.
However, they are plants that speak, in another monstrous hybrid that makes no sense, since speech and language are properties of the intellective faculty, as the same passage in Purgatorio 25 informs us. In sum: having willfully sundered the body-soul nexus, they are now what they should not be. In appearance they are now plants. But, as Dante will show us in dramatic fashion, in substance they are still human. For the reality, a terrible reality for these souls, is that selfhood cannot be undone.

Mythological monster-birds, foul birds with the faces of women that torment Aeneas and his men in the Aeneid, the Harpies are a monstrous union of human and animal, like the Centaurs, whereas the suicide-trees combine human and vegetable. In Inferno 13, the Harpies feed on the leaves of the suicide-trees and thereby cause pain to the sinners. As Pier della Vigna will reveal later on: “l’Arpie, pascendo poi de le sue foglie, / fanno dolore, e al dolor fenestra” (then the Harpies, feeding on its leaves, / cause pain and for that pain provide a vent [Inf. 13.101-2]).

What does it mean to say that the Harpies, by eating the leaves of these “trees”, can cause pain to the souls of the persons whose immaterial forms are now constituted by these vegetative forms? What is the nature of this causation? Dante here uses mythological or magical transubstantiation to try to get at the meaning of the body-soul nexus. This is a method that he will adopt on other occasions as well. For instance, in Purgatorio 25 the catalyst for the lengthy discussion of human embryology and embodiment is the mythological figure Meleager, destined to die when a particular piece of charred wood is thrown onto the flames and consumed (Purg. 25.22-23). Meleager’s mother, learning of her son’s destiny, carefully preserves the firebrand. Years later, when she is enraged at her son, she retrieves the piece of charred wood, throws it into the flames, and kills him.

In Purgatorio 25 Dante is essentially posing the question: What is the connection between Meleager and the piece of wood that in some way “represents” him? The piece of wood that represents Meleager, like the trees that represent the suicides, is not really separate from the soul, as it seems to be: these are not mere representations. In these passages Dante is using classical mythology to make the key Christian point about the indivisibility of body and soul. It may seem that the body is an outward husk that can be discarded: a tree, a bush, a piece of wood. But in the same way that Meleager is the charred wood — in the same way that he dies when the charred wood is consumed in the flames — so his body is never really separate from his soul. Meleager’s being — his essence, his selfhood — is composed of an indivisible unity of body and soul.

In the encounter with Pier della Vigna Dante first raises the questions later posed by the story of Meleager. The link between Inferno 13 and Purgatorio 25 is signaled by the word “stizzo” (firebrand), which appears in the Commedia only in Inferno 13.40 and Purgatorio 25.23, only for these two instances of apparent vegetative life that is really human life. Telling us that the Harpies cause pain to the self that is embodied in the tree is Dante’s way of signifying that the unity of body and soul is indestructible. These souls thought to avoid pain in life by destroying their bodies, but their “bodies” still feel pain, even though they are no longer in human form.
Over the previous canti Dante has given us information about the body-soul nexus, with respect to eternity, the resurrection of the flesh and the reunion of flesh with soul. We might distill the information thus:
- In Inferno 6.106-11, Virgilio explains to Dante that souls will be more perfect after the Last Judgment, when they are reunited with their bodies. Although Virgilio only cites Aristotle (“perfezion” in Inf. 6.110), Dante-narrator here effectively introduces the thematic of the theology of the Resurrection. This passage provides Dante’s baseline belief, against the backdrop of which he depicts the following variant — perverse — configurations.
- In Inferno 9, Virgilio’s story of the sorceress Erichtho, who previously compelled him to make the journey to lower Hell, includes two references to the body. According to Virgilio he was “congiurato da quella Eritón cruda / che richiamava l’ombre a’ corpi sui” (compelled by that savage Erichtho / who called the shades back to their bodies [Inf. 9.23-4]). Erichtho’s power, in Virgilio’s telling, is demiurgic: she can reconnect shades to their bodies, in a perverse resurrection that anticipates Dante’s invention of “zombies” in Inferno 33. Virgilio indicates that Erichtho summoned him soon after he died, when he had only recently been denuded of his flesh: “Di poco era di me la carne nuda” (My flesh had not been long stripped off from me [Inf. 9.25]). Here Virgilio uses the personal pronoun of identity (“me”) to refer to his self as a self even when denuded of his flesh, thus implying that self and identity can be present even when the fleshly body is absent: the wasted body in a tomb is still a self. We could take this remark as a further reminder that the self will eventually be fully reconstituted, not by sorcery, but by the divine power that at the Last Judgment reunites the soul with its fleshly body.
- The Epicurean heresy, as synthesized by Dante in Inferno 10.15, “che l’anima col corpo morta fanno”, posits the opposite view: the belief that soul dies with body signifies that without the body there is no self. This view can be seen as a perverse non-dualism.
- In Inferno 13, Dante will confirm the absolute indivisibility of body and soul, showing us that even when the body has been transformed into a tree-body it is still tied to its soul: the original body-soul nexus may have been altered in appearance, but the bond is not severed.
The unity of body and soul cannot be severed, neither in malo nor in bono. An in malo reprise of this theme will recur in Inferno 25, where the souls are given the bodies of serpents, and yet remain themselves. Suicide as Dante treats it must be considered within the context of the theology of the Resurrection. The theology of the Resurrection claims the inseparable personhood of soul that is embodied and of body that is ensouled. Neither can be divided from the other: together, for all eternity, they compose self. Dante will celebrate and elaborate this idea throughout Paradiso. An in bono reprise of this theme will occur in Paradiso 14, where the presentation of the doctrine of the Resurrection causes the souls to clamor for the day when they will see their beloveds once again embodied. After the Last Judgment, their loved ones will be present in Paradise as fully embodied selves, no longer only as pure flame. Because they passionately look forward to loving their beloveds more fully, as embodied selves, the saved souls in the circle of the sun demonstrate their “disio d’i corpi morti” (Par.
14.63) — their desire for their dead bodies:

Tanto mi parver sùbiti e accorti
e l’uno e l’altro coro a dicer «Amme!»,
che ben mostrar disio d’i corpi morti:
forse non pur per lor, ma per le mamme,
per li padri e per li altri che fuor cari
anzi che fosser sempiterne fiamme.
(Par. 14.61-66)

One and the other choir seemed to me so quick and keen to say “Amen” that they showed clearly how they longed for their dead bodies — not only for themselves, perhaps, but for their mothers, fathers, and for others dear to them before they were eternal flames.

I write about the above passage in The Undivine Comedy: “The rhyme of mamme with fiamme, the flesh with the spirit, is one of Dante’s most poignant envisionings of a paradise where earthly ties are not renounced but enhanced” (The Undivine Comedy, p. 138).

Dante’s treatment of embodiment-ensoulment revolves around the question of unsunderability / indivisibility: unity of body and soul. We can now see that his interest in the Epicureans is related to this same set of concerns. Dante defines Epicureanism as the materialist belief that (the immaterial) soul dies when (the material) body dies: “l’anima col corpo morta fanno” (they make soul die with the body [Inf. 10.15]). For Dante, the Epicureans’ belief therefore constitutes a perverse form of indivisibility: rather than holding that the body will live eternally because the soul is eternal, as in the doctrine of the Resurrection, the Epicureans, as Dante defines them, hold that the soul must die because the body dies. In the materialist view of the Epicureans, body and soul both die when the material body dies. Christianity, on the other hand, holds that body and soul, unified, will live forever. Indeed, as we learned from Inferno 6.106-11, the reunited body-soul nexus is more perfect after the Last Judgment than the still divided body-soul nexus prior to the Last Judgment. When reunited and perfected, we will suffer greater pain if in Hell and enjoy greater bliss if in Paradise.

The suicides thus offer a variant on the body-soul problematic that Dante presented in his treatment of the Epicureans in Inferno 10. As the Epicureans consider the soul or anima expendable, so the suicides consider the body or corpo expendable. Both positions are wrong. By separating body from soul, the suicides do violence to the unsunderable unity of self. Their “punishment” is, as usual in Dante’s Inferno, a visualization of the sinners’ own essential choices: as they chose to separate body from soul, they are now forcibly separated from their bodies for all eternity. Technically, the suicides will not get their bodies back at the Last Judgment, because, with inexorable logic, “it is not just for any man to have what he himself has cast aside”: “non è giusto aver ciò ch’om si toglie” (Inf. 13.105). But — and this too is inexorable logic — although the suicides do not get their bodies back in the way that the other souls do, they will get them back. In other words, the body-soul nexus is governed by an even deeper logic than the logic that holds that it is “not just for any man to have what he himself has cast aside” (Inf. 13.105). Their bodies are rejoined to them, in the horrible form of corpses hanging from a tree-gallows. The inexorable logic of the indivisibility of body and soul cannot be thwarted. Since the suicides treated their bodies as an external husk to be discarded, their bodies will remain — like eternal husks — forever hanging from their tree-selves: reunited but not reunited.
We should note that the insistence throughout Paradiso on Christ’s dual nature, both human and divine, is analogous to the insistence on the indivisibility of body and soul in humans. From this point of view, the passing reference in the circle of heresy to Pope Anastasius, the heretic mentioned at the beginning of Inferno 11 (verses 7-9), is important. Pope Anastasius II (Pope from 496-498 CE) subscribed to Monophysitism, the heretical belief that Christ had only one nature. Dante’s singling out of Pope Anastasius and the heresy of Monophysitism prepares us for his treatment of suicide, his emphatic insistence on the indivisibility of man’s two natures.

* * *

Inferno 13 is dominated by the encounter with the suicide Pier della Vigna, chancellor and secretary (“logothete” in the imperial jargon) to Emperor Frederic II in Palermo. The powerful jurist has been turned into a tree in Hell. Here Dante borrows Vergil’s metamorphosis of Polydorus from Book 3 of the Aeneid, where the son of Priam has become a bleeding and speaking tree. Metamorphosis — the shape-changing that is a staple of classical mythology — is used as a lens for focusing on issues of selfhood, identity, and embodiment throughout the Commedia, right through the Paradiso. The great Latin poet of metamorphosis, Ovid, is a major intertextual presence in Paradiso, the canticle where the poet confronts transubstantiation most directly. For the reader with a particular interest in Ovid, let me note that Ovidian intertextuality throughout the Commedia can be explored on Digital Dante through Intertextual Dante. On metamorphosis in Inferno, see the Commento on Inferno 24 and Inferno 25.

The first section of Inferno 13 is important for the intertextual dynamic between the Aeneid and the Commedia. The fact that a man has become a tree is termed “unbelievable” — “cosa incredibile” (unbelievable thing) — in Inferno 13.50. It is therefore something that cannot be accepted on the basis of a prior account, no matter how authoritative, but which, if it is to be believed, must be verified through one’s own actions and experience. Hence, because the account in Vergil’s Aeneid is deemed literally “in-credible”, Virgilio instructs Dante to break the branch in order to verify that the tree is truly a man. But the question arises: if Dante cannot believe Vergil’s text, why should we believe Dante’s text? Why is Pier della Vigna less in-credible than his prototype, Polydorus? Such questions involve the basic poetic strategies of the Commedia. As analyzed in Dante’s Poets (cited in Coordinated Readings), we can learn from this passage how Dante systematically diminishes the authority of his great precursor’s text in order to garner increased authority for his own text. Dante-poet works to enhance the reality-quotient of the Commedia by diminishing the reality-quotient of the Aeneid.

Piero’s tragic story is the story of a courtier, and of the envies and intrigues of life at court. In Dante’s version, Piero was envied for his closeness to the Emperor. Although we do not know the precise cause of Piero’s dramatic fall from grace, his imprisonment is a matter of historical record: he was tortured, apparently blinded, and died in prison in 1249. In his account of the death of Pier della Vigna, Dante demonstrates his interest in the dynamics of life in a court.
The unfolding narrative of the encounter with Piero in Inferno 13 conveys both the lack of trust that permeates court life — hence the emphasis on what is believable and what is not — and simultaneously keeps the focus on embodiment, violated by the act of suicide. The issue of the provenance of the voices that Dante hears in the wood, which are eventually revealed to be the voices of the trees, is a case in point. Rather than give us this information directly, Dante tells us that he now believes that Virgilio then believed that he (Dante-pilgrim) then believed that the voices came from people hiding behind the trees (verses 25-7).

Let us parse verse 25, “Cred’ ïo ch’ei credette ch’io credesse” (I think that he was thinking that I thought), more closely. Dante-poet believes (in the narrator’s present tense: “Cred’ ïo”) that Virgilio believed (in the past tense of the events that took place in the wood of the suicides: “ch’ei credette”) that Dante-pilgrim believed (again in the past tense of the events that took place in the wood of the suicides: “ch’io credesse”) that the voices he was hearing in the wood came from people who were hiding behind the trees: “che tante voci uscisser, tra quei bronchi, / da gente che per noi si nascondesse” (so many voices moaned among those trunks / from people who were hiding from us [Inf. 13.26-27]).

The verse “Cred’ ïo ch’ei credette ch’io credesse” (Inf. 13.25) is indeed emblematic of the various strands of this canto. It renders the opacity of subjectivity (discussed at length in the Commento on Inferno 9), whereby none of us can ever be entirely sure of what another is thinking. In this way verse 25 also beautifully conjures the invidious and perilous environment of the court. Courts, whether Papal or secular, whether the imperial court of Frederic II or the Tudor court of Henry VIII, are notorious environments that foster intrigue — fatal intrigue that leads to death. Dante renders the feeling of the whispering voices of courtiers, as they invidiously relate the rumors of what so-and-so believes of such-and-such.

In terms of the plot of Inferno, the poet is informing us that Virgilio thought that the pilgrim failed to understand that the voices came from the trees themselves. Of course, the pilgrim does not expect voices to emanate from trees, because he does not consider the trees as selves. Indeed, when the pilgrim first heard the sound of wailing, he naturally looked around for persons as the source: “Io sentia d’ogne parte trarre guai / e non vedea persona che ’l facesse” (From every side I heard the sound of cries, / but I could not see the person who made the sounds [Inf. 13.22-3]). Because the pilgrim fails to understand that the trees are speaking selves (and was unable to take the notion on faith, despite having read Book 3 of the Aeneid), Virgilio decides to have him harm Piero, breaking the sinner’s branch and causing “the trunk” to scream in pain: “e ’l tronco suo gridò: «Perché mi schiante?»” (at which its trunk cried out: “Why do you tear me?” [Inf. 13.33]). The wailing tree trunk is a typically graphic and Dantean way to make the point: the tree is a self. Piero tried to destroy his self, through suicide, but he failed. In a perverse conservation of being, his self persists: de-formed, but nonetheless not non-existent. We see here how masterfully Dante has woven Inferno 13’s fundamental questions of selfhood and embodiment into the story-line.
These are questions that run through the Commedia, resurfacing in Inferno 24 and Inferno 25, where souls change into serpents. The metamorphoses of men into serpents and back again in the seventh bolgia are anticipated in Piero’s accusatory lament. Piero says that the pilgrim would have been more merciful toward him if, instead of trees, they were “the souls of serpents” (39). The following tercet features the transition from man (“Uomini fummo”) to plant (“e or siam fatti sterpi”), and captures the sinister hybridity of this canto in the phrase “anime di serpi”:

Uomini fummo, e or siam fatti sterpi:
ben dovrebb’ esser la tua man più pia,
se state fossimo anime di serpi.
(Inf. 13.37-39)

We once were men and now are arid stumps: your hand might well have shown us greater mercy had we been nothing more than souls of serpents.

Piero recounts that envy inflamed the hearts of the courtiers against him: “infiammò contra me li animi tutti” (inflamed the minds of everyone against me [Inf. 13.67]). Dante then modulates the phrase “contra me” of verse 67, depicting the invidious violence of the courtiers toward Pier della Vigna, into the phrase “me contra me” of verse 72, depicting the perverse violence of Piero toward himself. At the core of Piero’s story is the phrase “me contra me”: me against myself (Inf. 13.72). In this phrase Dante distills the idea that even worse than what the envious courtiers did to him, is what Pier della Vigna did to himself:

L’animo mio, per disdegnoso gusto,
credendo col morir fuggir disdegno,
ingiusto fece me contra me giusto.
(Inf. 13.70-72)

My mind, because of its disdainful temper, believing it could flee disdain through death, made me unjust against my own just self.

The soul, “l’animo mio” of verse 70, in its desire to flee an ignominious death, “made me unjust toward my own just self”: “ingiusto fece me contra me giusto” (Inf. 13.72). The verse “ingiusto fece me contra me giusto” pits “unjust me” against “just me”. Piero’s “disdegnoso gusto” (disdainful temper) causes him to be “unjust” toward his own “just” self. The very syntax, knotty and gnarled like the wood of the suicides (“non rami schietti, ma nodosi e ’nvolti”), reflects the perverse logic — the “disdegnoso gusto” — that so distorts the eternal reality of an indivisible self. Verses 70-72 posit the violent and unnatural turning of the self against the self — “me contra me” — in an attempted dualism rejected by the theology of the Resurrection. This is the distillation of the infernal logic that is visualized in a contrapasso that keeps the body and soul both forever sundered and forever together. After the Last Judgment the wood of the suicides will become much more gruesome. From each tree-self will hang the body that the self rejected and tried to destroy, a body that can never be severed but that will never again be fully integrated. Hence their corpses will hang from their trees:

Qui le strascineremo, e per la mesta
selva saranno i nostri corpi appesi,
ciascuno al prun de l’ombra sua molesta.
(Inf. 13.106-08)

We’ll drag our bodies here; they’ll hang in this sad wood, each on the stump of its vindictive shade.

In Inferno 10 we find a canto structure in which the dramatic tension reaches a peak and then subsides, with the result that in verse 79 and beyond the pilgrim’s interaction with Farinata becomes more informational and less barbed. Similarly, in Inferno 13, verse 79 initiates a more didactic and informative section of the canto.
Now Piero explains the process whereby the suicide-trees grow from a soul-seed: the “anima feroce” (savage soul [Inf. 13.94]) that has torn itself from its body, “dal corpo” (from the body [Inf. 13.95]), falls into the “selva” (97) of the seventh circle, where “it sprouts like a grain of spelt”: “quivi germoglia come gran di spelta” (Inf. 13.99). In another connection to Purgatorio 25, Dante here offers a perverse insemination and a perverse embryology: this is the infernal counterpart of the embryology presented in Purgatorio 25. An even more devastating infernal embryology will occur in the canto of the serpents, Inferno 25.

In the last section of Inferno 13 Dante sees wastrels (those who are violent against their selves in their possessions) being pursued by black hell-hounds; in his Decameron, Boccaccio makes humorous, indeed parodic, use of this caccia infernale (infernal hunt) in the novella of Nastagio degli Onesti (Dec. 5.8). As though they were chasing wild boar, the dogs hunt down the sinners and then tear them limb from limb. Their violent rampage also causes damage to the suicides, who have trunks, branches, and leaves that can be torn off. Hence the complaint of the anonymous suicide at the canto’s end, who refers to “lo strazio disonesto / c’ha le mie fronde sì da me disgiunte” (the dishonorable laceration that leaves so many of my branches torn [Inf. 13.140-41]). In a wistful recapitulation of the canto’s theme, this soul asks that his “fronde” be gathered together, unified, and placed at the foot of his tree: “raccoglietele al piè del tristo cesto” (collect them at the foot of this sad thorn [Inf. 13.142]).

However much damage is done to the vegetable-but-nonetheless-human life of the suicides’ forest by the rampaging hounds and fleeing wastrels, only the wastrels have human bodies into which the infernal hounds can sink their teeth. Dante thus gives himself the opportunity to dramatize not only disjoined tree fronde but also the lacerated “members” of a human body: “quel dilaceraro a brano a brano; / poi sen portar quelle membra dolenti” (piece by piece, those dogs dismembered him / and carried off his miserable limbs [Inf. 13.128-29]).

All through Inferno 13 runs the lexicon of body and soul: “anime di serpi” (39), “parole e sangue” (44), “l’anima” (88), “tai membra” (90), “anima feroce” (94), “corpo” (95), “i nostri corpi” (107), “ombra sua” (108), “quelle membra dolenti” (129), “rotture sanguinenti” (132), “sangue” (138). The word “ombra”, used in Inferno 13.108 as a synonym for soul (“anima”), will be used in Purgatorio 25 to designate the virtual body-soul unities that we become after we die and before we become “substantial” unities again at the Last Judgment.

The Florentine suicide who reprimands the wastrel Giacomo di Sant’Andrea for having trampled and lacerated him (in his bush form) concludes Inferno 13 with a characterization of Florence that implicates the city in the negativity of the canto (see verses 143-50). He seals the canto with the information that he killed himself by making a gallows of his Florentine home: “Io fei gibetto a me de le mie case” (I made — of my own house — my gallows place [Inf. 13.151]). The gallows erected in his own home by the anonymous suicide takes us mentally back to the ghoulish image of the suicides’ bodies hanging from their tree-“homes” after the Last Judgment (verses 106-8).
The last verse begins with the first-person pronoun “Io”, which is followed by the first-person pronoun “me” and then echoed by the first-person possessive adjective “mie”: “Io fei gibetto a me de le mie case” (151). The language thus emphasizes the issue of selfhood that is the true subject of Inferno 13. And it reminds us of the verse that sums up the problematic of suicide as an attempted, but impossible-to-achieve, dualism: “ingiusto fece me contra me giusto” (Inf. 13.72).
Source: https://digitaldante.columbia.edu/dante/divine-comedy/inferno/inferno-13/
Fleece definition, per the dictionary: the coat of wool that covers a sheep or a similar animal. The fabric that borrows the name is another matter, and it comes in many varieties. French terry fleece, for one, is not 100% polyester like most other fleece types; it is typically made of a rayon and polyester or polyester and spandex blend, and these added fibers give it an element of texture. Unlike other types of fleece, it is not fluffy and instead looks woven. In terms of thickness, it sits somewhere between a t-shirt and a sweatshirt.

Fleece may generally be made of plastic (aka petroleum), but that doesn't mean that all fleece is an environmental faux pas, especially when it comes to things like fair trade sweaters. It does have real downsides, though: it is a fire risk, and some fleece may contain BPAs. Coral fleece, sometimes called raschel fleece, is another high pile fleece that is not quite as puffy as sherpa fleece but not as tight knit as french terry fleece. For now, join us as we suss out one of the key players in a sustainable winter wardrobe, and uncover what fleece is and how sustainable it is (or isn't).

The word "fleece" itself covers several things: the woollen coat of a domestic sheep or long-haired goat, especially after being shorn; polar fleece, a type of polyester fabric; the fleece jacket, a lightweight casual jacket; and horticultural fleece, a polypropylene fabric used to protect plants. The synthetic fabrics are made of the same PET as regular polar fleece.

First things first, what material is fleece exactly? Many people who think of fleece are quick to assume that it's made of sheep's wool (and it is meant to mimic that, well, fleecey feel), but most fleece is synthetic. One stretch variety is majorly composed of cotton with a little bit of Lycra added in order to create a different dimension in the fabric. Many times, fleece is also coated in a chemical material to keep it water resistant and windproof. Each time a fleece garment is washed, it can release up to 1,900 bits of plastic into water ecosystems, which makes it all the more important to temper our fleece consumption and be extra mindful about a garment's end-of-life outcome.

Despite its durability, fleece is also prone to pilling, meaning those annoying little balls on the fabric's surface will eventually turn up. Finally, a note on berber fleece: great for high-performance sports wear, it helps to wick moisture away and is commonly found in coat liners, vests, socks, hats, and other wintertime apparel.
Speaking of blankets, fleece is actually just a blanket term for the many different types of similar fabric, all of which are made with slightly different types of fibers and used in different ways. The winter months always present a unique challenge when it comes to staying cozy, and in some areas of the world we're starting to cozy up for winter, which may have us wondering about one thing: what is fleece? Fleece fabric comes in a range of different varieties, so let us explain the differences between the various options.

Most fleece starts as the exact same stuff as most single-use plastic bottles. Fleece does not shrink (or very little anyway) and is great at retaining heat. When it comes to most products made with fleece, the threads start with polyester fibers before sometimes having natural fibers added in (rayon, hemp, wool). Other versions of eco friendly fleece include those french terry fleece blends we mentioned above, made from bamboo or organically-grown soy or cotton. In the modern era, though, the basic cozy fabric is 100% synthetic and generally created using petroleum-derived polyester—polyethylene terephthalate (PET), to be exact.

Until a few years ago, no one was really asking about sustainable fashion, and now we're seeing eco-friendly brands and labels pop up left and right, with much of the improvement happening at the material level. Fleece has been the go-to option for cozy comfort and, fortunately for us, many brands are now using recycled plastics and organic natural fibers to make their fleece gear.

A few last words before you zip up. Some "anti-pill" fleece products have been designed with special spun yarn fabrics that aren't prone to pilling. Surprisingly, the fact that fleece is made from polyester (plastic) can actually be a good thing for momma earth, provided it's sourced responsibly. Fleece has a pile surface on both sides of the fabric, meaning each side has a layer of cut fibres. And both found in plaid shirts, fleece and flannel are relatively similar; more on that comparison below.
As a verb, to fleece someone is to exploit them by charging too much for something; the sheep-shearing metaphor survives in everyday usage ("she claims he fleeced her out of thousands of pounds", or "you can see straight away that this is one of those places where you get fleeced").

As a fabric, fleece is actually a polyester that is made by reacting two different petroleum derivatives at very high temperatures to form a polymer: a soft, warm fabric with a texture similar to sheep's wool, often used as a lining material. What about windproof fleece? Well, it's just polar fleece that's been backed or treated with another material or substance that makes it wind resistant. Fleece, then, is a synthetic insulating fabric made from a type of polyester called polyethylene terephthalate (PET) or other synthetic fibres. Air pockets can sit between the threads in its pile surface, meaning the material can hold in that bit more warmth. Synthetic fleeces have also been associated with microplastics in our waterways. Note that coral fleece and polar fleece are both polyester synthetic fabrics (coral fleece looks similar to polar fleece, but is fuzzier), and both can melt when too much heat is put to the fabric.

The world is becoming increasingly aware of areas we can improve in to help save our planet, and there's a lot of potential to turn the fashion industry into a sustainable one. Let's touch on each type (literally and proverbially). Wool is a better insulator than fleece, but is generally more expensive, and it is more prone to shrinking. If you're thinking super puffy and fluffy, you're thinking sherpa fleece, or blizzard fleece. Recently, eco friendly fleece brands have actually been making fleece from recycled plastic bottles! Fleece may be durable when worn during your favorite winter adventure, but without proper care it can be damaged easily. So, when it comes to fleece, always go with an ethical and environmentally friendly brand, and remember that sustainable fleece is designed to last and to keep you warm and cosy for many winters.
This supplier of fleece fabric has been increasing the amount of recycled material in their fleece for years and recently committed to 100% recycled material across all types of fleece they manufacture, so be on the lookout for any brand that uses Polartec fleece.

A few more varieties: Lycra-spandex fleece is a make of fleece that is a little more stretchy. French terry fleece is a lightweight fleece; it's also absorbent. Picture the texture of yarn with the softness of cotton and you've got fleece. Do not iron your fleece jumper (not that you'd even need to, as fleece is highly wrinkle resistant!), and if a clothes dryer presents a risk to fleece, imagine what a clothing iron will do. Fleece is commonly found in outdoor clothes and jackets for cold-weather adventures. Specifically, you'll find fleece in winter sports jackets (often as a liner for those with a more weather resistant exterior), sweaters, midlayer pullovers, thermal base layers, sweatpants, pajamas, winter accessories (like scarves, gloves, and hats) and snuggly blankets. From no-sew fleece blankets to easy-to-sew pajama pants, fleece is the perfect fabric for your next cozy project. Compared with fleece, flannel is better at wicking moisture and, as it's made of natural materials (cotton), it's better for someone who suffers from allergic reactions. And if you've ever seen fleece that appears to be nubby, you're likely looking at berber fleece (no, it has nothing to do with Berbers from North Africa).

Now the million fiber question: is fleece eco friendly and sustainable? Fortunately, most eco fleece companies, like Patagonia and LL Bean, claim their fleece is BPA free, but it's always worth a check if you're buying from an unknown brand. New sustainable alternatives manage to mimic the French terry feel using a number of different blends of organic soy, cotton, and bamboo. Patagonia has since been joined by a number of other outdoor companies, such as LL Bean, prAna, tentree, and United By Blue (which also all happen to be some of our favorite sustainable menswear makers); today, all of Patagonia's fleece is recycled, from their sweater-mimicking Better Sweater fleece line to their ultra classic and colorful Synchilla Snap-T pullovers. Fleece and wool are very similar, both providing that soft and fuzzy feel, and for a start fleece is one of the most popular (and comfortable) fabrics out there: winter would be a real bummer without it. You can still stay warm this winter—without wrecking the planet. If you do buy new, just be sure to buy a piece that you'll wear for decades to come. Period. And if you missed our recent coverage of modal and lyocell, be sure to read up on these innovative fibers next.
Fleece is a popular choice for custom screen printed garments, especially during colder months, and there are many different varieties of fleece to choose from. Sherpa fleece mimics wool closely enough that Patagonia has named all of their sherpa fleece products "woolyester fleece". Among the advantages of fleece fabric: coral fleece is very soft and generally ends up in more expensive fleece jackets, shirts, blankets, and baby items, while anti-pill is a particular treatment method for fleece fabrics that resists the pulling and fraying of regular use and lends extra durability to well-woven fleece. Finally, in the clothing sense, a "fleece" is simply a sleeved garment for the upper body, made of lightweight but warm material.
A few loose ends on weight, health and sourcing. Fleece weight is measured in grams per square meter (gsm); the heaviest weights come in at 300gsm or more, with mid-weight fleece below that. Fleece is a light, strong pile fabric meant to mimic, and in some ways surpass, wool; it is fast-drying, which makes it perfect for sportswear and winter clothes, and it does an excellent job at keeping wearers warm (sometimes too warm). Because it is made from plastic, fleece could also contain BPA, which has been linked to all sorts of reproductive and other health disorders. Microplastic shedding in the wash can be avoided by using a microplastic catching wash bag like the Guppy Friend; we use ours with every wash and it's super easy. The popular outdoor clothing brand Patagonia began making fleece from recycled plastic bottles wayyyyy back in 1993 (before eco-friendly was even a thing): eco fleece reduces landfill and decreases virgin petroleum mining, and many have since started bundling up with sustainable fleece. So, have we managed to turn something so unsustainable into a more sustainable product, if there is such a thing? Only partly: fleece is traditionally not sustainable, and recycled or not, polyester and other synthetic fibers still don't biodegrade.
Source: http://www.menahrs.com/q2w1vqty/9172ad-fleece-fabric-meaning
Open Access

Hydrodynamic study of freely swimming shark fish propulsion for marine vehicles using 2D particle image velocimetry

Robotics and Biomimetics, volume 3, Article number: 3 (2016)

Abstract

Two-dimensional velocity fields around a freely swimming freshwater black shark fish in the longitudinal (XZ) plane and the transverse (YZ) plane are measured using digital particle image velocimetry (DPIV). Fishes generate thrust by transferring momentum to the fluid. Thrust is generated not only by the caudal fin, but also by the pectoral and anal fins, whose contribution depends on the fish's morphology and swimming movements. These fins also act as roll and pitch stabilizers for the swimming fish. In this paper, the flow induced by the fins of a freely swimming undulatory carangiform fish (freshwater black shark, L = 26 cm) is studied by an experimental hydrodynamic approach based on a quantitative flow visualization technique. We used 2D PIV to visualize the water flow pattern in the wake of the caudal, pectoral and anal fins of the fish swimming at a speed of 0.5–1.5 body lengths per second. The kinematic analysis and pressure distribution of the carangiform fish are presented here. The fish body and fin undulations create circular flow patterns (vortices) that travel along with the body waves and change the flow around the tail to increase the swimming efficiency. The wake of the different fins of the swimming fish consists of two counter-rotating vortices about the mean path of fish motion. These wakes resemble a reverse von Karman vortex street, which is a thrust-producing wake. The velocity vectors around a C-start maneuvering fish (a straight swimming fish bent into a C-shape) are also discussed in this paper. Studying the flows around flapping fins will contribute to the design of bioinspired propulsors for marine vehicles.

Background

Aquatic animal propulsors are classified into lift-based modes (e.g., penguins, turtle forelimb propulsion and aerial birds), undulation (e.g., fishes, eels), drag-based modes (e.g., duck paddling) and jet modes (e.g., jellyfish, squids). Fishes use a combination of lift-based and undulating modes, mainly using the undulating body, pectoral and caudal fins, to achieve propulsive forces. Fishes also generate thrust by using the tail fin, the paired fins and the body. Certain combinations of flapping motions and body angles achieve greater speed and better maneuvering capabilities. Flapping foil propulsion systems, resembling the fish fin propulsion mode, are found to be much more efficient than conventional screw propellers [1, 2]. The application of fish propulsion to watercraft promises higher propulsive efficiency, better maneuvering capabilities, less vibration, lower emissions and greater eco-friendliness. Biological aquatic animal locomotion, its mechanisms and their successful application to marine vehicles are being studied by different researchers. Muller used 2D PIV to visualize the flow around aquatic animals and to demonstrate the creation of vorticity and its contribution to thrust generation. Muller et al. studied the water velocity near the fish body using PIV and described the wake mechanism behind it. Drucker and Lauder studied the 3D wake structures of the bluegill sunfish pectoral fins using PIV. Sakakibara et al. used stereoscopic PIV, together with particle tracking velocimetry, to capture three components of the velocity distribution around a live goldfish in order to determine spatial velocity, acceleration and vorticity.
Past researchers [7–18] carried out experiments on the hydrodynamics of fish locomotion and maneuvering using PIV systems. In the present study, a shark fish belonging to the sub-carangiform group is kept in a glass tank (Fig. 1) and the water particle kinematics around its tail and fins are observed using a two-dimensional PIV system while the fish swims forward. In the sub-carangiform mode of locomotion, the body muscle of the last one-third aft length is used for generating thrust in addition to the caudal fin, whereas in thunniform fishes the caudal peduncle and tail fin are responsible for thrust production. Sub-carangiform fishes can move the caudal fin at a higher amplitude than thunniform fishes, resulting in better thrust generation; that is the reason for choosing this form of fish for the present study. It moves forward by flapping its caudal fin and undulating its body.

The pressure distribution around the body and the caudal fin is shown in Fig. 2. There are positive and negative pressure regions along the body, and the fluctuations of these pressure distributions result in a propulsive force, pushing the fish forward. The shape of the caudal fin reduces the amount of displaced water during the oscillation of the tail fin, thereby reducing turbulence and frictional drag on the body without loss of propulsive power.

The velocity diagram of the sub-carangiform fish caudal fin is shown in Fig. 3. In sub-carangiform swimming fish, the thrust is developed by the rear part of the body and the tail fin. The thrust generated by the tail fin is given by Eq. (1), where ρ represents the density of the fluid in kg/m³, V_R is the resultant velocity in m/s, S is the surface area of the fin in m², α is the angle of attack in rad and dC_L/dα is the slope of the lift curve for the caudal fin. The caudal fin moves normal to the free-stream velocity V_o with a transverse (sway) velocity equal to V_N; it is thus possible for the caudal fin to attain a thrust component which provides a forward propelling force. The rotational component (yaw) is not considered in this case: the equation describes the simplest case of pure translational motion normal to a free stream V_o.

In the present study, flow visualization experiments are carried out to visualize the flow pattern around the caudal, pectoral, anal and dorsal fins of a freely swimming fish using a two-dimensional (2D) particle image velocimetry (PIV) system. A freshwater black shark (Labeo chrysophekadion) with a body length of 26 cm is used for the present experimental study. The fish is placed inside a glass tank of size L × B × D = 75 cm × 29 cm × 37 cm, with the water level at 28 cm, and it is allowed to swim freely in the tank. The fish swims across the tank length, and the PIV measurement is taken during the steady phase of its movement, which is observed to be in the middle one-third portion of the tank. The laser is pulsed continuously, and the fish crosses the laser plane multiple times at the same time interval; a range of images is then selected for processing velocity fields. From visual observations based on the recorded video, the Strouhal number of the freely swimming shark fish used in this experiment is approximately 0.23, where the tail fin oscillation frequency is 0.6 Hz, the amplitude is 0.1 m and the fish swimming speed is 0.26 m/s.
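These figures can be checked directly. With tail-beat frequency f = 0.6 Hz, amplitude A = 0.1 m and swimming speed U = 0.26 m/s, the definition of the Strouhal number gives

$$St = \frac{fA}{U} = \frac{0.6 \times 0.1}{0.26} \approx 0.23.$$

The displayed form of Eq. (1) can likewise be sketched from the symbols defined above. A quasi-steady, lift-based estimate consistent with those symbols (offered here as a plausible reconstruction, not necessarily the paper's exact formula) is

$$T = \frac{1}{2}\,\rho\,V_R^{2}\,S\,\frac{dC_L}{d\alpha}\,\alpha\,\sin\alpha,$$

where the quasi-steady lift $\frac{1}{2}\rho V_R^{2} S\,(dC_L/d\alpha)\,\alpha$ acts perpendicular to the resultant velocity $V_R = \sqrt{V_o^{2} + V_N^{2}}$ and the factor $\sin\alpha$ projects it onto the swimming direction.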
Two-dimensional PIV is used here because it is a non-intrusive experimental technique which can measure the whole flow field, and hence the instantaneous velocity vector fields around the fish, with high spatial and temporal resolution at any instant. The PIV technique involves the introduction of tiny particles called 'seeder particles' into the fluid path. The size and density of the seeder particles are chosen such that they follow the flow path faithfully at all operating conditions; hollow glass spheres with a mean diameter of 10 µm are used as the tracer particles here. The seeding particles in the plane of interest are illuminated by a laser sheet of appropriate thickness (0.5–2.5 mm). Two images (an image pair) of the illuminated flow field are obtained within a separation time Δt by means of a high-resolution camera, and the displacement of the tracer particles during the time interval Δt gives the velocity of the fluid. The experiments are performed at three different time intervals, Δt = 300, 620 and 900 ms: if Δt is less than 300 ms, no swirl of velocity vectors is observed, so Δt is gradually increased from 300 to 900 ms while the velocity fields around the fish body are observed. The Reynolds number (Re) of the swimming fish is in the range of 10⁵, and at this Reynolds number the measured 2D velocity fields are not significantly affected.

The PIV setup used in the present study is shown in Fig. 4. The system consists of (1) a double-pulsed Nd:YAG (neodymium-doped yttrium aluminum garnet) laser with 200 mJ/pulse energy at 532 nm wavelength, (2) a charge-coupled device (CCD) camera with 2048 × 2048 pixels and an image capturing speed of 14 frames per second (fps), (3) a set of laser and camera controllers and (4) a data acquisition system. The laser sheet is aligned with the longitudinal vertical (XZ) and transverse (YZ) planes. The camera is positioned in front of the test section at 90° to the laser sheet (see Fig. 4). The size of the seeding particles is very important in obtaining proper images: the particles should scatter enough light, while too large particles may not follow the flow path. Measurement of the velocity field using PIV is based on the ability of the system to accurately record and measure the positions of small tracers suspended in the flow as a function of time.

The PIV measurement scheme is shown in Fig. 5. In this scheme, the images are divided into a number of small sections called interrogation windows or regions. The corresponding interrogation regions in frames 1 and 2 are correlated using the cross-correlation method. The maximum of the correlation corresponds to the displacement of the particles in the interrogation window; the displacement gives the vector length and direction in each interrogation zone. Small interrogation windows give more vectors but contain fewer particles. The main advantage of the cross-correlation approach is that the displacement can be obtained without directional ambiguity.

In the experimental setup, care should be taken to make the laser sheet, camera axis and test object lie in the same plane. The laser sheet should be aligned perfectly vertical to the calibration plate (Fig. 6). In PIV calibration, the images obtained are focused and the scale factor (calibration constant) necessary for further processing of the images is obtained. The calibration plate is placed parallel to the light sheet and approximately in the middle of the light sheet. The sheet consists of a grid of dots with a large central dot surrounded by four small dots; the distance between two dots on the calibration sheet is 5 mm.
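As a rough illustration of this scheme (not the actual 'Davis' implementation), the sketch below estimates the displacement of one interrogation window pair by FFT-based cross-correlation and converts it to a velocity. The window contents, scale factor and Δt are assumed example values, not data from the experiment.

    import numpy as np

    def window_displacement(win_a, win_b):
        """Integer-pixel displacement of win_b relative to win_a,
        found as the peak of their circular cross-correlation."""
        a = win_a - win_a.mean()          # remove mean intensity
        b = win_b - win_b.mean()
        # Correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b))
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        corr = np.fft.fftshift(corr)      # put zero displacement at the center
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = (corr.shape[0] // 2, corr.shape[1] // 2)
        return peak[0] - center[0], peak[1] - center[1]   # (dz, dx) in pixels

    # Assumed example values, not measurements from the experiment:
    rng = np.random.default_rng(0)
    frame_a = rng.random((32, 32))                    # interrogation window, frame 1
    frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))  # same particles shifted 3, -2 px
    dz, dx = window_displacement(frame_a, frame_b)    # -> (3, -2)

    scale = 1.2e-4   # metres per pixel, from the calibration plate (assumed)
    dt = 0.9         # separation time in seconds (900 ms)
    w, u = dz * scale / dt, dx * scale / dt           # velocity components, m/s

Real PIV software refines this with sub-pixel peak fitting and window overlap, but the core of the measurement is exactly this correlation-peak search repeated over every interrogation window.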
Images are captured with the calibration sheet placed in the light sheet. These images are analyzed by the computer software 'Davis', and the scale factor is obtained by comparing the apparent distance between the dots recorded by the CCD camera with the actual distance of 5 mm between the dots. Once the sheet is in place, the camera is focused so that all dots appear sharp before the images are captured.

The velocity fields obtained by PIV can also be used to determine the pressure fields. Pressure and velocity are linked by the Navier–Stokes (NS) equations, and the pressure can therefore be measured indirectly by measuring the velocity field. There exist two methods to do so: the first is direct spatial integration of the momentum equations [21, 22]; the second is solving a Poisson equation for the pressure field [23]. The present study does not include this velocity-to-pressure reconstruction.
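For reference (this is standard incompressible-flow theory rather than a formula given in the paper), the Poisson equation in question follows from taking the divergence of the incompressible Navier–Stokes equations:

$$\nabla^{2} p = -\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right],$$

which in the 2D (x, z) plane with velocity components (u, w) reduces to

$$\nabla^{2} p = -\rho\left[\left(\frac{\partial u}{\partial x}\right)^{2} + 2\,\frac{\partial w}{\partial x}\frac{\partial u}{\partial z} + \left(\frac{\partial w}{\partial z}\right)^{2}\right],$$

so the right-hand side can be evaluated entirely from PIV velocity gradients and the equation then solved numerically for p.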
PIV results and discussion

Fishes generate propulsive forces, maneuver rapidly and stabilize their body motions using fins such as the pectoral, dorsal, pelvic, anal and caudal fins (see Fig. 1). By using their fins, fishes can control roll, pitch and yaw motions. The paired pectoral fins (one on each side) are used for maneuvering as well as for instantaneous stopping (braking). The median dorsal fins act as keels, used for directional stability and to prevent spinning or rolling. The pelvic fins and anal fins are used as stabilizers. The caudal fin is used for propulsion, maneuvering and braking.

The flow visualization experiments are carried out on a freely swimming sub-carangiform shark fish in the longitudinal vertical (XZ) plane and the transverse (YZ) plane using two-dimensional particle image velocimetry. The flows around the fins of the freely swimming fish are analyzed, and the velocity vector fields are presented here. In this analysis, a raw CCD (charge-coupled device) image and the processed image are presented at different time intervals (Figs. 7–18). The white boundary line represents the body of the fish; the primary vortex regions are marked V1, V2 in the images.

Figure 7 shows the CCD image and velocity vector field around the caudal fin at Δt = 900 ms. The caudal fin possesses a thrust-producing wake, resembling a reverse von Karman vortex street, in which the upper row vortices rotate anticlockwise and the lower row vortices rotate clockwise. Fishes are able to generate thrust depending on the amplitude and frequency of oscillation of the caudal fin: by varying the frequency and amplitude of fin oscillation, fishes can keep the fin oscillation in the Strouhal number range of 0.2–0.5 to attain a propulsive force. The fish can move the caudal fin in both translational (sway) and rotational (yaw) modes for efficient propulsion. The flow around the caudal fin of the steadily swimming fish, with counter-rotating vortices in the vertical plane, is shown in Fig. 7. The center of the tail-shed vortices appears inclined at about 45° to the centerline. During steady swimming, fishes orient the body at an angle to the flow, and the propulsive force generated by the caudal fin movement is directed through the center of mass.

Figures 8 and 9 show the CCD image and velocity vector field around the adipose and anal fins at Δt = 900 ms. In the PIV experiments, the images are taken with a time difference between two consecutive images of Δt = 300 and 900 ms; the image qualities are found to be acceptable in both cases. The adipose and anal fins generate vortices that pass downstream and interact with the caudal fin vortices as the tail flaps from the starboard side to the port side; these are found to form stronger vortices, thus helping in the generation of an improved propulsive force.

Figure 10 shows the CCD image and velocity vector field around the adipose and anal fins at Δt = 300 ms. The orientation of the caudal fin in this figure shows the flexibility present in its movements. The jets produced by the adipose and anal fins are observed in the peduncle region (the region connecting the tail and the body), and a pair of counter-rotating vortices is observed at the posterior end of the fish.

Figure 11 shows the CCD image and velocity vector field around the anterior portion of the fish. At low amplitudes and frequencies of the caudal fin, when the Strouhal number (St) is less than 0.2, the shed vortices turn inward and the fish experiences drag due to these vortices; this wake resembles a von Karman vortex street, shown in Fig. 11.

Figure 12 shows the CCD image and velocity vector field around the pectoral fins. These paired fins undergo deformation during their flapping cycle: chordwise and spanwise deformations as well as twisting. During both the power stroke and the return stroke, the effective angle of attack of the flow with the fin increases, thereby producing thrust in both strokes.

Figure 13 shows the CCD image and velocity vector field around the dorsal fins. The dorsal fins generate strong vortices; flow leaving the dorsal and anal fins rolls up and then interacts with the caudal fin vortices.

Figures 14 and 15 show the CCD image and velocity vector fields around the caudal fin during the starboard stroke in the YZ plane at Δt = 900 ms; a pair of counter-rotating vortices is generated around the caudal fin in this plane. Figure 16 shows the CCD image and velocity vector field around the caudal fin in the YZ plane at Δt = 900 ms, while the fin is at the center plane; a high-velocity jet is observed at the top of the caudal fin. Figure 17 shows the CCD image and velocity vector field around the caudal fin during the portside stroke in the YZ plane at Δt = 900 ms. Figure 18 shows the flow around a maneuvering fish: during maneuvering, jets are observed at the side of the fish, causing an instantaneous turning moment.

Summary and conclusions

The flow visualization experiments are carried out on a freely swimming freshwater black shark using two-dimensional particle image velocimetry in the longitudinal vertical (XZ) and transverse (YZ) planes. The velocity vector fields show that both the paired fins (pectoral fins) and the median fins (dorsal, anal and caudal fins) produce reverse von Karman vortices, resulting in flow jets and consequent thrust (propulsive force). It is also observed that fin flexibility in the chordwise and spanwise directions substantially improves the thrust generation and direction control of the fish. The anal fin and caudal fin vortices presented here show that they also contribute to the propulsive force. By studying the flow velocity distribution around fish fin propulsion systems, one can design flapping foil propulsion systems for ships and underwater vehicles.

References

1. Politis GK, Belibasakis KA. High propulsive efficiency by a system of oscillating wing tails. In: CMEM, WIT conference; 1999.
2. Anderson JM, et al. Oscillating foils of high propulsive efficiency. J Fluid Mech. 1998;360:41–72.
3. Müller UK, Van Den Heuvel BLE, Stamhuis EJ, Videler JJ. Fish foot prints: morphology and energetics of the wake behind a continuously swimming mullet (Chelon labrosus Risso). J Exp Biol. 1997;201:2893–906.
4. Muller UK, Stamhuis EJ, Videler JJ. Hydrodynamics of unsteady fish swimming and the effects of body size: comparing the flow fields of fish larvae and adults. J Exp Biol. 2000;203(2):193–206.
5. Drucker EG, Lauder GV. Locomotor function of the dorsal fin in teleost fishes: experimental analysis of wake forces in sunfish. J Exp Biol. 2001;204(17):2943–58.
6. Sakakibara J, Nakagawa M, Yoshida M. Stereo-PIV study of flow around a maneuvering fish. Exp Fluids. 2004;36(2):282–93.
7. Stamhuis E, Videler J. Quantitative flow analysis around aquatic animals using laser sheet particle image velocimetry. J Exp Biol. 1995;198(2):283–94.
8. Videler JJ. Fish swimming, vol. 10. Berlin: Springer; 1993.
9. Wakeling JM, Johnston IA. Body bending during fast-starts in fish can be explained in terms of muscle torque and hydrodynamic resistance. J Exp Biol. 1999;202(6):675–82.
10. Webb PW. Hydrodynamics and energetics of fish propulsion. Bull Fish Res Board Can. 1975;190:1–159.
11. Webb PW, Blake RW. Swimming. In: Hildebrand M, editor. Functional vertebrate morphology. Cambridge: Harvard University Press; 1983.
12. Weihs D. A hydrodynamic analysis of fish turning manoeuvres. Proc R Soc Lond B. 1972;182:59–72.
13. Westerweel J. Fundamentals of digital particle image velocimetry. Meas Sci Technol. 1997;8(12):1379.
14. Wolfgang MJ, et al. Near-body flow dynamics in swimming fish. J Exp Biol. 1999;202(17):2303–27.
15. Wu YT. Hydromechanics of swimming propulsion part 1: swimming of a two-dimensional flexible plate at variable forward speeds in an inviscid fluid. J Fluid Mech. 1971;46:337–55.
16. Blake RW. Fish locomotion. Cambridge: Cambridge University Press; 1983.
17. Breder CM. The locomotion of fishes. New York: New York Zoological Society; 1926.
18. Babu MNP, Krishnankutty P, Mallikarjuna JM. Experimental study of flapping foil propulsion system for ships and underwater vehicles and PIV study of caudal fin propulsors. In: Autonomous underwater vehicles (AUV), 2014 IEEE/OES; 6–9 Oct 2014. p. 1–7.
19. Gero DR. The hydrodynamic aspects of fish propulsion. Am Mus Novitates. 1952;1601:1–32.
20. http://www.lavision.de/en. Accessed 20 May 2014.
21. van Oudheusden B, Scarano F, Roosenboom E, Casimiri E, Souverein L. Evaluation of integral forces and pressure fields from planar velocimetry data for incompressible flows. Exp Fluids. 2007;43:153–62.
22. de Kat R, van Oudheusden B, Scarano F. Instantaneous planar pressure field determination around a square-section cylinder based on time-resolved stereo-PIV. In: 14th international symposium on applications of laser techniques to fluid mechanics; Lisbon, Portugal; 2008.
23. Gurka R, Liberzon A, Hefetz D, Rubinstein D, Shavit U. Computation of pressure distribution using PIV velocity data. In: 3rd international workshop on PIV; Santa Barbara, CA, US; 1999. p. 101–6.
24. Drucker EG, Lauder GV. Wake dynamics and fluid forces of turning maneuvers in sunfish. J Exp Biol. 2001;204:431–42.

Authors' contributions
All authors were equally involved in the study and preparation of the manuscript. All authors read and approved the final manuscript.

Acknowledgements
The authors would like to thank the Department of Ocean Engineering, IIT Madras, India, for providing support for this project.

Competing interests
The authors declare that they have no competing interests.

About this article

Cite this article
Babu, M.N.P., Mallikarjuna, J.M. & Krishnankutty, P.
Hydrodynamic study of freely swimming shark fish propulsion for marine vehicles using 2D particle image velocimetry. Robot. Biomim. 3, 3 (2016). https://doi.org/10.1186/s40638-016-0036-0

Keywords: Carangiform swimming; Caudal fin locomotion; Flow visualization; Propulsor hydrodynamics; Particle image velocimetry; Pectoral fins; Reverse von Karman vortex street
Source: https://jrobio.springeropen.com/articles/10.1186/s40638-016-0036-0
My blog has moved to cournape.github.io

I am glad to see discussions about the problem of distributing python programs in the wild. A recent post by Glyph articulates the main issues better than I could. The developers vs end-users focus is indeed critical, as is making the platform an implementation detail. There is one solution that Glyph did not mention: the freeze tool in python itself. While not for the faint of heart, it allows building a single, self-contained executable. Since the process is not really documented, I thought I would do it here.

Setting up a statically linked python

The freeze tool is not installed by default, so you need to get it from the sources, e.g. one of the source tarballs. You also need to build python statically, which is itself a bit of an adventure. I prepared a special build of static python on OS X which statically links sqlite (3.8.11) and ssl (1.0.2d), both from homebrew.

Building a single-file, hello world binary

Let's say you have a script hello.py; any minimal script will do, e.g. the single line print("hello world"). To freeze it, simply do as follows:

    <static-python>/bin/python <static-python>/lib/python2.7/freeze/freeze.py hello.py
    make

You should now have an executable called hello of approximately 7-8 MB. This binary should be relatively portable across machines, although in this case I built the binary on Yosemite, so I am not sure whether it would work on older OS X versions.

How does it work ?

The freeze tool works by bytecompiling every dependent module, and creating a corresponding .c file containing the bytecode as a string. Every module is then statically linked into a new executable.
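To make that mechanism concrete, here is a minimal sketch of the bytecode-embedding step. It is an illustration of the idea only, not the actual code of the freeze tool (which generates the .c files for you); the M_hello array name merely mimics freeze's naming convention.

    import marshal

    # Compile a module's source to a code object, as freeze does for every
    # module the frozen script depends on.
    source = 'print("hello world")\n'
    code = compile(source, "hello.py", "exec")

    # Serialize the code object to a byte string; this is the string that
    # ends up embedded in a generated .c file as a static array.
    frozen = marshal.dumps(code)
    c_array = ", ".join(str(b) for b in bytearray(frozen))
    print("unsigned char M_hello[] = {%s};" % c_array)

    # At runtime, the frozen executable essentially reverses the process:
    # unmarshal the bytes back into a code object and execute it.
    exec(marshal.loads(frozen))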
I have used this process successfully to build non trivial applications that depend on dozens of libraries. If you want a single executable, the main limitation is that you cannot depend on C extensions. More generally, the main limitations are:

1. you need to statically build python
2. you have to use unix
3. you are not depending on C extensions
4. none of your dependencies uses shenanigans for package data or import

1 and 2 are linked. There is no reason why it should not work on windows, but statically linking python on windows is even less supported than doing it on unix. It would be nice for python itself to support static builds better.

3 is one of the features that has been solved over and over by the multiple freezer tools. It would be nice to get a minimal, well-written library solving this problem. Alternatively, a way to load C extensions from within a file would be even better, but not every platform can do this.

4 is actually the main issue in practice; it would be nice to have a good solution here. Something like pkg_resources, but more hackable/tested.

I would argue that the pieces for a better deployment story in python are there: what is needed is taking the existing pieces to build a cohesive solution.

This is a quick post to show how to build NumPy/SciPy with OpenBlas on Mac OS X. OpenBlas is a recently open-sourced version of Blas/Lapack that is competitive with the proprietary implementations, without being as hard to build as Atlas.

Note: this is experimental, largely untested, and I would not recommend using this for anything worthwhile at the moment.

After checking out the sources from github, I had the most luck building openblas with a custom-built clang (I used llvm 3.1). With the apple-provided clang, I got some errors related to unsupported opcodes (fsubp). With the correct version of clang, building is a simple matter of running make (CPU is automatically detected).

I have just added initial support for customizable blas/lapack in the bento build of NumPy (and scipy). You will need a very recent clone of the NumPy git repo, and a recent bento. The single file distribution of bento is the simplest way to make this work:

    ./bentomaker.py configure --with-blas-lapack-libdir=$OPENBLAS_DIRECTORY --blas-lapack-type=openblas ..
    ./bentomaker.py build -j4 # build with 4 processes in //

Same for SciPy. The code for bento's blas/lapack detection is not very robust nor well tested, so it will likely not work on most platforms.
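One way to check what the build actually picked up, independent of bento: NumPy ships an introspection helper that prints the blas/lapack configuration it was built against (the output format varies between NumPy versions, so treat this as a quick sanity check):

    python -c "import numpy; numpy.show_config()"

If openblas was detected, its library directory should appear in the printed sections; timing a large numpy.dot is another quick way to confirm you are not silently falling back to an unoptimized blas.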
Armin wrote an article on why he loves setuptools, and one of the main takeaways of his text is that one should not replace X with Y without understanding why X was created in the first place. There is another takeaway, though: none of the features Armin mentioned matters much to me. This is not to say they are not important: given the success of setuptools or pip, it would be stupid not to recognize they fulfill an important gap for a lot of people. But while those solutions provide a useful set of features, it is important to realize what they prevent as well. Nick touches this topic a bit on python-dev, but I mean something a bit different here. Some examples:

- First, the way setuptools installs eggs by adding things to sys.path causes a lot of additional stats on the filesystem. In the scientific community (and in corporate environments as well), people often have to use NFS. This can cause imports to take a lot of time (above 1 minute is not unheard of).
- Setuptools monkey patches distutils. This has a serious consequence for people who have their own distutils extensions, since you essentially have to deal with two code paths for anything that setuptools monkey patches.

As mentioned by Armin, setuptools had to do the things it did to support multi-versioning. But this means that it has a significant cost for people who do not care about having multiple versions of the same package. This matters less today than it used to, though, thanks to virtualenv, and to pip installing things as non-eggs. A similar argument can be made about monkey-patching: distutils is not designed to be extensible, especially because of how commands are tightly coupled together. You effectively can NOT extend distutils without monkey-patching it significantly.

A couple of years ago, I decided that I could not put up with numpy.distutils extensions and the aforementioned distutils issues anymore. I started working on Bento sometime around fall 2009, with the intent to bootstrap it by reusing the low-level distutils code, and getting rid of commands and distribution. I also wanted to experiment with simpler solutions to some more questionable setuptools designs, such as data resources with pkg_resources.

I think hackable solutions are the key to helping people solve their packaging problems. There is no solution that will work for everyone, because the usecases are so different and clash with each other. Personally, having a system that works like apt-get (reliable and fast metadata search, reliable install/uninstall, etc…) is the holy grail, but I understand that that's not what other people are after. What matters the most is to only put in the stdlib what is uncontroversial and battle-tested in the wild. Tarek's and the rest of the packaging team's efforts to specify and write PEPs around the metadata are a very good step in that direction.

The PEP for metadata works well because it essentially specifies things that have been used successfully (and are relatively uncontroversial). But an elusive PEP around compilers, as has been suggested, is not that interesting IMO: I could write something to point out every API issue with how compilation works in distutils, but that sounds pointless without a proposal for a better system. And I don't want to design a better system, I want to be able to use one (waf, scons, fbuilt, gyp, whatever). Writing bento is my way of discovering a good design to do just that.

From the beginning, it was clear that one of the major hurdles for bento would be the transition from distutils. This is a hard issue for any tool trying to improve existing ones, but even more so for distribution/packaging tools, as it impacts everyone (developers and users of the tools). Since almost day one, bento has had some basic facilities to convert existing distutils projects into bento.info. I have now added something to do the exact contrary, that is, maintaining some distutils extensions which are driven by bento.info. Concretely, it means that if you have a bento package, you can write something like:

    import setuptools # this comes first so that setuptools does its monkey dance
    import bento.distutils # this monkey patches on top of setuptools

    setuptools.setup()

as your setup.py, and it will give the "illusion" of a distutils package. Of course, it won't give you all the goodies given by bento (if it could, I would not have written bento in the first place), but it is good enough to enable the following:

- installing through the usual "python setup.py install"
- building source distributions
- more significantly: it will make your package easy_install-able/pip-able
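Concretely, with such a setup.py in place, the usual distutils-style invocations should work on a bento package (a sketch of the expected usage, not commands whose output is reproduced here):

    python setup.py install      # regular distutils-style install
    python setup.py sdist       # build a source distribution
    pip install /path/to/package  # pip sees an apparently distutils-based package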
Although the API is not there yet, this will enable arbitrary insertion of new commands between existing commands, without the need to monkey-patch anything.

Virtualenv support

If a bento package is installed under virtualenv, the package will be installed inside the virtualenv by default:

virtualenv .env
source .env/bin/activate
bentomaker install # this will install the package inside the virtualenv

Of course, if the install path has been customized (through prefix/eprefix), those take precedence over the virtualenv.

List files to be installed

The install command can optionally print the list of files to be installed and their actual installation paths. This can be used to check where things are installed. By design, this list is exactly what bento would install, so it is harder to run into weird corner cases where the list and what is actually installed differ.

First steps toward uninstall

An initial "transaction-based" install is available: in this mode, a transaction log is generated, which can be used to roll back an install. For example, if the install fails in the middle, the already installed files will be removed to keep the system in a clean state. This is a first step toward uninstall support; a minimal sketch of the idea follows below.
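The transaction-based install can be illustrated with a small sketch (hypothetical code, far simpler than what bento actually does): record every file as it is written, so that a failed install can be rolled back.

    import json, os, shutil

    def install_files(pairs, log_path="install-log.json"):
        """Copy (source, destination) pairs, keeping a transaction log so a
        failed install can be rolled back to a clean state."""
        installed = []
        try:
            for src, dst in pairs:
                if os.path.dirname(dst):
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                installed.append(dst)
                with open(log_path, "w") as f:
                    json.dump(installed, f)  # log updated after every file
        except Exception:
            for path in reversed(installed):
                os.remove(path)  # roll back the files already copied
            if os.path.exists(log_path):
                os.remove(log_path)
            raise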
Refactoring to help using waf inside bento

Bento's internals have been improved to enable easier customization of the build tool. I have a proof of concept where bento can be customized to use waf to build extensions. The whole point is to be able to do so without changing bento's code itself, of course. The same scheme can be used to build extensions with distutils (for compatibility reasons, to help complex packages move to bento one step at a time).

Bentoshop: a framework to manage installed packages

I am hoping to have at least a proof of concept for a package manager based around bento for PyCon 2011. As already stated on this blog, there are a few non-negotiable features that the design must follow:

- Robust by design: things that can be installed can be removed; avoid synchronisation issues between metadata and installed packages
- Transparent: it should play well with native packaging tools and not get in the way of anyone's workflow
- No support whatsoever for multiple versions: this can be handled with virtualenv for trivial cases, and through native "virtualization" schemes when virtualenv is not enough (chroot for filesystem "virtualization", or actual virtual machines for more)

This means PEP 376 is out of the question (it breaks points 1 and 4). I will build a first proof of concept following the Haskell (Cabal) and R (CRAN) systems, but backed with a db for performance. The main design issue is point 2: ideally, one would want a user-specific, python-specific package manager to be aware of packages installed through the native system, but I am not sure that is really possible without breaking the other points.

Getting this error on a new chef client:

/usr/lib/ruby/1.8/net/http.rb:2101:in `error!': 404 "Not Found" (Net::HTTPServerException)

is actually caused by having an old chef-client. It took me a while to realize, and google was not that helpful.

I have just submitted a talk proposal for bento at PyCon 2011. If accepted, the talk will be a good deadline to get a first alpha ready. In the meantime, I have added Windows support, and I can now build numpy on Windows 64 bits with the MKL library. There are still a few rough edges, but I think bento will soon be on par with numscons as far as supported platforms go.

Disclaimer: I am working on a project which may be seen as a competitor to the distutils2 effort, and I am quite biased against the existing packaging tools in python. On the other hand, I know distutils extremely well, having maintained the numpy.distutils extensions for several years, and most of my criticisms should stand on their own.

There is a strong consensus in the python community that the current packaging tools (distutils) are too limited. There have been various attempts to improve the situation, through setuptools, the distribute fork, etc. Beginning this year, the focus has shifted toward distutils2, which is scheduled to be part of the stdlib for python 3.3, while staying compatible with python 2.4 onwards. A first alpha has been released recently, and I thought it was a good occasion to look at what has happened in that space. As far as I can see, distutils2 had at least the three following goals:

- standardize a lot of setuptools practices through PEPs and implement them
- refactor distutils code and add a test suite with significant coverage
- get rid of setup.py for most packages, while adding hooks for people who need to customize their build/installation/deployment process

I won't discuss much about the first point: most setuptools features are useless to the scipy community, and are generally poor reimplementations of existing solutions anyway. As far as I can see, the third point is still being discussed, and is not present in the mainline. The second point is more interesting: distutils code quality was pretty low, but the main issue was (and still is) the overall design. Unfortunately, adding tests does not address the reliability issues which have plagued the scipy community (and, I am sure, other communities as well). The main issues w.r.t. build and installation remain:

- Unreliable installation: distutils installs things by simply copying trees built into a build directory (build/ by default). This is a problem when you decide to change your source code (e.g. renaming some modules), as distutils will add things to the existing build tree, and hence install will copy both old and new targets. As with distutils, the only way to get a reliable build is to first rm -rf build. This alone is a consistent source of issues for numpy/scipy, as many end-users are bitten by it. We somewhat alleviate this by distributing binary installers (which know how to uninstall things, and are built by people familiar with distutils idiocy).
- Inconsistencies between compiler classes. For example, the MSVCCompiler class defines its compiler executable as a string, set as the attribute cc. On the other hand, most other compiler classes define the compiler_so attribute (which is a list in that case). They also don't have the same interface.
- No consistent, centralized API to obtain basic compilation options (CC, CFLAGS and the like).
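The compiler-class inconsistency is easy to see from an interpreter; a small illustration follows (the attribute details match my memory of the distutils of that era, so treat them as an assumption):

    # On a unix-like platform, the compiler command is stored as a *list*:
    from distutils.unixccompiler import UnixCCompiler

    cc = UnixCCompiler()
    print(cc.compiler_so)  # e.g. ['cc'] -- a list attribute
    # MSVCCompiler, by contrast, stores a plain *string* (self.cc, set to
    # "cl.exe" in its initialize() method on Windows), so generic code cannot
    # treat the two classes uniformly.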
Even more significantly, it means that the fundamental issue of extensibility has not been addressed at all, because the command-based design is still there. This is by far the worst part of the original distutils design, and I fail to see the point of a backward-incompatible successor to distutils which does not address this issue.

Issues with command-based design

Distutils is built around commands, which correspond almost one-to-one to command-line commands: when you do "python setup.py install", distutils will essentially call the install command's run method after some initialization. This by itself is a relatively common pattern, but the issue lies elsewhere.

First, each command has its own set of options, but the options of one command often affect the other commands, and there is no easy way for one command to know the options of another. For example, you may want to know the options of the install command at build time. The usual pattern is to take the command whose options you want to know, instantiate it and get its options, using e.g. get_finalized_command:

install = self.get_finalized_command("install")
install_lib = install.install_lib

This is hard to use correctly because every command can be reset by other commands, and some commands cannot be instantiated this way depending on the context. Worse, this can cause unexpected issues later on if you are calling a command which has not already been run (like the install command from within a build command). Quite a few subtle bugs in setuptools and in numpy.distutils were/are caused by this. According to Tarek Ziade (the main maintainer of distutils2), this is addressed in a distutils2 development branch. I cannot comment on it, as I have not looked at the code yet.

Distutils has a notion of commands and "sub-commands". Sub-commands may override each other's options through the set_undefined_options function, which creates new attributes on the fly. This is every bit as bad as it sounds. Moreover, the hardcoding of dependencies between commands and sub-commands significantly hampers extensibility. For example, in numpy, we use some templated source files which are processed into .c files: this is done in the build_src command. Now, because the build command of distutils does not know about build_src, we need to override build as well to call build_src. Then came setuptools, which of course did not know about build_src, so we had to conditionally subclass from setuptools to run build_src too. Every command which may potentially trigger this command may need to be overridden, with all the complexity that follows. This is completely insane.
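To make the build_src situation concrete, here is roughly what the workaround looks like: a simplified sketch in the spirit of what numpy.distutils does, not the actual numpy code.

    from distutils.command.build import build as _build

    class build(_build):
        # distutils' build command has no idea build_src exists, so run() has
        # to be overridden to chain it in. The same dance must then be
        # repeated for every other command (setuptools' build, bdist, ...)
        # that can trigger a build.
        def run(self):
            self.run_command("build_src")
            _build.run(self)

    # A setup.py would then wire it in via setup(..., cmdclass={"build": build}).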
Distutils2 has added the notion of hooks, which are functions to be run before/after the command they hook into. But because they interact with distutils2 through the command instances, they share all the issues mentioned above, and I suspect they won't be of much use.

More concretely, let's consider a simple example: a file generated from a template (say config.pkg.in), containing some information only known at runtime (like the version and build time). Doing this correctly is surprisingly involved:

- you need to generate the file in a build command, and put it at the right place in the build directory
- you need to install it at the right place (in-place vs normal build, egg install vs non-egg install vs externally_managed install)
- you may want to automatically include the version.py.in in sdist
- you may want the file to be installed in bdist/msi/mpkg, so you may need to know all the details of those commands

Each of these steps may be quite complex and error-prone. Some are impossible with a simple hook: it is currently impossible to add files to sdist without rewriting the sdist.run function, AFAIK. To deal with this correctly, the whole command business needs a significant redesign. Several extremely talented people in the scipy community have independently attempted to improve this over the last decade or so, without any success. Nothing short of a rewrite will work there, and commands constitute a good third of distutils code.

distutils2 does not improve the situation w.r.t. building compiled code, but I guess that's relatively specific to big packages like numpy, scipy or pywin32. Needless to say, the compiler classes are practically impossible to extend (they don't even share a consistent interface), and very few people know how to add support for new compilers, new tools or new binaries (ctypes extensions, for example).

Overall, I don't quite understand the rationale for distutils2. It seems that most setuptools standardization could have happened without breaking backward compatibility, and the improvements are too minor for people with significant distutils extensions to switch. Certainly, I don't see myself porting numpy.distutils to distutils2 anytime soon.

Note: it should be noted that most setuptools issues are really distutils issues, in the sense that distutils does not provide the right abstractions to build upon.

I have just released the new version of Bento, 0.0.4. You can get it on github as usual. Bento itself did not change too much, except for the support of sub-packages and a few other things. But bento can now build both numpy and scipy on the "easy" platforms (linux + Atlas + gcc/clang). This post shows a few cool things that you can do now with bento.

Full distribution check

The best way to use this version of bento is to do the following:

# Download bento and create bentomaker
git clone http://github.com/cournape/Bento.git bento-git
cd bento-git && python bootstrap.py && cd ..
# Download the _bento_build branch from numpy
git clone http://github.com/cournape/numpy.git numpy-git
cd numpy-git && git checkout -b bento_build origin/_bento_build
# Create a source tarball from numpy, then configure, build and test numpy from that tarball
../bento-git/bentomaker distcheck

For some reason I am still unclear about, the test suite fails to run from distcheck for scipy, but that seems to be more of a nose issue than a bento one.

Building numpy with clang

Assuming you are on Linux, you can try to build numpy with clang, the LLVM-based C compiler. Clang is faster at compiling than gcc, and generally gives better error messages. Although bento itself does not have any support for clang yet, you can easily play with the bento scripts to do so. In the top bscript file from numpy, at the end of the post_configure hook, replace every compiler with clang, i.e.:

for flag in ["CC", "PYEXT_CC"]:
    yctx.env[flag] = ["clang"]

Once the project is configured, you can get a detailed look at the configured options in the file build/default.env.py. You should not modify this file, but it is very useful for debugging build issues. Another aid for debugging configuration options is the build/config.log file. Not only does it list every configuration command (both successes and failures), it also shows the source content as well as the command output.
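For reference, here is roughly where that clang override lives in numpy's top-level bscript. The hook decorator and the context attribute below are my assumptions reconstructed from the description, not verified bento API:

    from bento.commands import hooks  # assumed import path

    @hooks.post_configure
    def post_configure(context):
        # 'context' is assumed to expose the yaku configure context used above
        yctx = context.yaku_configure_context  # assumed attribute name
        for flag in ["CC", "PYEXT_CC"]:
            yctx.env[flag] = ["clang"]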
What's coming next?

Version 0.0.5 will hopefully have a shorter release cycle than 0.0.4. The goal for 0.0.5 is to make bento good enough that other people can jump into bento development. The main features I am thinking about are windows and python 3 support, plus a lot of code cleaning and documentation. Windows should not be too difficult: it is mainly about ripping the Visual Studio support out of numscons/scons and adapting it to yaku. I have already started working on python 3 support as well – the main issue is bootstrapping bento, and finding an efficient process for working on both python 2 and 3 at the same time.

Depending on the difficulty, I will also try to add proper dependency handling in yaku for compiled libraries and dependent headers: at the moment, yaku does not detect header changes, nor does it rebuild an extension if the linked libraries have changed. An alternative is to bite the bullet and start working on integration with waf, which already does all of this internally.
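To spell out what "detecting header changes" involves, here is a naive sketch of the missing behaviour (hypothetical code, not yaku's): rebuild whenever the object file is older than the source or any header the source includes directly.

    import os, re

    INCLUDE_RE = re.compile(r'\s*#include\s+"([^"]+)"')

    def needs_rebuild(src, obj, include_dirs=(".",)):
        """True if obj is missing, or older than src or any header included
        with #include "..." syntax (no recursive scanning)."""
        if not os.path.exists(obj):
            return True
        newest = os.path.getmtime(src)
        with open(src) as f:
            for line in f:
                m = INCLUDE_RE.match(line)
                if not m:
                    continue
                for d in include_dirs:
                    hdr = os.path.join(d, m.group(1))
                    if os.path.exists(hdr):
                        newest = max(newest, os.path.getmtime(hdr))
                        break
        return newest > os.path.getmtime(obj)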
Adam and Anthropology

Denis Alexander addresses the theological issues associated with an understanding of Adam who was not the sole genetic ancestor of all of humanity.

How Does a BioLogos Model Need to Address the Theological Issues Associated with an Adam Who Was Not the Sole Genetic Progenitor of Humankind?

The question in the title of this paper raises an initial question: in general, how should we go about the task of relating theological truths to current scientific theories? Theological truths revealed in Scripture are eternal infallible truths, valid for the whole of humanity for all time, although human interpretations of Scripture are not infallible and may change with time over issues that are not central to the Gospel. Scientific theories, by contrast, represent the current 'inference to the best explanation' for certain phenomena as judged by the scientific community, based on criteria such as the interpretation of observations, experimental results, mathematical elegance and the ability of theories to generate fruitful research programmes. Scientific theories are not infallible and will certainly change. However, change does not necessarily imply replacement. Usually scientific theories are not replaced, but modified. In this respect they are often likened to maps that incorporate many different types of data: the maps are revised, as required, to incorporate new data and are improved in the process.

Scientists sometimes use the word 'model' to propose one big idea, or a cluster of ideas, that together help to explain certain scientific data. To the despair of philosophers of science, the use of such words in scientific discourse can lack precision. The word 'model' is a case in point, its use sometimes overlapping with the term 'theory'. Usually, however, 'model' has a more focused meaning: the way in which certain sets of data can be rendered coherent by explaining them in terms of a physical, mathematical or even metaphorical representation.

During the early 1950s there were several rival models describing the structure of DNA, the molecule that encodes genes. Linus Pauling proposed a triple-helix model. But Jim Watson and Francis Crick had the huge advantage that they obtained the X-ray diffraction results of DNA in advance of publication from another scientist called Rosalind Franklin. The double helix was in fact the only model that would incorporate all the data satisfactorily, as Watson and Crick published in their famous one-page Nature paper in 1953. Since that time everyone has known that DNA is a double helix; it is really not a triple helix or some other structure. In science, models are very powerful.

Not all scientific models win the day so decisively. For many years in my own field of immunology there were endless discussions about how the class of white blood cells known as 'T cells' are educated within the body to attack foreign invaders but not (usually) to attack 'self', meaning our own tissues. Those discussions are now virtually over, because the general model that has emerged explains most of the data quite well, bringing into the story research results from many different laboratories. But the successful model that prevails is far more 'messy' than the exceptionally elegant double-helical model for DNA. The most successful models are not necessarily the simplest. The best models are those that explain the data adequately. Sometimes rival models exist for long periods of time in the scientific literature because they explain the data equally well.
In that case a given model is said to be 'under-determined by the data'. Everyone agrees with the data that do exist – the disagreement is about how to fit the data together to create the best model. Eventually new data emerge that count in favor of one model rather than another, or that decisively refute a particular model.

When we come to the question as to what 'BioLogos model' might best address the relationship between the Adam of Genesis and the anthropological and genetic account of a humanity that did not have a single couple as the source of its genetic endowment, we need to keep in mind these various ways in which the term 'model' is deployed in scientific discourse. We will start with an initial ground-clearing question: "Is model-building appropriate in relating theological and scientific truths?" and, having given an affirmative answer to this question, we will then go on to consider what model might be the most appropriate for relating the theological and scientific narratives.

Is model-building appropriate?

There are some who would maintain that the truths presented by the early chapters of Genesis are theological truths that are valid independently of any particular anthropological history. The purpose of the Genesis texts is to reveal the source of creation in the actions of the one true God who has made humanity uniquely in His image. The Genesis 3 narrative of man's disobedience is the 'story of everyman'. We have all sinned and fallen short of the glory of God, and this passage presents this truth in a vivid narrative style that is about theology rather than history. Those who adopt this position may also point to the dangers of a 'concordist' view of biblical interpretation. The term 'concordism' (in its traditional sense) generally refers to the attempt to interpret Scripture inappropriately using the assumptions or language of science. Calvin famously countered such tendencies in his great Commentary on Genesis, remarking on Chapter 1: "Nothing is here treated of but the visible form of the world. He who would learn astronomy and other recondite arts, let him go elsewhere."

But the term 'concordism' is also sometimes stretched to include virtually any attempt to relate biblical and scientific truths. Such a critique appears to be a step too far, for in that case our theology becomes too isolated from the world, contrasting with the famous 'two books' analogy in which the Book of God's Word, the Bible, and the Book of God's Works, the created order, both speak to us in their distinctive ways about the same reality. This powerful analogy has held sway for many centuries in the dialogue between science and faith, and the challenge is to see how the two 'Books' speak to each other, for all truth is God's truth. Building models to relate biblical texts to science requires no concordist interpretations of the text (in the traditional sense of the word 'concordist'). The disciplines of both science and theology should be accorded their own integrity. The Genesis texts should be allowed to speak within their own contexts and thought-forms, which are clearly very distant from those of modern science. We can all agree that the early chapters of Genesis exist to convey theology and not science. The task of models is then to explore how the theological truths of Genesis might relate to our current scientific understanding of human origins. The models that we propose are not the same as the 'data'.
On one hand we have the theological data provided by Genesis and the rest of Scripture, true for all people throughout time. Uncertainty here arises only from doubt as to whether our interpretations of the text are as solid as they can be. On the other hand we have the current scientific data, which are always open to revision, expansion or better interpretation. Nevertheless the data are overwhelmingly supportive of certain scientific truths, for example that we share a common genetic inheritance with the apes. The role of models is to treat both theological and scientific truths seriously and see how they might 'speak' to each other, but we should never defend a particular model as if we were referring to the data itself. The whole point of any model is that it represents a human construct that seeks to relate different types of truth; models are not found within the text of Scripture – the most that we can expect from them is that they are 'consistent with' the relevant Biblical texts. Let us never confuse the model with the truths that it seeks to connect to each other.

In practice any western reader of the Genesis text, raised in a culture heavily influenced by the language and thought-forms of science, can hardly avoid the almost instinctive tendency to build models or pictures in their heads as to what they might have observed had they been there when 'it' happened. This is the case irrespective of whether someone comes to the text as a young earth creationist, an old earth creationist, or some kind of theistic evolutionist. Given that we all tend to build models anyway, we might as well ensure that the model we do maintain has been thoroughly subjected to critical scrutiny. This is important not only for our own personal integrity, but also in the pastoral context in which we seek to avoid unnecessary cognitive dissonance in the minds of those under our pastoral care.

Models for relating creation theology with anthropology

The last common ancestor between us and the chimpanzee lived around 5-6 million years ago. Since that time we and the apes have been undergoing our own independent evolutionary pathways. Today we have religion; chimps do not. At some stage humanity began to know the one true God of the Scriptures. How and when did that happen?

The emergence of anatomically modern humans

Anatomically modern humans appeared in Africa from about 200,000 years ago. The oldest well-characterised fossils come from the Kibish formation in southern Ethiopia, and their estimated date is 195,000 +/- 5,000 years old. Other well-established fossil skulls of our species have been found in the village of Herto in Ethiopia and date from 160,000 years ago, as established by argon isotope dating. Some limited expansion of our species had already taken place as far as the Levant by 115,000 years ago, as indicated by partial skeletons of unequivocal Homo sapiens found at Skhul and Qafzeh in Israel. But significant emigration out of Africa does not seem to have taken place until after 70,000 years ago, with modern humans reaching right across Asia and on to Australia by 50,000 years ago, then back-tracking into Europe by 40,000 years ago, where they are known as the Cro-Magnon people. By 15,000 years ago they were trickling down into North America across the Bering Strait.
The effective population size of the emigrant population from Africa has been estimated at between 60 and 1220 individuals, meaning that virtually all the world's present non-African populations are descended from this tiny founder population. Even the bugs inside human guts tell the same story, with their genetic variation reflecting the African origins of their hosts. But within Africa different groups of humans were living for at least 130,000 years before the emigration, many of them isolated from each other for long periods of time. Therefore one would expect greater genetic variation between different populations of Africans than between different populations of non-Africans, which is in fact what is observed.

Adam in the Genesis texts

The very first mention of 'Adam' in the Bible comes in Genesis 1:26-27, where the meaning is unambiguously 'humankind'. These verses are reiterated in the opening words of the second toledoth section of Genesis in 5:1-2: "When God created adam, he made him in the likeness of God. He created them male and female and blessed them. And when they were created, he called them adam." So adam can refer to humankind, and it is only adam that is made in the image of God. Then in Genesis 2, enter a king – God's ambassador on earth! But this is a dusty king: "the Lord God formed [Hebrew: yatsar] adam from the adamah [dust of the ground] and breathed into his nostrils the breath of life, and the adam became a living being" [Hebrew: nepesh, breath, soul] (2:7). The very material nature of the creation, including the man, is underlined by verse 9: after placing the man in "a garden in the east, in Eden", God then "made all kinds of trees grow out of the ground [adamah]".

There are many important points packed into these verses. First, there is a perfectly good word for 'man' in Hebrew ('ish), the word most commonly used for man in the Old Testament (in fact 1671 times), so the choice of 'adam' here for man seems a deliberate teaching tool to explain to the reader that adam not only comes from the adamah, but is also given the important task by God of caring for the adamah – earthy Adam is to be God's earth-keeper. Second, we note the use of the definite article in front of adam, so that the correct translation in English is 'the man', and the definite article remains in place all the way through to Genesis 4:25, when Adam without a definite article appears and "lay with his wife again". Personal names in Hebrew do not carry the definite article, so there is a particular theological point being made: here is 'the man', a very particular man, the representative man perhaps of all other men. However we are to understand the use of the definite article, there is no doubt that it is a very deliberate strategy in this tightly woven text, with no fewer than 20 mentions of 'the man' in Genesis chapters 2 and 3. But at the same time there is some ambiguity in the use of the word adam, perhaps an intentional ambiguity, which makes it quite difficult to know when 'Adam' is first used as a personal name. For example, in some verses, instead of the definite article in front of adam, there is what is called in Hebrew an 'inseparable preposition', translated as "to" or "for" in Genesis 2:20, 3:17 and 3:21. Different translations apply their own interpretations of when adam starts being used as the personal name Adam, and these differing interpretations depend on the context.
So it is best not to be too dogmatic about the precise moment in the text when 'the adam', the representative man, morphs into Adam as a personal name. The third important point highlighted in Genesis 2:7 is that "adam became a living being" or, as some translations have it, a "living soul". The language of 'soul' has led some Christians to think that this verse describes an immortal soul implanted in 'the adam' during his creation, but whatever the teaching of Scripture elsewhere on this point, it is difficult to sustain such an idea from this Genesis passage. The Hebrew word used here is nepesh, which can mean, according to context: life, life force, soul, breath, the seat of emotion and desire, a creature or person as a whole, self, body, even in some cases a corpse. In Genesis 1:20, 21 and 24 and in 2:19, exactly the same phrase in Hebrew – 'living nepesh', translated as 'living creatures' – is used for animals as is used here in Genesis 2 for 'the adam'. And we note also that adam became a nepesh; he was not given one as an extra. The text is simply pointing out that the life and breath of adam was completely dependent upon God's creative work, just as it was for the 'living creatures' in Genesis 1. There is certainly no scope for understanding this particular passage as referring to the addition to adam of an immaterial immortal 'soul'.

How do we relate the anthropological understanding with the profound theological essay that the early chapters of Genesis provide for us, with their carefully nuanced presentation of 'Adam'? There are two main models that seek to answer this question, which we will here label as the 'Retelling Model' and the 'Homo divinus Model', for reasons that will become clear in a moment. Both models accept the great theological truths about humankind made in the image of God and about the alienation from God brought about by human sinful disobedience. Both models accept the current anthropological account of human origins. But the models differ markedly in the ways in which they relate these two sets of data. Although personally I favor the second model, our aim here will be to assess the strengths and weaknesses of each model as objectively as possible.

The Retelling Model

The Retelling Model represents a gradualist protohistorical view, meaning that it is not historical in the usual sense of that word, but does refer to events that took place in particular times and locations. The model suggests that as anatomically modern humans evolved in Africa from 200,000 years ago, or during some period of linguistic and cultural development since then, there was a gradually growing awareness of God's presence and calling upon their lives, to which they responded in obedience and worship. The earliest spiritual stirrings of the human spirit were in the context of monotheism, and it was natural at the beginning for humans to turn to their Creator, in the same way that children today seem readily to believe in God almost as soon as they can speak. In this model, the early chapters of Genesis represent a re-telling of this early episode, or series of episodes, in our human history in a form that could be understood within the Middle Eastern culture of the Jewish people of that time.
The model therefore presents the Genesis account of Adam and Eve as a myth in the technical sense of that word – a story or parable having the main purpose of teaching eternal truths – albeit one that refers to real putative events that took place over a prolonged period of time during the early history of humanity in Africa.

Some would wish to press this model further to suggest that the Adam and Eve of the Genesis account do in fact represent the very first members of our species back in the Africa of about 200,000 years ago. This suggestion, however, faces a significant scientific problem. All that we know of the emergence of a new mammalian species suggests that this is a gradual process that may take thousands of years. A reproductively isolated population gradually accumulates a unique ensemble of genetic variants that eventually generates a new species, meaning a population that does not generally interbreed with another population. A new mammalian species does not begin abruptly, and certainly not with one male and one female.

If we keep to the Retelling Model as summarized above, then the Fall is interpreted as the conscious rejection by humankind of the awareness of God's presence and calling upon their lives, in favor of choosing their own way rather than God's way. The Fall then becomes a long historical process happening over a prolonged period of time, leading to spiritual death. The Genesis account of the Fall in this model becomes a dramatised re-telling of this ancient process through the personalised Adam and Eve narrative, placed within a Near Eastern cultural context.

In favor of the Retelling Model is the way in which the doctrine of Adam made in the image of God can be applied to a focused community of anatomically modern humans, all of whose descendants – the whole of humanity since that time – share in this privileged status in the sight of God. Likewise, as this putative early human community turned their backs on the spiritual light that God had graciously bestowed upon them, so sin entered the world for the first time, and it has contaminated humanity ever since. Such an interpretation is made possible by the fact that the very early human community within Africa would have numbered no more than a few hundred breeding pairs. If the Retelling Model is taken as applying to this very early stage of human evolution, prior to the time at which different human populations began to spread throughout different areas of Africa, then these putative events could have happened to the whole of humanity alive at that time.

A further theological point consistent with the Retelling Model is Paul's teaching in Romans 2:14-15 that the Gentiles have the requirements of the law "written on their hearts" even without the specific Old Testament revelation. In like manner, it is suggested, very early humanity knew God as He wrote His law upon their hearts, and it was their disobedience to this light that led to their alienation from God. This in turn left a spiritual vacuum that humankind has been trying to fill ever since with all kinds of different religious beliefs, none of which (outside the Cross) bring about reconciliation with God.

Against the Retelling Model is the way in which it evacuates the narrative of any Near Eastern context, detaching the account from its Jewish roots. If the early chapters of Genesis are about God's dealings with the very early people of God who later came to be called Jews, then Africa is not the direction in which we should be looking.
Much depends on how exactly the Genesis accounts of Adam and Eve are interpreted; on how much weight is placed on the Old Testament genealogies that incorporate Adam as a historical figure (Genesis 5; 1 Chronicles 1) and on the New Testament genealogy that traces the lineage of Christ back to Adam (Luke 3); and on passages such as Romans 5 and 1 Corinthians 15 that are most readily interpreted on the assumption that Adam is understood as a real historical individual. The second model seeks to address these concerns.

The Homo divinus model

Like the Retelling Model, this model represents a protohistorical view in the sense that it lies beyond history as normally understood; like the Retelling Model, it looks for events located in history that might correspond to the theological account provided by the Genesis narrative. But in this case the model locates these events within the culture and geography that the Genesis text provides. According to this model, God in his grace chose a couple of Neolithic farmers in the Near East, or maybe a community of farmers, to whom he chose to reveal himself in a special way, calling them into fellowship with himself – so that they might know Him as the one true personal God. From now on there would be a community who would know that they were called to a holy enterprise, called to be stewards of God's creation, called to know God personally. It is for this reason that this first couple, or community, have been termed Homo divinus, the divine humans, those who know the one true God, the Adam and Eve of the Genesis account. Being an anatomically modern human was necessary but not sufficient for being spiritually alive, as remains the case today. Homo divinus were the first humans who were truly spiritually alive in fellowship with God, providing the spiritual roots of the Jewish faith. Certainly religious beliefs existed before this time, as people sought after God or gods in different parts of the world, offering their own explanations for the meaning of their lives, but Homo divinus marked the time at which God chose to reveal himself and his purposes for humankind for the first time.

The Homo divinus model also draws attention to the representative nature of 'the Adam', 'the man', as suggested by the use of the definite article in the Genesis text, as mentioned above. 'The man' is therefore viewed as the federal head of the whole of humanity alive at that time. This was the moment at which God decided to start his new spiritual family on earth, consisting of all those who put their trust in God by faith, expressed in obedience to his will. Adam and Eve, in this view, were real people, living in a particular historical era and geographical location, chosen by God to be the representatives of his new humanity on earth, not by virtue of anything that they had done, but simply by God's grace. When Adam recognised Eve as "bone of my bones and flesh of my flesh", he was not just recognising a fellow Homo sapiens – there were plenty of those around – but a fellow believer, one like him who had been called to share in the very life of God in obedience to his commands. The world population in Neolithic times is estimated to lie in the range 1-10 million, all of them genetically just like Adam and Eve, but in this model it was these two farmers out of all those millions to whom God chose to reveal himself.
Just as I can go out on the streets of New York today and have no idea, just by looking at people, all of them members of the species Homo sapiens, which ones are spiritually alive, so in this model there was no physical way of distinguishing between Adam and Eve and their contemporaries. It is a model about spiritual life and revealed commands and responsibilities, not about genetics.

How does this model relate to the fact that Adam is made in God's image? If we take Genesis 1 as a kind of 'manifesto' literature that lays down the basic foundations for understanding creation, in turn providing the framework for understanding the rest of the Bible, then the teaching of humankind made in the image of God is a foundational truth valid for the whole of humanity for all time. It is a truth that certainly encompasses the kingly responsibility given to humankind in Genesis 1 to subdue the earth; the truth also has a relational aspect in reflecting human fellowship with God, and the relational implications of what it means to be made in God's image are worked out in Genesis 2, through work, marriage and caring for the earth. Of course, with our western mindset we would like to ask the chronological question: when exactly did the 'image of God' start applying in human history? But the Genesis text is not interested in chronology. Nor does the Homo divinus model as presented here seek to address that particular issue; it simply accepts the fact that the whole of humankind, without any exception, is made in God's image. Instead the model focuses on the event in Genesis 2:7 in which God breathes His breath into Adam so that he becomes a nepesh, a living being who can respond to God's claim upon his life. The model is about how Adam and Eve became responsible children of God, involving a personal relationship with God, obedience to his commands, and the start of God's new family on earth consisting of all those who would come to know him personally. Paul says: "I kneel before the Father, from whom every family in heaven and on earth derives its name" (Ephesians 3:14-15). Families have to start somewhere, and God chose to start his new family on earth with two very ordinary individuals, saved by grace like we are, and sustained by the 'tree of life'.

In this model the Fall then becomes the disobedience of Adam and Eve to the expressed revealed will of God, bringing spiritual death in its wake, a broken relationship between humankind and God. In an extension of this model, just as Adam is the federal head of humankind, so when Adam falls, humankind falls with him. Federal headship works both ways. Just as a hydrogen bomb explodes with ferocious force, scattering radiation around the world, so sin entered the world with the first deliberate disobedience to God's commands, spreading the spiritual contamination of sin around the world. And as with the Retelling Model, the physical death of both animals and humans is seen as happening throughout evolutionary history; both models suggest that it is spiritual death that is the consequence of sin. Genesis 3 provides a potent description of the alienation that humankind suffers as a result of sin, with a fiery barrier separating them from the Tree of Life (3:24). But under the New Covenant the way back to the tree of life is opened up through the atoning work of Christ on the cross: "Blessed are those who wash their robes, that they may have the right to the tree of life and may go through the gates into the city" (Revelation 22:14).
The Homo divinus model has the advantage that it takes very seriously the Biblical idea that Adam and Eve were historical figures, as indicated by the texts already mentioned. It also sees the Fall as an historical event involving the disobedience of Adam and Eve to God's express commands, bringing death in its wake. The model locates these events within Jewish proto-history. For some, however, a disadvantage of the model will be the appeal to the Federal Headship of Adam to satisfy the need to see God's call to fellowship with Him as being open to the whole of humankind and, equally, to see Adam's disobedience as impacting the whole of humankind. The notion of Adam's headship is of course derived from passages such as Romans 5:12 and 17 and 1 Corinthians 15:22, although Romans 5:12 makes it clear that spiritual death came to all men through their actually sinning. Each person is responsible for his or her own sin. The model is therefore not consistent with a strictly Augustinian notion of the inheritance of the sinful nature, but in any case many biblical commentators do not find this notion in Scripture, which emphasizes the fact that all have sinned and fallen short of the glory of God (Romans 3:23), rooting that fact in Adam's sin (1 Corinthians 15:22), but also highlighting the personal responsibility that each person has for their own sin (Deuteronomy 24:16; Jeremiah 31:30; Romans 5:12).

The Homo divinus model will not answer all the theological questions that one might like to ask, any more than will the Retelling Model. For example, what was the eternal destiny of all those who lived before Adam and Eve? The answer really is that we have no idea. But we can be assured with Abraham: "Will not the Judge of all the earth do right?" (Genesis 18:25). Thankfully we are not called to judge the earth, and we can leave that safely in the hands of the one who "judges justly" (1 Peter 2:23). The question asked about those who lived prior to Adam and Eve is not dissimilar to other questions that we could ask. For example, what was the eternal destiny of those who lived in Australia at the time that the law was being given to Moses on Mount Sinai? Again, we really don't know and, again: "Will not the Judge of all the earth do right?" Christians who spend time speculating about such things can appear as if they are the judges of the world's destiny, forgetting that that prerogative belongs only to God.

The two tentative models presented here may be seen as a work in progress. Both models are heavily under-determined by the data, meaning that there is insufficient data to decide either way. Both models might be false, and a third type of model might be waiting in the wings ready to do a much better job; let us hope so. But for the moment the various ideas that have been suggested seem to represent versions of these two models. Is it likely that new data may come along that will render either or both of these models untenable? It is not impossible, though if that happens it is from science that the new data are likely to come. For example, the Out of Africa model for human origins could be overturned by new discoveries, unlikely as that might seem at present. Equally it is not impossible that new data might come to light on the roots of monotheism that might influence the model-building exercise.
Given that both models presented here suggest that human evolution per se is irrelevant to the theological understanding of humankind made in the image of God, it is likely that a preference for one model or another will be based on a prior understanding of the claims made by particular Biblical texts. It should also be apparent that the adoption of one model over another may well have an impact on other theological perspectives. For example, if the Genesis Fall account is the story of the gradual alienation from God that occurred during some unspecified early era in the emergence of Homo sapiens, as in the Retelling Model, then the interpretation of the Fall can readily start to centre around human antisocial behavior, or the emergence of conflict, or even just human behaviors required for basic survival. But, important as these things are, I would suggest that they do not bring us to the heart of the biblical doctrine of the Fall, which is not about sociobiology, but about a relationship with God that was then broken due to human pride, rebellion and sin against God – with profound consequences for the spiritual status of humankind, and for human care for the earth. The Fall is about moral responsibility and sin, not about misbehaviour, and sin involves alienation from God. A relationship cannot be broken by sin unless the relationship exists in the first place.

Such reflections are a reminder that models should never take the place of the data itself; otherwise we have a case of the tail wagging the dog. Sometimes in science we have to hold on firmly to different sets of very reliable data without any idea as to how the two sets can be built into a single coherent story. In relating anthropology to Biblical teaching we are in a much stronger position than that, since the models proffered go at least some way towards rendering the two data-sets mutually coherent. But no-one is naïve enough to think that such models are completely satisfying. On the other hand, one or other may give some useful insights along the way, and hopefully stimulate the building of better models in the future.

Notes

McDougall, I. et al., Nature 433:733-736, 2005.
White, T.D. et al., Nature 423:742-747, 2003; Clark, J.D. et al., Nature 423:747-752, 2003.
A useful account of the spread of humanity out of Africa can be found in: Jones, D. 'Going Global', New Scientist, 27 Oct, 36-41, 2007.
Fagundes, N.J. et al., 'Statistical evaluation of alternative models of human evolution', Proceedings of the National Academy of Sciences USA, 104:17614-17619, 2007.
The 'effective population size' is defined as the number of individuals in a population that contribute offspring to the next generation.
Linz, B. et al., 'An African origin for the intimate association between humans and Helicobacter pylori', Nature 445:915-918, 2007.
This is well illustrated by the way in which different translations introduce 'Adam' as a personal name into the text: the Septuagint (Greek translation of the Old Testament) at 2:16; AV at 2:19; RV and RSV at 3:17; TEV at 3:20; and NEB at 3:21.
The two models equate to the Models B and C that are described in greater detail in: Denis Alexander, Creation or Evolution – Do We Have to Choose? Oxford: Monarch, 2008. Model B has been well presented by Day, A.J. 'Adam, anthropology and the Genesis record – taking Genesis seriously in the light of contemporary science', Science & Christian Belief, 10:115-43, 1998.
Justin L. Barrett, Why Would Anyone Believe in God? Altamira Press, 2004.
Genesis does not use the term 'Fall', and it might be more accurate to title the account in Genesis 3 as 'How sin began', but since the language of the 'Fall' has become so embedded in the literature it will be used here as shorthand.
To the best of my knowledge, the term Homo divinus was first used in this way by John Stott in the Church of England Newspaper, June 17th, 1968, and then in Stott, J.R.W. (1972) Understanding the Bible, London: Scripture Union, p.63. The idea has also been helpfully discussed in several publications by R.J. Berry, who provides a good discussion of the issues raised in this section in: Berry, R.J. and Jeeves, M. 'The nature of human nature', Science & Christian Belief 20:3-47, 2008. See also Berry, R.J. 'Creation and Evolution, not Creation or Evolution', Faraday Paper No 12, 2007.
Some versions of this model do seek to incorporate the 'image of God' teaching into the model more clearly than is attempted here.

© 2010 Denis Alexander

This article is reproduced by the kind permission of the author and The BioLogos Foundation. The paper was first presented in November 2010 at the Theology of Celebration BioLogos Workshop in New York City. It can also be downloaded from the BioLogos website as a PDF.
Product and Portfolio Architecture

Good morning, welcome back to the course on functional conceptual design. In the last class we briefly introduced product architecture. From the functional design we move to the design of the product in terms of its architecture, where we try to map the product functions onto the product form. That is what the architecture is. And we found that the creation of the architecture has a lot of influence on the design of the product.

There are two types of architecture we normally encounter: the first is known as Portfolio Architecture, and the second is Product Architecture itself. Portfolio Architecture talks about the family of products — how many products should be there in the product family — and Product Architecture talks about how to architect an individual product. That in turn depends on the number of products in the family, so these two are interconnected. You need to design the architecture of the product based on the number of products in the family and on how you want these products to share components. So we will look into these two aspects of product development, Portfolio Architecture and Product Architecture.

And yesterday I briefly showed these kinds of products, where you can see each product is independent of the others. There is nothing common to these products; the handle is different, the tool tip is different, and each one is an independent product. But in the case of this other kind of product, you can see this is a toaster where 2 slices of bread can be toasted, or here we can have 4. They are two independent products, so customers can decide to go for a 2-slice or a 4-slice toaster. However, when designing the products you need to see how best you can utilize the resources in order to make them. For example, if you have to make both products completely different, then the cost of production, cost of design, cost of manufacturing — everything will go up. But you can actually have many things common to both products, and therefore, by properly designing the architecture of these two products, you will be able to make them very cost effective and at the same time have variety in the product family.

And here you can see the kind of film rolls where a number of films are available. Each roll can be different, yet there can be many things in common — the outer cover can be constant, the centre portion can be the same — but still you can have variety in the market. So, this is the way any manufacturer will provide variety in the market. But how many products are to be offered, and how we decide the individual specifications of each product within the family, matter a lot when it comes to the market and satisfying the customer requirements. That is why architecture becomes very important. So whether to have many products in the family or just one product, and how these products need to be architected, matters a lot. That is why these two architectures, Portfolio Architecture and Product Architecture, need to be understood.

Portfolio means a set of different products that a company provides, and architecture is the layout of the components. So how you lay out the components within the product is basically the architecture, and the set of different products that a company provides is the portfolio. Now, what actually decides this Portfolio Architecture is the cost and revenue.
How much revenue you can generate, and how much the product costs, is what actually decides how many products can be offered in the market. If the requirement is very small — there are only a very small number of customers for a particular variety of product — then there is no point in developing a product for that customer segment, because the cost and revenue won't work out well. This needs to be understood before we decide how many products should be in the family.

And Product Portfolio Architecture is the system design strategy for laying out components and systems on multiple products to best satisfy current and future needs. So, we have multiple products in the Portfolio Architecture, and across these multiple products we must decide how best we can arrange the components, keeping in mind the current requirements of the customer. Currently there will be some requirements from the customer, but after 1 year the customer requirements may change, so we need to keep this also in mind when we design the portfolio architecture, so that after 2 years we will be able to easily modify the product to meet the customer requirements. So the current requirements as well as the future requirements need to be taken into account when we decide the number of products in the family and the architecture of the product.

So, the design task here is to determine whether one might develop subsystems within the product that can be reused across different products. The question is: can we have some kind of subsystem in a product such that this subsystem can be used in multiple products, so you do not need to develop it separately for each product within the family? Some subsystems can be made common to all the members of the family so that you can easily make the products and sell them in the market. A few things will change, but many things can be the same for all the products, and therefore you will be able to offer variety without much difficulty. These are the things to be understood when we look at Portfolio Architecture.

First we will look at the types of architecture existing in the portfolio, or the ways in which Portfolio Architecture is classified, and how one can decide what kind of architecture to use for the portfolio or family of products.

If you look at Portfolio Architecture as a whole, you can classify it into 3 major categories: the first is known as Fixed Unshared Architecture, the second is Modular Platform, and the third is known as Massively Customizable. These are the 3 major categories of Portfolio Architecture existing in current practice. Many of these will be used, but some will be preferred over others considering the particular product and its specific characteristics; most products, though, will follow a particular architecture called the Modular Platform.

But let us first look at the first category, which is the Fixed Unshared Architecture. If a company decides to make 5 products to bring to the market, and they feel that all 5 can be offered without anything being shared between them, that kind of architecture is known as Fixed Unshared Architecture. The family will have many products, but these products will not have anything in common. Each one will be independent and will have its own components, which are not shared with any other products within the family.
That kind of architecture is known as Fixed Unshared Architecture. A typical example is this screwdriver set. Each screwdriver has its own handle and its own tool tip; this one's handle is different from that one's, and the tool tips are different as well. Each one is completely independent, with nothing shared between the products. So Fixed Unshared Architecture means you have products in the family, but the products do not share anything among themselves. You can see many such products in the market; hammer sets and spanner sets, for example, also follow Fixed Unshared Architecture.

A company can go for this kind of architecture only if there is a large demand for the products. If each product can be made in the thousands, the company does not mind creating a special design for each one, because it can make large numbers and the cost of manufacture per unit will be low. If the requirement is only 10 or 15 of each, then making each one separately will be very costly. Therefore, a Fixed Unshared Architecture is opted for only when there is a large demand for the products.

Within Fixed Unshared you can have two categories, known as Single Offer and Robust Offer. In a Single Offer there is only one type of product in the market; there is no variation, and all over the world it will be the same. A Robust Offer is still Fixed Unshared, but it provides some flexibility to meet specific market requirements. For example, a power supply unit that plugs into a wall socket needs some variation between countries: in India you will have a three-pin configuration, while in other countries it may be two pins, or the pins may differ; even if it is three pins, the geometry will be different. To that extent there will be variation, and those kinds of offers are known as Robust Offers. These are the two variations that can exist within Fixed Unshared Architecture.

In summary, in Fixed Unshared there is no common module and the volume is high; screwdrivers and magnetic cassettes are examples. The Fixed Unshared Single Offer does not take market variation into account: it assumes that every market can accept the same product. The Robust Offer does take some market variation into account, especially things like power sockets and power supplies in electrical products; those are examples of the Robust Offer.
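To see why volume drives this choice, here is a minimal back-of-the-envelope sketch in Python. All figures (design cost, tooling cost, unit cost, volumes) are invented for illustration and are not from the lecture; the point is only how fixed costs amortize.

```python
# Rough cost comparison: dedicated (unshared) designs vs. a shared platform.
# All numbers are hypothetical, purely to illustrate the trade-off.

def per_unit_cost(design_cost, tooling_cost, unit_cost, volume):
    """Total cost per unit once fixed costs are spread over the volume."""
    return (design_cost + tooling_cost) / volume + unit_cost

products = 5          # products in the family
volume_each = 200     # units sold of each product

# Fixed Unshared: every product pays for its own design and tooling.
unshared = per_unit_cost(design_cost=50_000, tooling_cost=30_000,
                         unit_cost=4.0, volume=volume_each)

# Modular: the platform's design/tooling is paid once and spread over the
# whole family; each variant adds only a small module cost of its own.
platform_share = (50_000 + 30_000) / (products * volume_each)
modular = platform_share + per_unit_cost(design_cost=5_000, tooling_cost=3_000,
                                         unit_cost=4.5, volume=volume_each)

print(f"Fixed Unshared: {unshared:.2f} per unit")   # 404.00
print(f"Modular:        {modular:.2f} per unit")    # 124.50
# Rerun with volume_each = 20_000 and the dedicated design drops from
# 404.00 to 8.00 per unit: high volume is what makes Fixed Unshared viable.
```

This mirrors the lecture's argument: the per-unit penalty of dedicated designs is dominated by fixed costs, so it vanishes only at large volumes.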
So, Fixed Unshared is not a very commonly used architecture; it is used only when there is a high-volume requirement for a product. In that case a company can go for a Fixed Unshared Architecture, because it is cost effective: the cost and revenue match, and the company can still make a good profit. When the volume is small and it is not worth going for a Fixed Unshared Architecture, we need to go for the Modular Architecture. That is the second category, the Modular Platform. To recap: in the Fixed Unshared Architecture the different products do not share any components, and screwdriver sets and spanner sets are examples of it.

Modular, as the name itself says, means the product is made of modules. A product will have multiple modules, and these modules can be shared within the family; that kind of architecture is known as Modular Architecture. Suppose there are five products in the family: a company decides to have five variants of a car in the market; for example, Hyundai or Honda wants five variants of a model like the City. They cannot make each car completely independent, because that is not cost effective: the number of units they can sell of each variant will be very small. So what they do is build each product out of multiple modules, and many of these modules are shared among the variants. Only a few things change between the variants. You can have different models in the market, but there will be many things common to these products. That kind of architecture is known as Modular Architecture, and it is the most commonly used architecture in portfolio development. It asks which modules can be developed separately and then shared among the family members.

This has different variants too. As I mentioned, the designer should look at the current requirements of the customer: currently customers will have some cost requirements, some performance requirements, some safety requirements, and so on, but after two years their preferences will change. At that point the company cannot go and design a completely new product; it has to build on the existing product and make it suitable for the customer of two years later. So the designer has to keep in mind what is going to change in the future, and that also needs to be taken into account when designing the modules within the product.

Similarly, a product may contain something that is consumed very often; then you make the consumable a module, so that only the consumable needs to be changed, or so that the consumable can be made different for different products. In this way you get different architectures for the product. The Modular Architecture can thus be divided further into categories: Modular Family, Modular Generation, Modular Consumable, Modular Standard, and Modular Parametric. These are the different categories within the modular approach, and we will see the significance of each of these architectures. The Modular Family is a series of products that share some modules; that is the simple explanation of a Modular Family.
In most of these there will be a basic platform, and onto the basic platform we add many modules. For example, a car will have a chassis, and then a module for the transmission, a module for the engine, a module for the air conditioning, a module for the steering system, a module for the dashboard, and so on. Each subsystem can be a module, so we will have many modules in the product, and these modules are shared among the models within the family. If you buy a high-end variant of a car like the Ciaz, or a low-end variant, there will be many things common to both cars and very few things that differ. When the models share modules, software and hardware, in this way, we call the result a Modular Family of products: the modules are shared among the family members.

For example, take this coffee maker. The company Krups is offering two products in the market. Somebody who wants a very nice, high-end product with lots of electronic controls can opt for one model, and somebody who is looking for a low-end product, who just wants the function of making coffee and doesn't really care about the appearance and other things, has the other model. One product can be cheap and the other can be costly. Now, if the company had to offer these two products and made them completely independent, there would be a lot of cost involved in production. So they make many things common: the heating element inside, the container, and the way the containers are assembled may be the same, while the controls, the knobs, and the display are made different, so you get a different product. Many things are shared between the products, but the products are still different for the customer. That kind of architecture is the Modular Family.

You will see that a large number of products in the market belong to the Modular Family: washing machines, mobile phones, television sets. Take any product and you will find many variants available in the market, and most of these variants have many things in common in their product architecture.

Here are further examples of modular architectures; all these products share a lot among themselves. In this toaster, the heating element is common between the two models: the manufacturer has essentially combined two of them into one unit, and with a small change in the controls and other details you get a different product with a four-slice capacity instead of a two-slice capacity. A customer who wants a two-slice toaster can buy the low-cost one, and somebody who wants a four-slice toaster can get it by paying some extra, but the manufacturer does not need to make everything separately; many things are shared between the two products. That is the case with most of these products, whether cameras, mobile phones, iPads, or printers: you will see this kind of sharing of modules. So, that is the Modular Family architecture.
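The sharing described here can be sketched in code. Below is a minimal, hypothetical model of the two-slice and four-slice toasters; the module names and costs are invented, and the only point is that both variants are assembled from one shared platform plus a few differing modules.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    cost: float

@dataclass
class Product:
    name: str
    modules: list[Module] = field(default_factory=list)

    @property
    def cost(self) -> float:
        return sum(m.cost for m in self.modules)

# Shared platform modules, developed once and reused across the family.
heating_element = Module("heating element", 6.0)
chassis         = Module("chassis", 3.0)
power_cord      = Module("power cord", 1.0)
platform = [heating_element, chassis, power_cord]

# Variant-specific modules are the only parts that differ.
two_slice  = Product("2-slice toaster", platform + [Module("2-slot body", 2.0)])
four_slice = Product("4-slice toaster", platform + [heating_element,  # second element
                                                    Module("4-slot body", 3.5)])

for p in (two_slice, four_slice):
    shared = sum(1 for m in p.modules if m in platform)
    print(f"{p.name}: {len(p.modules)} modules, {shared} shared, cost {p.cost:.2f}")
```

Only the body module (and a repeated heating element) changes between the variants, which is exactly the lecture's point about variety at low extra cost.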
A platform, then, supports the set of products that exist at any one time. All these products will have a common platform to which modules are added; by changing some modules you get a new product, while the basic platform stays almost the same. That is how most manufacturers are able to bring multiple products to the market: they have a common platform to which they keep adding modules, and some modules differ from product to product, so you get variety.

Using a Modular Family Architecture you can create different derivatives of the product. You can have cost variants: if you want a low-cost and a high-cost product, for the low-cost one some modules can be changed so that the cost comes down, and for the high-cost one a module can be replaced with a higher-cost, higher-quality element to get a premium product. This is possible because the modules can be changed without changing the whole product: some critical modules are changed to make a better product, and that gives the cost variants.

You can also have product line extensions, like the two-slice and four-slice toasters or bigger sizes. With washing machines you can have different capacities: many things stay common, and only the motor, the drum size, and similar components change to give the various capacities. That is a product line extension. It is basically there to meet customer needs, because customers will ask for various requirements, and by providing these variants you can satisfy them. And you can add features to address more demanding customers: some customers have very specific needs, so you add the corresponding modules to the product and make it an enhanced, high-end product for them.

These are the ways in which you can provide multiple products in the market without changing the basic platform and without creating a completely new product: you change the modules and modify the product to get a different variant. That is how manufacturers are able to offer multiple products in a family, and it makes the Modular Family one of the most commonly used architectures.

The next one is known as Modular Generations. It is also a kind of modular product, but in this case you are not looking only at the products to be offered currently. The Modular Family asks how many products need to be offered in the market now. If a company is introducing a new washing machine, it needs to see what variants it can offer so that a large customer segment will be happy, in terms of cost, capacity, water consumption, or whatever it is; it can offer those variants using the Modular Family. The Modular Generation looks at it in a different way: I can offer five products in the market now, but if I have to offer products again after two years, can I offer the same products, or do I need to change them completely? What will change in customer preferences over a period of maybe two or three years, depending on the product?
Now, if the company has to offer a new product after two years to meet the customer requirements at that point in time, can it use the same platform and the same product and just change a module? By looking at what is going to change in the future, you make that part a module, so that when you want to offer a new product, you can remove that module, insert a new one, and bring it out as a product that customers will be happy with at that point in time. That is what actually happens in most electronic products: if a product is offered today, after a year people will be looking for better features. Each subsequent model changes only some subset of the modules of the previous generation; some modules of the previous generation are modified to meet the requirements of the current generation. That is known as the Modular Generation Architecture.

Typical examples are cameras and other electronic products. Today there are LCD displays; after two years the display technology may change, and people will be looking for that kind of display in the product. The company cannot scrap the whole product and make a new one, so it makes the display a module that can be swapped for the new technology, and in that way it can offer something that meets the requirements of that generation. Likewise, processor capacity, RAM capacity, and so on are upgraded so that the system meets the requirements of that particular generation. This is the Modular Generation Architecture. If changes are expected in the future, we need to think of a Modular Generation Architecture; if not many changes are expected, we go for a Modular Family Architecture. The Modular Family is normally employed in automobiles and other products where the changes are not so drastic every year; there is not too much change in the technology or in the expectations of the customer, so those products can go for a Modular Family. In electronic products, most of the time, future change has to be kept in mind, so they go for Modular Generation Architectures in order to meet the expectations of the next generation after two or three years.

The third one is known as the Modular Consumable Architecture. Here, the architecture differentiates modules based on whether they are consumable or not. Only a small number of products actually fall into this category. Suppose a product uses some consumables, and these consumables may change depending on the region or the situation; then we make the consumable a separate module and offer the product with the consumable as a separate module. This is known as a consumable module. For example, in a printer the toner is a consumable; the manufacturer makes it a module so that any compatible consumable can be attached to the product, and even if the consumables in use change, the product can still be offered by changing the consumable and making sure it fits into the product. This is known as the Modular Consumable Architecture.
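The module swap at the heart of both the generation and consumable cases can be sketched as follows. The camera generations and module names below are invented for illustration; the mechanism shown is simply "new model = old model with only the changed modules replaced".

```python
# Hypothetical camera family: each generation swaps a subset of modules.
gen_2023 = {
    "body":    "magnesium shell v1",
    "sensor":  "24 MP sensor",
    "display": "LCD panel",
}

def next_generation(previous: dict, upgrades: dict) -> dict:
    """New model = old model with only the upgraded modules replaced."""
    model = dict(previous)   # keep every module we are not touching
    model.update(upgrades)   # swap in the modules whose technology moved on
    return model

# Two years later only the display module is redesigned; body and sensor
# carry over, so the platform is preserved across generations.
gen_2025 = next_generation(gen_2023, {"display": "OLED panel"})

carried_over = [name for name in gen_2023 if gen_2023[name] == gen_2025[name]]
print("carried over:", carried_over)   # ['body', 'sensor']
```

A consumable module (a printer's toner cartridge, say) is the same idea applied repeatedly over the product's life rather than once per generation.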
Ink cartridges for inkjet printers, film for film cameras, and so on are examples of the Modular Consumable Architecture.

The next one is the Modular Standard Architecture. It is based on the current standards of manufacture and design: industry follows many standards in making products, such as communication standards, interface standards, and standards for connecting hardware to software, and we need to follow these in the product. Whenever a standard exists for a particular product or component, we try to ensure that the product meets that standard, and we make that part a module, so that whenever the standard changes, or the component needs to be replaced with a different component meeting the same standard, we can modify the product accordingly. That is the Modular Standard Architecture. We follow the published standards for mechanical, electrical, and software interfaces and make sure the product meets the industry standards, so that any product or component that meets the standard can be interfaced. For example, consider the lens mount system in cameras: the camera body is there, and you can mount different lenses on it, because there are standards governing the design of such lens mounts. Whenever there is a compatible component that meets the same standard, it can easily be attached to the product and used to meet the requirements of the customer. That is the Modular Standard Architecture. It is not so much driven by the customer's requirements; more than the customer, it comes from the existing industry standards.

The last one in this group is known as Modular Parametric. Modular Parametric architecture works by changing the modules parametrically and fitting them into the product at the time it is ordered. If a customer requires a particular size or shape of a component or product, you modify the existing modules to meet those parametric limits and fit them into the product; that is Modular Parametric. In the case of a computer, the customer will say: I want the overall product to be a particular size, and I want a particular hard disk capacity, a particular RAM capacity, and so on. Depending on those parametric requirements, you fit the corresponding modules onto the main platform and generate the product. That is Modular Parametric design.

These are the five categories of modular architecture used for portfolio design. The Modular Family is the most commonly used one, and the Modular Generation is considered whenever a change in some of the customer requirements is expected over a period of time. The others depend more on the product's architecture than on the customer's requirements; of course, customer requirements also play a role, but it is more from the designer's point of view that these architectures are offered. So, that is the Modular Architecture for portfolio design. I hope you are able to follow; in case you have any questions, please feel free to email me or ask me whenever we get an opportunity.
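Before moving on, here is a small sketch of the parametric case, since it is essentially configuration at order time. The option tables and prices are hypothetical; the sketch assembles a build-to-order computer from customer-chosen RAM and disk parameters and rejects values the platform cannot accommodate.

```python
# Hypothetical option tables for a build-to-order computer.
RAM_OPTIONS  = {8: 40.0, 16: 70.0, 32: 130.0}          # GB -> module price
DISK_OPTIONS = {512: 50.0, 1024: 90.0, 2048: 160.0}    # GB -> module price
BASE_PLATFORM_PRICE = 400.0

def configure(ram_gb: int, disk_gb: int) -> dict:
    """Fit parametric modules onto the shared platform at order time."""
    if ram_gb not in RAM_OPTIONS:
        raise ValueError(f"platform cannot take {ram_gb} GB of RAM")
    if disk_gb not in DISK_OPTIONS:
        raise ValueError(f"platform cannot take a {disk_gb} GB disk")
    return {
        "ram_gb": ram_gb,
        "disk_gb": disk_gb,
        "price": BASE_PLATFORM_PRICE + RAM_OPTIONS[ram_gb] + DISK_OPTIONS[disk_gb],
    }

order = configure(ram_gb=16, disk_gb=1024)
print(order)   # {'ram_gb': 16, 'disk_gb': 1024, 'price': 560.0}
```

The platform stays fixed; only the parameter-driven modules change per order, which is what distinguishes Modular Parametric from a one-off custom design.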
The last category is the customized one. Customizing a product means modifying the product according to the requirements of a particular customer; that is the Customized, or Massively Customizable, Architecture. You are not offering many products in the family directly to the market; instead, you look at the customer, and if a customer asks for a particular kind of architecture, or a product with particular features, you offer that as a one-time product. Only very few products use this kind of architecture, especially very high-end products. Take a high-end car: companies like BMW offer customized cars. You can tell them what you expect from your car; you can specify the color, the engine capacity, the speed. Based on all these requirements they will create the product and give it to you. They will have a basic platform and different modules to attach, and depending on the customer's requirements they will add these modules to your product. That is basically a Customized Architecture.

Here there are two types: one is known as Bespoke and the other is Parametric. In Bespoke, the product is designed and made based on the customer's engineering specifications. The engineering specifications are given by the customer, and you design the product accordingly and deliver it; that is known as a Bespoke product. This normally happens with very high-end products, or something like a fighter aircraft: if a country wants a fighter aircraft with specific requirements, it gives all the engineering specifications to the company and says, these are the specifications we are looking for; make one and give it to us. That is a customized design for the product, and it is known as Bespoke Design.
The History of John's Island

The more things change, the more they stay the same. So it is at John's Island, where the original mission of creating a private, family-oriented community for those with discerning tastes continues to evolve. Today, John's Island is a 1,650-acre (3,200 acres including wetlands) barrier island masterpiece situated within the quaint town of Indian River Shores in Vero Beach. Homes have been strategically placed to preserve old oak trees. A rare and cherished three miles of private beach access and over nine miles of Intracoastal Waterway exposure make John's Island a playground in paradise. Careful preservation of the pristine natural surroundings ensures this paradise will be as breathtaking in the future as it is today.

1715 – Survivors of the Spanish Plate Fleet set up a salvage camp on what is now the northern boundary of John's Island.

1844 – John's Island was first surveyed, to have a record available describing the land that might be acquired under the Armed Occupation Act of 1842.

1872 – The first pioneer to settle on John's Island was Allen Estes.

1880 – A farmer named John La Roche arrived on John's Island. He selected the island for its farming, and because it represented the shortest row to the mainland, and eventually to the railroad. He settled on a 300-acre island on the Indian River, known today as "The Island of John's Island". This island was the original John's Island, named after John La Roche.

1889 – On June 21, John La Roche filed homestead papers for 138.5 acres on John's Island. Apparently he was not only a farmer but an imaginative real estate man: he sold lots at $25 an acre.

1890 – A detailed survey was made of John La Roche's property in March by R.B. Burchfield. The name John's Island was first used.

1891 – Brothers William and Calvin Reams arrived at John's Island among the earliest settlers. They were among the 12 to 15 families that settled John's Island. Calvin gave land for the two churches and the singing school. Calvin's son remained on John's Island for 27 years.

1892 – On September 25, a post office was established on John's Island. It was called Reams.

1900s – The small community founded by John La Roche prospered, with some 200 residents, a Missionary Baptist and a Primitive Baptist church, and a school known for its singing, taught by Felix Poppell.

1925 – Residents deserted the 300-acre island and moved to the mainland due to the advent of the railroad and the opportunity for employment other than farming beans. The general economic depression of 1929 terminated any planning of luxury development during the Florida boom, and the area reverted to its natural jungle state. The old cemetery where the original settlers are buried still exists.

1953 – The town of Indian River Shores was created on June 15 by House Bill No. 1691. John's Island's principal architect, James E. Gibson, designed the municipality's Town Hall. Fred R. Tuerk, onetime Chicago broker and president of the Chicago Stock Exchange, acquired the island and, parcel by parcel, assembled the 3,000 surrounding acres of land. Tuerk specified that it be sold only to a man "with respect for the land and the ecology".

1958 – A1A was built from Beachland Boulevard to just one-half mile north of Wabasso Road.

1960s – (Per a Floridays newscast with Janie Gould interviewing Alma Lee Loy) "...In the early 60s, local developer Fred Tuerk owned a great deal of land on the barrier island and offered some of it to the state for a new university.
Officials from the Board of Regents came to look at the land and had mixed feelings because of its remoteness and dense jungle. But they later informed us that we were in the running and had to submit a final proposal. With only days to spare, Bob Spillman, a young banker who was also a pilot, offered to hand-carry our proposal to Tallahassee, which he did. On his return flight, Bob Spillman's small plane crashed and he was killed. He was admired by many in town, including Fred Tuerk. We didn't get the college, and several years later Tuerk sold his land to the developers of John's Island."

1967 – Fred R. Tuerk died in February without seeing his dream fulfilled.

1969 – Tuerk's heirs, cognizant of his love for the property, sought to find a purchaser for the 3,000-acre estate. The buyer they were seeking would respect the environment and its natural beauty. The man they found was Edwin Llwyd Ecclestone, Sr. He had demonstrated a profound commitment over ten years earlier, when he founded and developed Lost Tree Village in North Palm Beach, FL. In the mid-1960s, however, Ecclestone was so deep in debt at Lost Tree, with mortgages at five banks at one point, that he almost gave up on the development. His decision to remain was the smartest move he made.

1969 – March 28 – Mr. Ecclestone, at age 67, undertook a long-term plan for the development of a unique and private residential family Club community that would fully preserve the beauty of the land and the legacy of the past: John's Island. He recognized the need for a championship golf course, so he hired Pete Dye and consulted with Jack Nicklaus to build the South Course...at least that's what the signs said. According to Alice Dye, Pete's wife, who accompanied him on his many trips to John's Island, "Jack never helped Pete on that course. Jack could never get a contract signed by Ecclestone, so he never got paid and never did any work."

Under Ecclestone's guidance, John's Island became quite a success. One key move was hiring Errie Ball, a head professional from Chicago who had the distinction of having played in the first Masters. According to Alice Dye, once JI got Errie Ball, everything changed. He brought a lot of wealthy people down from the Midwest who really helped change the atmosphere at JI.

Ground was broken in March of 1969, and the first round of golf was played on the South Course in December of that year. The South Course was designed by Pete Dye in consultation with Jack Nicklaus. At that time it was called "The Bayou". Construction began shortly thereafter on the Administration Building and Golf Clubhouse. Town Island Builders was the only builder at that time. Mr. Ecclestone's son-in-law, Mr. Roy Chapin III, was the General Manager of the property. A very young and active sales force was formed and became the core of the John's Island development.

1969 – May 16 – The official "Ground Breaking" ceremony takes place at 10 a.m. Among the first golf cottage owners were Mr. and Mrs. Paul Boden at 163 Silver Moss Drive. Among the first homeowners were Mr. and Mrs. William Kolb at 280 John's Island Drive.

1970 – The original Golf Clubhouse was built. It was designed by the noted James E. Gibson, AIA, one of the country's leading classical Georgian architects. His works include the Henry Ford Centennial Library, the Detroit Institute of Arts, and a number of estates in Grosse Pointe and Palm Beach.

1971 – The second 18-hole golf course designed by Pete Dye, called the North Course, begins development.
John's Island is the first club in Florida to offer two 18-hole golf courses.

1972 – The new Indian River Shores Town Hall was dedicated in December.

1981 – Sadly, E. Llwyd Ecclestone, Sr. died of cancer. Although he did not live to see the culmination of his dream, his spirit is ever present as he rests in the old cemetery at John's Island. His daughters, Helen Ecclestone Stone and Jane E. Chapin, took over and fulfilled the dreams of John La Roche, Fred Tuerk and their father in a way no one could have envisioned. Later, Helen Ecclestone Stone took the reins and, in 1989, developed Gem Island, a 79-acre island within John's Island.

1982 – In December, 17 months after E. Llwyd Ecclestone's passing, permits were granted and John's Island was released for purchase.

1986 – January 1 – John's Island Club was organized and purchased by the membership from Lost Tree Village. Later, in December, Mrs. Jane Chapin sold her interest in John's Island to her sister, Mrs. Helen Ecclestone Stone, who became the sole owner and developer through Lost Tree Village and John's Island Real Estate Company.

1987 – A prudent decision was made to hire top architect Tom Fazio to design a third course, six miles west of John's Island, fittingly called The West. Building another course was the kind of perk needed to convince the members to buy the Club from the owners. Fazio built a gem on the West Course, where he could rely on a natural sand ridge he had spotted 20 years earlier, when designing Jupiter Hills, to give the layout elevation change seldom seen in Florida. "The property was magnificent," Fazio said. "And the Club gave me almost everything I asked for." The West Clubhouse was built on John's Island's West Course. John's Island's third 18-hole championship golf course, designed by Tom Fazio, covers over 300 acres. The land was selected because of its natural, spectacular north/south sand ridge with elevations up to 50 feet. Designed "green," it has no homes built around it.

Historical West Course accounts courtesy of Beau Delafield:
- The land for the West Course was purchased by John's Island, and most specifically by the Ecclestones, to build a third course to sweeten the Equity Conversion deal that was on the table with the members. John's Island Club knew that down the road, in sheer size alone, it would need a third golf course.
- The land was spectacular, built on the same dune line as Seminole Golf Club. It had a lot of elevation change, and Tom Fazio could deliver a gem. This dune line was the oceanfront more than a million years earlier!
- Because of the great elevation (by Florida standards), they moved just 400,000 yards of fill, whereas a typical Florida course moves between 1.2 and 1.5 million yards of fill to achieve elevation change.
- The Clubhouse was built at the original elevation of approximately 51 feet...pretty special, because you can see so much of the course from this height.
- The original design for the 18th hole had "dental bunkers" in the face in front of the green. It would have been awesome, but the members felt it would be too penal.
- The rest is the result of the genius of Tom Fazio: a short 11th hole so that any member could make a par; the par three over the water so that everyone in the Clubhouse could watch; and the 17th hole split so that the golfer was given two options, go up the slot and maybe get on in two, or take the safe way with the conventional three shots.
Fazio always told me that he wanted to design a course like this in Florida because it did not look anything like a typical Florida course.
- Just by luck, Hawks Nest was built at exactly the same time. There may have been a great rivalry between Tom Fazio and George Fazio (his uncle) to see who could give the members the best course. I think Tom won, but I'm a little biased...they both are very special.

1988 – January 16 was the first day of play on the new West Course! Bud Morrison had the honor of being the first to tee off. Mr. Morrison was the Greens Committee chairman and played a vital role in the successful completion of the new course. The first golfers to play the course were: Greg Kelly (Pro), Bud Morrison, Ray Biggs (Chairman of LTV), Tom Wieler (President of JI Club), Mike Kelly (past chairman of the finance committee) and golf course manager Tim Heirs.

1989 – The opening of Gem Island, the last developed area in John's Island and considered the "crowning jewel". The 79-acre island offers prime riverfront estates and homesites.

1994 – Demolition of the first Golf Clubhouse begins.

1995 – The newly designed Golf Clubhouse is complete. Architect: Childs Bertman Tseckares Inc (Boston, MA); Builder: Weitz Construction; Interior Design: Bierly & Drake.

1999 – October 29: Lost Tree Village sold John's Island Real Estate Company to its current owner, Bob Gibb.

2004 – The West Course received certification in the Audubon Cooperative Sanctuary Program for golf courses. It is also ranked amongst the most challenging in the nation. There are no homes or man-made berming.

2006 – The Club's world-class, air-conditioned North American doubles squash court was built.

2007 – Demolition begins on the original Beach Club. Two croquet courts were built at the West Course, off campus. They meet the standards of the U.S. Croquet Association.

2008 – Completion of the new, world-class Beach Club, just in time for the opening weekend festivities. Designed by architects Peacock + Lewis (Palm Beach); interior design by J Banks Design Group, Inc (Hilton Head, SC); built by Weitz Company, LLC.

2009 – 88.9 FM WQCS/Indian River State College aired a short segment about the early settlers along the river near Winter Beach and John's Island. Cassie Virginia Walker was a young girl when she moved to John's Island with her parents around 1914. When she had her own family, they moved to Winter Beach with their eight children. One of their sons, and one of their daughters, Lucie Warren, are featured in this fascinating interview.

2011 – November – Just in time for the season's opening weekend, members enjoy the new singles squash court (complementing the existing doubles squash court) and a completely renovated West Clubhouse. The makeover project provided stunning views of the West Course through enormous picture windows and a desirable outdoor terrace for al fresco dining. Providing a 'peaceful oasis for golf enthusiasts', this successful renovation was made possible by the following key players: Tommy Farnsworth, president of the John's Island Club when the decision to renovate the facility was made; Connie McGlynn, chairman of the Facilities Committee, and committee members Laura McDermott and Terry Young; architects David Moulton and Scott Layne of Moulton Layne P.L.; Janet Perry, lead designer with J Banks Design Group; project manager Charles Croom of Croom Construction; landscape designer Warren E.
McCormick; Brian Kroh, John's Island Club general manager; Rex Wilson, John's Island Club facilities manager; and Greg Pheneger, John's Island golf course manager.

2012 – At the February 23rd Town Council meeting, the long-time municipally active JI resident (and JIRE agent) Jack Mitchell was honored with a beautifully mounted Proclamation, as well as having 'Colonel Jack Mitchell Way' named after him. The newly named "Way" is that portion of Fred R. Tuerk Drive which turns north and leads to the John's Island (South) Gate.

2013 – The first ever professional squash tournament in Florida was held at the John's Island Club during the weekend of April 18-21. Top seeds Suzie Pierrepont and Narelle Krizek rallied to a 17-18, 15-11, 15-5, 18-15 victory over second seeds and recently crowned U.S. National Doubles champions Dana Betts and Steph Hewitt on Sunday afternoon in the final round of the inaugural $25,000 John's Island Open.

2015 – October 1-8: The John's Island Club proudly hosts the first USGA "Major" played on the Treasure Coast. A first for the Club and for Florida, the 35th USGA Mid-Am Championship showcases 264 players competing on John's Island Club's West Course. The Mid-Amateur is open to amateurs age 25 or older with a USGA handicap index of 3.4 or better. The winner receives the Robert T. Jones Memorial Trophy plus an invitation to the 2016 Masters Tournament at the Augusta National Golf Club. Thanks in part to a timely ace on a par 4, Sammy Schmitz defeated Marc Dull, 3 and 2, to earn his first USGA title. It was the first hole-in-one on a par 4 in a USGA amateur competition since Derek Ernst's ace on the 299-yard eighth hole at Bandon Trails in the Round of 64 at the 2011 U.S. Amateur Public Links.

2015 – October 17: John's Island member Michael Pierce is inducted into the US Squash Hall of Fame in Chicago. His primary on-court achievements came on the doubles court. One of the best left-wall players in history, Pierce won dozens of professional and amateur titles and is still the only player ever to win the U.S. national open, 40+, 50+ and 60+ tournaments. Pierce has been a significant leader off the court, as president of the Philadelphia district, builder of courts in Florida, tournament director, benefactor of urban squash, and supporter of US Squash.

2015 – November: John's Island Real Estate Company unveils its newly renovated office with a fresh new look, reflecting the best of what John's Island has to offer.

2016 – March 6: By popular demand, the John's Island Club's grand opening of four new, lighted pickleball courts at the main tennis complex was a hit, with over 200 people attending. The beautifully landscaped setting includes a rest area with a cozy firepit, sure to please all ages!

2016 – May: Exciting new renovations begin at the Golf Clubhouse to offer several al fresco dining options, a wine bar, fire pits for enjoying starlit evenings, and the added convenience of a Market Place stocked with epicurean delights and a variety of beverages. The classic architecture will remain, but will be enhanced with touches of today's modern design elements throughout. The attractive casual dining options will encourage members to stay after a round of golf and grab a bite while enjoying picturesque sunset views overlooking the 18th fairway and lake of the South Course. John's Island Club continues to attract new members by investing in its world-class amenities. To be completed by mid-November for opening weekend.
2016 – November 19: The John's Island Club celebrates with a grand re-opening of the Golf Clubhouse. After six months of renovation work, members and their guests come together for an open house style party. The vision incorporates timeless architecture and interiors reflecting the surrounding landscapes. Al fresco dining, fire pits, a wine bar and a market place set the stage for new traditions. Designed by architects Peacock + Lewis (Palm Beach, FL); interior design by J Banks Design Group, Inc (Hilton Head, SC); built by Weitz Company, LLC (Palm Beach, FL); and landscaping by Warren McCormick (West Palm Beach, FL).

2018 – September: Two additional pickleball courts are added, for a total of six brand-new, lighted courts, with a firepit nearby for added comfort. A newly renovated Health & Wellness Center is a welcome amenity.

2018 – October: An extensive renovation of the entire South Course is complete.

2019: John's Island Club and John's Island Real Estate Company celebrated their 50th anniversary!
In the Nicomachean Ethics, Aristotle (d. 322 BCE) tries to discover what is 'the supreme good for man', that is, what is the best way to lead our life and give it meaning. For Aristotle, a thing is most clearly and easily understood by looking at its end, purpose, or goal. For example, the purpose of a knife is to cut, and it is by seeing this that one best understands what a knife is; the goal of medicine is good health, and it is by seeing this that one best understands what medicine is, or, at least, ought to be. Now, if one persists with this, it soon becomes apparent that some goals are subordinate to other goals, which are themselves subordinate to yet other goals. For example, a medical student's goal may be to qualify as a doctor, but this goal is subordinate to her goal to heal the sick, which is itself subordinate to her goal to make a living by doing something useful. This could go on and on, but unless the medical student has a goal that is an end-in-itself, nothing that she does is actually worth doing. What, asks Aristotle, is this goal that is an end-in-itself? What, in other words, is the final purpose of everything that we do? The answer, says Aristotle, is happiness.

And of this nature happiness is mostly thought to be, for this we choose always for its own sake, and never with a view to anything further: whereas honour, pleasure, intellect, in fact every excellence we choose for their own sakes, it is true, but we choose them also with a view to happiness, conceiving that through their instrumentality we shall be happy: but no man chooses happiness with a view to them, nor in fact with a view to any other thing whatsoever.

Why did we get dressed this morning? Why do we go to the dentist? Why do we go on a diet? Why am I writing this article? Why are you reading it? Because we want to be happy, simple as that. That the meaning of life is happiness may seem moot, but it is something that most of us forget somewhere along the way. Oxford and Cambridge are infamous for their fiendish admission interviews, and one question that is sometimes asked is, 'What is the meaning of life?' So, when I prepare prospective doctors for their medical school interviews, I frequently put this question to them. When they flounder, as invariably they do, I ask them, 'Well, tell me, why are you here?' Our exchange might go something like this:

"What do you mean, why am I here?"
"Well, why are you sitting here with me, prepping for your interviews, when you could be outside enjoying the sunshine?"
"Because I want to do well in my interviews."
"Why do you want to do well in your interviews?"
"Because I want to get into medical school."
"Why do you want to get into medical school?"
"Because I want to become a doctor."
"Why do you want to put yourself through all that trouble?"

And so on. But the one thing that the students never tell me is the truth, which is: "I am sitting here, putting myself through all this, because I want to be happy, and this is the best way I have found of becoming or remaining so." Somewhere along the road, the students lost sight of the wood for the trees, even though they are only at the beginning of their journey. With the passing of the years, their short-sightedness will only get worse, unless, of course, they read and remember their Aristotle.

According to the philosopher Søren Kierkegaard (d. 1855), a person can, deep down, lead one of three lives: the æsthetic life, the ethical life, or the religious life.
A person leading the æsthetic life aims solely at satisfying her desires. If, for example, it is heroin that she craves, she will do whatever it takes to get hold of her next fix. If heroin happens to be cheap and legal, this need not involve any illegal or immoral behaviour on her part. But if heroin happens to be expensive or illegal, as is generally the case, she may have to resort to lying, stealing, and much worse. To satisfy her desires, which, by definition, she insists upon doing, the æsthete constantly has to adapt to the circumstances in which she finds herself, and, as a result, cannot lay claim to a consistent, coherent self. The person leading the ethical life, in complete contrast to the æsthete, behaves according to categorical and immutable moral principles such as 'do not lie' and 'do not steal', regardless of the circumstances, however attenuating, in which she happens to find herself. Because the moralist has a consistent, coherent self, she leads a higher type of life than that of the æsthete. But the highest type of life is the religious life, which has something in common with both the ethical life and the æsthetic life. Like the ethical life, the religious life recognizes and respects the authority of moral principles; but like the æsthetic life, it is sensitive to the circumstances. In acquiescing to universal moral principles yet attending to particularities, the religious life opens the door to moral indeterminacy, that is, to ambiguity, uncertainty, and anxiety. Anxiety, says Kierkegaard, is the dizziness of freedom.

A paradigm of the religious life is that of the biblical patriarch Abraham, as epitomized by the episode of the Sacrifice of Isaac. According to Genesis 22, God said unto Abraham:

Take now thy son, thine only son Isaac, whom thou lovest, and get thee into the land of Moriah; and offer him there for a burnt offering upon one of the mountains which I will tell thee of.

Unlike the æsthete, Abraham is acutely aware of, and attentive to, moral principles such as, 'Thou shalt not kill', which is, of course, one of the ten commandments. But unlike the moralist, he is also willing or able to look beyond these moral principles, and in the end resigns himself to obeying God's command. But as he is about to slay his sole heir, born of a miracle, an angel appears and stays his hand:

Abraham, Abraham … Lay not thine hand upon the lad, neither do thou any thing unto him: for now I know that thou fearest God, seeing thou hast not withheld thy son, thine only son from me.

At this moment, a ram appears in a thicket, and Abraham seizes it and sacrifices it in Isaac's stead. He then names the place of the sacrifice Jehovahjireh, which translates from the Hebrew as, 'The Lord will provide.' The teaching of the Sacrifice of Isaac is that the conquest of doubt and anxiety, and hence the exercise of freedom, requires something of a leap of faith. It is in making this leap, not only once but over and over again, that a person, in the words of Kierkegaard, 'relates himself to himself' and is able to rise into a thinking, deciding, living being.

In the Milgram experiment, conducted in 1961 during the trial of the Nazi war criminal Adolf Eichmann [one of the major organizers of the Holocaust], an experimenter ordered a 'teacher', the test subject, to deliver what the latter believed to be painful shocks to a 'learner'.
The experimenter informed the teacher and learner that they would be participating in a study on learning and memory in different situations, and asked them to draw lots to determine their roles, with the lots rigged so that the test subject invariably ended up as the teacher. The teacher and the learner were placed in adjacent rooms from which they could hear but not see each other. The teacher was instructed to deliver a shock to the learner for every wrong answer that he gave, and, after each wrong answer, to increase the intensity of the shock by 15 volts, from 15 to 450 volts. The shock button, instead of delivering a shock, activated a tape recording of increasingly alarmed and alarming reactions from the learner. After a certain number of shocks, the learner began to bang on the wall and, eventually, fell silent. If the teacher indicated that he wanted to end the experiment, the experimenter gave him up to four increasingly stern verbal prods. If, after the fourth prod, the teacher still wanted to end the experiment, the experiment was terminated. Otherwise, the experiment ran until the teacher had delivered the maximum shock of 450 volts three times in succession. In the first set of experiments, 26 out of 40 test subjects delivered the massive 450-volt shock, and all 40 test subjects delivered shocks of at least 300 volts.

The philosopher Hannah Arendt called this propensity to do evil without oneself being evil 'the banality of evil'. Being Jewish, Arendt fled Germany in the wake of Hitler's rise. Some thirty years later, she witnessed and reported on Adolf Eichmann's trial in Jerusalem. In the resulting book, she remarks that Eichmann, though lacking in empathy, did not come across as a fanatic or psychopath, but as a 'terrifyingly normal' person, a bland bureaucrat who lacked skills and education and an ability to think for himself. Eichmann had simply been pursuing his idea of success, diligently climbing the rungs of the Nazi hierarchy. From his perspective, he had done no more than 'obey orders', even, 'obey the law', not unlike Kierkegaard's unquestioning moralist. Eichmann was a 'joiner' who, all his life, had joined, or sought to join, various outfits and organizations in a bid to be a part of something bigger than himself, to define himself, to belong. But then he got swept up by history and landed where he landed. Arendt's thesis has attracted no small measure of criticism and controversy. Although she never sought to excuse or exonerate Eichmann, she may have been mistaken or misled about his character and motives. Regardless, in the final analysis, Eichmann's values, his careerism, his nationalism, his antisemitism, were not truly his own as a self-determining being, but borrowed from the movements and society from which he arose, even though he and millions of others paid the ultimate price for them. Whenever you're about to engage in something with an ethical dimension, always ask yourself, "Is this who I wanted to be on the best day of my life?"

There is an old Japanese story about a monk and a samurai. One day, a Zen monk was going from temple to temple, following the shaded path along a babbling brook, when he came upon a bedraggled and badly bruised samurai. 'Whatever happened to you?' asked the monk. 'We were conveying our lord's treasure when we were set upon by bandits. But I played dead and was the only one of my company to survive. As I lay on the ground with my eyes shut, a question kept turning in my mind.
Tell me, little monk, what is the difference between heaven and hell?' 'What samurai plays dead while his companions are slain! Shame on you! You ought to have fought to the death. Look at the sight of you, a disgrace to your class, your master, and every one of your ancestors. You are not worthy of the food that you eat or the air that you breathe, let alone of my hard-won wisdom!' At all this, the samurai puffed up with rage and appeared to double in size as he drew out his sword, swung it over his head, and brought it down onto the monk. But just before being struck, the monk changed his tone and composure, and calmly said, 'This is hell.' The samurai dropped his sword. Filled with shame and remorse, he fell to his knees with a clatter of armour: 'Thank you for risking your life simply to teach a stranger a lesson,' he said, his eyes wet with tears. 'Please, if you could, forgive me for threatening you.'

Confidence derives from the Latin fidere, "to trust." To be confident is to trust and have faith in the world. To be self-confident is to trust and have faith in oneself, and, in particular, in one's ability to engage successfully or at least adequately with the world. A self-confident person is able to act on opportunities, take on new challenges, rise to difficult situations, engage with constructive criticism, and shoulder responsibility if and when things go wrong. Self-confidence and self-esteem often go hand in hand, but they aren't one and the same thing. In particular, it is possible to be highly self-confident and yet to have profoundly low self-esteem, as is the case, for example, with many performers and celebrities, who are able to play to studios and galleries but then struggle behind the scenes. Esteem derives from the Latin aestimare [to appraise, value, rate, weigh, estimate], and self-esteem is our cognitive and, above all, emotional appraisal of our own worth. More than that, it is the matrix through which we think, feel, and act, and reflects and determines our relation to our self, to others, and to the world.

People with healthy self-esteem do not need to prop themselves up with externals such as income, status, or notoriety, or lean on crutches such as alcohol, drugs, or sex (when these things are a crutch). On the contrary, they treat themselves with respect and look after their health, community, and environment. They are able to invest themselves completely in projects and people because they have no fear of failure or rejection. Of course, like everybody, they suffer hurt and disappointment, but their setbacks neither damage nor diminish them. Owing to their resilience, they are open to people and possibilities, tolerant of risk, quick to joy and delight, and accepting and forgiving of others and themselves.

So what's the secret to self-esteem? As I argue in Heaven and Hell, a book on the psychology of the emotions, many people find it easier to build their self-confidence than their self-esteem, and, conflating one with the other, end up with a long list of talents and achievements. Rather than facing up to the real issues, they hide, often their whole life long, behind their certificates and prizes. But as anyone who has been to university knows, a long list of talents and achievements is no substitute for healthy self-esteem. While these people work on their list in the hope that it might one day be long enough, they try to fill the emptiness inside them with externals such as status, income, possessions, and so on.
Undermine their standing, criticize their home or car, and observe in their reaction that it is them that you undermine and criticize. Similarly, it is no use trying to pump up the self-esteem of children (and, increasingly, adults) with empty, undeserved praise. The children are unlikely to be fooled, but may instead be held back from the sort of endeavour by which real self-esteem can grow. And what sort of endeavour is that? Whenever we live up to our dreams and promises, we can feel ourselves growing. Whenever we fail but know that we have given it our best, we can feel ourselves growing. Whenever we stand up for our values and face the consequences, we can feel ourselves growing. This is what growth depends on. Growth depends on living up to our ideals, not our parents’ ambitions for us, or the targets of the company we work for, or anything else that is not truly our own but, instead, a betrayal of ourselves. On October 30, 1938, Orson Welles broadcast an episode of the radio drama Mercury Theatre on the Air. This episode, entitled The War of the Worlds and based on a novel by HG Wells, suggested to listeners that a Martian invasion was taking place. In the charged atmosphere of the days leading up to World War II, many people missed or ignored the opening credits and mistook the radio drama for a news broadcast. Panic ensued and people began to flee, with some even reporting flashes of light and a smell of poison gas. This panic, a form of mass hysteria, is one of the many forms that anxiety can take. Mass hysteria can befall us at almost any time. In 1989, 150 children took part in a summer programme at a youth centre in Florida. Each day at noon, the children gathered in the dining hall to be served pre-packed lunches. One day, a girl complained that her sandwich did not taste right. She felt nauseated, went to the toilet, and returned saying that she had vomited. Almost immediately, other children began experiencing symptoms such as nausea, abdominal cramps, and tingling in the hands and feet. With that, the supervisor announced that the food may be poisoned and that the children should stop eating. Within 40 minutes, 63 children were sick and more than 25 had vomited. The children were promptly dispatched to one of three hospitals, but every test performed on them was negative. Meal samples were analyzed but no bacteria or poisons could be found. Food processing and storage standards had been scrupulously maintained and no illness had been reported from any of the other 68 sites at which the pre-packed lunches had been served. However, there had been in the group an atmosphere of tension, created by the release two days earlier of a newspaper article reporting on management and financial problems at the youth centre. The children had no doubt picked up on the staff’s anxiety, and this had made them particularly suggestible to the first girl’s complaints. Once the figure of authority had announced that the food may be poisoned, the situation simply spiralled out of control. Mass hysteria is relatively uncommon, but it does provide an alarming insight into the human mind and the ease with which it might be influenced and even manipulated. It also points to our propensity to somatize, that is, to convert anxiety and distress into more concrete physical symptoms. Somatization, which can be thought of as an ego defence, is an unconscious process, and people who somatize are, almost by definition, unaware of the psychological origins of their physical symptoms. 
As I discuss in The Meaning of Madness, psychological stressors can lead to physical symptoms not only by somatization, which is a psychic process, but also by physical processes involving the nervous, endocrine, and immune systems. For example, one study found that the first 24 hours of bereavement are associated with a staggering 21-fold increase in the risk of heart attack. Since Robert Ader’s early experiments in the 1970s, the field of psychoneuroimmunology has blossomed, uncovering a large body of evidence that has gradually led to mainstream recognition of the adverse effects of psychological stressors on health, recovery, and ageing, and, conversely, of the protective effects of positive emotions such as happiness, belonging, and a sense of purpose or meaning.

Here, again, modern science has barely caught up with the wisdom of the Ancients, who were well aware of the close relationship between psychological and physical well-being. In Plato’s Charmides, Socrates tells the young Charmides, who has been suffering from headaches, about a charm that he learnt from one of the mystical physicians to the King of Thrace. This great physician cautioned, however, that it is best to cure the soul before curing the body, since health and happiness ultimately depend on the state of the soul:

He said all things, both good and bad, in the body and in the whole man, originated in the soul and spread from there… One ought, then, to treat the soul first and foremost, if the head and the rest of the body were to be well. He said the soul was treated with certain charms, my dear Charmides, and that these charms were beautiful words. As a result of such words self-control came into being in souls. When it came into being and was present in them, it was then easy to secure health both for the head and for the rest of the body.

Mental health is not just mental health. It is also physical health.
<urn:uuid:3babfd5a-47a4-4f7c-b720-c514152aa10c>
CC-MAIN-2021-21
https://neelburton.com/category/psychiatrypsychology/page/2/
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00534.warc.gz
en
0.973144
4,379
2.71875
3
Glue is a sticky material, usually a liquid, that can bind two or more surfaces together. Glues belong to a larger family called adhesives; by convention the two classes are distinguished by origin, so that a product made from natural sources is called a glue and one made synthetically is called an adhesive, although consumers tend to use the terms interchangeably. The chief chemical components of all glues are polymers, long-chain molecules that give the material its sticky feel and allow it to adhere to objects. Furniture, plumbing, shoes, books, buildings, and automobiles all use glue in some part of their construction, and because corrugated cardboard is such a versatile packaging material, millions of tons of glue go into boxes and cartons alone.

The use of glue is ancient. Artefacts found at Sibudu Cave and Rose Cottage in South Africa show evidence of "compound adhesives" over 70,000 years ago. In the cave paintings at Lascaux, France, sticky materials were used to help the colors resist the moisture of the cave walls, glue has been found in ancient Egyptian burial tombs, and Roman mosaic floors and tiled walls and baths set with adhesives are still intact. By experimenting with hooves, hides, and other animal parts that had been reduced to jelly and then dried, early man discovered that collagen was sticky and useful for holding things together; primitive peoples also found that the air bladders of some fish yield an especially strong glue, white and odorless, named ichtyocolle (also known as isinglass). The great furniture makers of the eighteenth and nineteenth centuries, including Chippendale, Hepplewhite, Duncan Phyfe, the Adams brothers, and Sheraton, used glue for the veneers and inlays in their wood furniture.

Traditional animal glue is made from unwanted animal parts procured from slaughterhouses, tanneries, and meat-packing companies. Manufacturers take fat and bone trimmings from grocery stores, waste scraps from restaurants, and dead animals; in early America it was common practice for ranchers to send unwanted horses to glue factories. The usable parts include hides, bones, tendons, feet, ears, tails, and the fleshy sides of skins, and the active ingredient in all of them is collagen, a sticky protein found in skin, bone, and connective tissue. Hides are soaked in water to soften them, then soaked in lime, which swells the skins and breaks them down; the swollen hides are rinsed in a large washing machine to remove the lime, and the last traces of lime are eliminated by treating the stock with weak acids such as acetic or hydrochloric acid. Bones are degreased with solvents and treated to remove calcium phosphate and other minerals, which leaves the collagen in the same shape as the original piece of bone (the mineral residue is sold as bone char).

The treated material, called stock, is then cooked, either by boiling in open tanks or by cooking under pressure in autoclaves. Cooking breaks down the collagen and converts it into glue, yielding a weak, runny "glue liquor" that is drawn off, after which more water is added and the whole process is repeated at progressively higher temperatures. Although this material looks like the kind of gelatin used in food, it contains impurities. The liquor is concentrated in vacuum evaporators and can then be chilled into sheets or blocks and suspended on nets to dry and become still more concentrated, or dropped as beads or "pearls" into a non-water-bearing liquor that further dries the concentrated beads. Throughout, temperatures (and pressures, where a pressurized system is used) are monitored carefully with instruments, computerized controls, and direct observation, because the wrong temperature or pressure will ruin large quantities of stock that must then be wasted, and manufacturers will not risk such errors. The dried product keeps almost indefinitely: glue beads in their solid form have a virtually unlimited shelf life, and even prepared glue solutions that have dried out for years can usually be made usable again by warming and adding water. For sale, the glue is diluted to the right consistency and pumped into bottles or jars; the thinner the glue, the more water it has in it.
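The repeated cook-and-concentrate cycle is, at bottom, a mass-balance problem: the collagen solids stay in the liquor while water is boiled or evaporated away. A minimal sketch of that arithmetic in Python, using purely illustrative figures rather than real production values:

def water_to_evaporate(batch_kg, solids_now, solids_target):
    """Mass of water (kg) to remove so that the solids fraction of a
    batch of glue liquor rises from solids_now to solids_target.
    The solids are conserved; only water leaves the batch."""
    if not 0 < solids_now < solids_target < 1:
        raise ValueError("need 0 < current fraction < target fraction < 1")
    solids_kg = batch_kg * solids_now            # fixed mass of glue solids
    final_batch_kg = solids_kg / solids_target   # batch size at target strength
    return batch_kg - final_batch_kg             # water to evaporate

# Example: 1,000 kg of weak liquor at 5% solids, concentrated to 45% solids.
print(round(water_to_evaporate(1000, 0.05, 0.45), 1))   # -> 888.9 kg of water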
With only minor variations, the same basic processes are used to make bone glue, hide or skin glue, and fish glue. Makers of fish glue obtain bones, heads, scales, and skins of fish from canneries and other processing plants, though the result is thin and a comparatively weak adhesive. Milk solids, known as casein, and the albumin in cows' blood, which coagulates (clumps together) when heated so that it becomes insoluble in water, can also serve as a basis for glue; it is no coincidence that one of the world's largest glue manufacturers, the Borden Company, began as a dairy company. Vegetable glues are made from the starches that compose many grains and vegetables; this family also includes agar, derived from aquatic plants, algin, derived from seaweed, and gum arabic, an extract of the acacia tree (also known as the gum tree). All of these materials are dispersible or soluble in water. The substance called marine glue, used to caulk seams, consists of tar or pitch and is not truly a glue.

Different additives are mixed with the glue liquor to make brown, clear, or white products and to give them properties suitable for particular jobs or applications. Zinc oxide, for example, is added to produce white "school glue," while chemicals like alum, or acid followed by egg albumin, are used to precipitate impurities and clarify the product. Some additives control the glue's consistency; others control the rate at which it dries or cures, and for glues to set or cure properly an optimum temperature is required.

Animal and vegetable glues were the only adhesives available until around World War I, when casein and nitrocellulose glues were first manufactured. World War II led to a further flowering of the industry with the invention of neoprenes, epoxies, and acrylonitriles, which were used by the military and were not available for commercial use until the late 1940s or 1950s. Modern white glues such as Elmer's are emulsions of polyvinyl acetate (PVA); the word emulsion refers to the fact that the PVA particles have been suspended in water. PVA is produced by a chemical reaction of acetylene and acetic acid, and in addition to water and PVA the glue can contain ethanol, which slows the drying time, acetone, which speeds it up, and amyl acetate, which slows the evaporation of the glue. Super glue is made of cyanoacrylate, an acrylic resin that creates a strong bond almost instantly by reacting with the hydroxyl ions found in water; because some trace of water can be found on the surface of almost anything, super glue bonds immediately and tightly to almost any object. Its discoverer, Harry Coover, at first rejected cyanoacrylate precisely because of its highly sticky nature, before realizing how useful it was. Hot-melt glue sticks, made of thermoplastic polymers, have a virtually unlimited shelf life in their solid form; because they are neither overly sticky nor runny they are virtually mess-free, and being made of non-toxic materials free of solvents and other dangerous chemicals, they are an ideal medium for pre-schools and grade schools, provided a compatible stick is matched to each glue gun and application. Gorilla Glue, an American brand of polyurethane adhesives, was first sold to consumers in 1994. Synthetic glues used for industrial purposes are called epoxy adhesives; they have two basic components to which various modifying ingredients, such as extenders, filling agents, curing accelerators, and diluents, are added to enhance elasticity and improve their physical and mechanical properties.

The history of glue manufacture in America is closely tied to Peter Cooper. Best remembered as a philanthropist, Cooper was also a prolific inventive genius and a highly successful manufacturer. Born in New York City, the son of a Revolutionary army soldier, he was apprenticed to a coachmaker at the age of 17 and did so well that his employer paid him a salary and offered to back him in his own business; instead, Cooper went into the cloth-shearing business, in which he prospered. He then bought the rights to a glue-making process, improved it with his own invention, began operating a glue factory, and produced a much more satisfactory glue that was white and tasteless; his business enterprises grew rapidly after this success. In 1828 Cooper moved into iron manufacturing, building the Canton Iron Works. Engineers of the day held that locomotives could not run on the Baltimore and Ohio Railroad because of the twisting and hilly route its tracks followed, but in 1830 Cooper's "Tom Thumb," America's first steam locomotive, proved them wrong. His iron business expanded into mines, foundries, wire manufactories, and rolling mills, and in 1854 his Trenton factory produced the first structural iron for use in erecting fireproof buildings. As president of the North American Telegraph Company, Cooper owned and controlled half of the telegraph lines in the United States, and he became a principal backer and unwavering supporter of Cyrus Field's (1819-1892) project for laying the Atlantic cable.

Glues have proven so versatile that scientists are constantly watching for new applications that will make our lives simpler. They are still used in woodworking, in the plywood industry, and in the manufacture of abrasives like sandpaper; they are used as colloids added to industrial liquids so that suspended matter can be recovered, either to clean the liquid or to process the solids; and surgeons are experimenting with adhering materials, so that wounds may be "stitched" with glues in the next few years. Simple glues can even be made at home: a cornstarch glue needs only cornstarch, corn syrup, and vinegar; a milk glue starts by heating one cup of milk on the stove until it is warm; and classroom slime is made by mixing together 1/2 cup (120 ml) of white glue and 1/2 cup (120 ml) of water (add shaving cream if you prefer a fluffier slime).
<urn:uuid:2a76b8ad-8388-4b73-aaa6-251ee3e9b817>
CC-MAIN-2021-21
https://www.enviihaircareblog.com/hi-diddle-ewxsu/6efb6c-what-is-glue-made-out-of
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00058.warc.gz
en
0.957374
5,314
3.140625
3
Organizational development (OD) is the use of organizational resources to improve efficiency and expand productivity. More formally, it is an ongoing, systematic, science-based process of implementing effective change that helps organizations build their capacity to change and achieve greater effectiveness by developing, improving, and reinforcing strategies, structures, and processes. One widely cited definition (here translated from Indonesian) describes OD as a broad system for applying and transferring behavioral-science knowledge to the planned development, improvement, and reinforcement of the strategies, structures, and processes that lead to organizational effectiveness; this framing emphasizes management consulting, innovation, project management, and operations management. The concept of organization development, and its audit, is still an emerging field, with foundations in a number of behavioral and social sciences; over the last two decades, for example, organizations have widely applied the concept of organizational culture to comprehend human systems.

The context makes OD increasingly relevant. IT is redefining how traditional business models work, creating innovative companies with the ability to scale their services to a worldwide audience in the timespan of only a few years, and opening organizations up to worldwide opportunities and threats. Today's world is characterized by Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), so change is becoming a constant factor and it is near impossible to just implement technology and be done with it. If there were one central goal of OD, it would be increasing the organization's competitiveness, whatever the source of advantage: the people (a business leader like Elon Musk, or the Google team), an innovative product (SpaceX), superior service (Four Seasons Hotels), or culture (Zappos).

OD is related to, but distinct from, change management. Both are concerned with executing change, and well-known change models include John Kotter's eight steps to transforming your organization. But where change management is typically a temporary or incidental project, OD is a continuous process, and where change management focuses on making a defined change happen, OD applies behavioral science to help organizations improve both individuals and systems. Two assumptions underpin the field. First, OD is firmly grounded in social systems theory: each organization is conceptualized as a system of interdependent subsystems and components, people-related as well as systems-related, that influence each other and are influenced by the external environment in which they exist. Second, every part of an organization is integral to that system, so a change in one element reverberates through the internal and external environment in which the organization operates.

The conventional approach in the organizational development process is the action research model, which entails what its name describes, research and action, though there is much more to the OD process than just research and engagement. The process begins by recognizing problems: a manager or administrator spots an opportunity for improvement, usually signalled by events that are symptoms of a deeper problem. The problem is then scoped and diagnosed; diagnosis usually takes the form of data gathering, assessment of cause, and an initial investigation to ascertain options, with the OD practitioner trying to understand the system's current functioning. Data collection, which draws on existing data from work systems, questionnaires, interviews, and observation (including "fly on the wall" methods), is often time-consuming and critical for the success of a project, and many findings are subtle and complex. The information gathered is fed back to the client and used to re-evaluate the challenges identified in the first step; techniques like storytelling and visualization can be used to communicate the findings in an effective way.

Once a plan is in place, the intervention phase commences. The plan lays down all the intervention measures considered appropriate for the problem at hand, such as training seminars, workshops, team building, and changing the makeup or structure of teams. A major part of the change process is defining success criteria for change: only when these criteria are well-defined can progress be measured. As soon as the intervention plan is complete, the outcome of the change in the organization is assessed, and ongoing monitoring is needed to ensure that implemented changes last. If the required change does not take place, the organization looks for the cause and makes adjustments to ensure the obstacle is eliminated; there are multiple feedback loops, which is why organizational development is so receptive to change. Evaluation must also reckon with familiar measurement traps: observer bias (the tendency to see what we expect to see), observer-expectancy effects, the Hawthorne effect (in the famous Hawthorne studies, subjects behaved differently purely because they were being observed), and regression to the mean, since a consultant is usually brought in when things are really bad, and a really bad situation is more likely to drift back toward merely bad than to get even worse, simply because time passes.
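Because the action research model is an explicit loop (diagnose, intervene, evaluate, repeat until the success criteria are met), its control flow can be summarized in a few lines of Python. This is a purely illustrative sketch: the function names stand in for whole organizational activities and are hypothetical, not part of any OD toolkit.

def action_research(diagnose, intervene, evaluate, max_cycles=5):
    """Repeat diagnose -> intervene -> evaluate until the evaluation
    reports that the success criteria are met, or the cycle budget
    runs out. Each argument is a callable standing in for a whole
    organizational activity."""
    for cycle in range(1, max_cycles + 1):
        findings = diagnose()     # gather and analyze data on the problem
        intervene(findings)       # act on the identified problems
        if evaluate():            # measure outcomes against success criteria
            return f"criteria met after {cycle} cycle(s)"
        # criteria not met: feed the results back and run another cycle
    return "change not institutionalized; look for the cause"

# Toy usage with stubbed-out activities:
print(action_research(
    diagnose=lambda: {"absenteeism": "high"},
    intervene=lambda findings: None,
    evaluate=lambda: True,
))   # -> criteria met after 1 cycle(s)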
Cummings and Worley (2009) group OD interventions, also called organizational development techniques, into four families. Human process interventions are aimed at the process, content, or structure of groups: group interventions; third-party interventions, often used when there are conflicts and a resolution requires a neutral third party, frequently the OD consultant, to keep the discussion moving forward; team building, the activities undertaken by groups of people to increase their motivation and boost cooperation; the organizational confrontation meeting; intergroup relations interventions, aimed at diagnosing and understanding relations between groups; and large-group interventions, which bring a large number of organization members and other stakeholders together to identify and resolve organization-wide problems, design new approaches to structuring and managing the firm, or propose future directions for the organization.

Technostructural interventions refer to change programs aimed at the technology and structure of the organization; this is a fairly technical field, and so are the interventions. They include organizational (structural) design, where key activities are reengineering and downsizing and where design has become more crucial over time as structures beyond the classical hierarchical organizational chart (divisional, matrix, process, customer-centric, and network structures) have spread; total quality management, the act of overseeing different activities and tasks to ensure the quality of products and services; and work design, the effort to create a job that is motivating, taking into account factors such as skill variety, task identity, autonomy, and feedback. Work design can involve rethinking the way work is done, preparing the organization, and restructuring it around new business processes.

Human resource management interventions relate to performance management, goal setting, appraisal, and reward systems, all of which are essential to effective organizational development; developing talent, including coaching and mentoring, career planning, development interventions, and management and leadership development; diversity interventions, aimed at increasing diversity and addressing social trends; and employee wellness interventions, such as stress management programs and employee assistance programs, which aim for a healthy work-life balance. Strategic change interventions operate at the organization-wide level and include transformational change, a process that involves changing the basic character of the organization, including how it is structured and the way it operates; continuous change; and transorganizational change such as mergers, acquisitions, alliances, and strategic networking.

As this list shows, many OD interventions overlap with Human Resource Management, and in many companies OD functions are located in the HR department; OD techniques are also frequently used by external strategy consultants in change management projects. Where HRM focuses specifically on people practices, however, OD takes a more holistic approach, looking at individuals, teams, and organizational systems, and its goal is to help people function better within an organizational context. Diagnostic models help to structure the different design components of the organization: Galbraith's star model is one example, and Burke's (1992) causal model of organization performance and change clearly shows the design components that play a role at different organizational levels (organizational, group, and individual), including organizational climate and culture. Organizational commitment matters here too; O'Reilly (1989, p. 17) describes it as "an individual's psychological bond to the organisation, including a sense of job involvement, loyalty and belief in the values of the organisation."

Organizational development is a critical process that should be monitored with the right HR metrics. Strategic metrics such as market share, morale, cultural values, employee turnover, and absenteeism help you manage your organization's ability to change, and people analytics can help to further drive organizational outcomes. The benefits of organizational development include continuous development of business models; increased communication, with feedback shared continuously to encourage improvement; employee development through training and upskilling (employee training and development refers to the continued efforts of a company to boost the performance of its employees through an array of educational methods and programs); enhancement of products and services, since innovation is one of the main benefits of OD and a key contributing factor to their improvement; and, ultimately, increased profit through higher efficiency and productivity, provided the attainment of objectives does not come at too high or unnecessarily high a cost. Entities that embrace change, both internally and externally, remain innovative and profitable.

Implementing organizational development requires an investment of time and money, and strong candidates for OD roles should hold an organizational development certification that demonstrates an understanding of the field, such as the skill-oriented Institute of Organizational Development Certificate Program (ODCP), the organization development certification program by Illumeo, or the organization development certification program by the Tata Institute of Social Sciences. Erik van Vulpen, the founder of the Academy to Innovate HR (AIHR), is a globally recognized HR thought leader and teacher in the future of HR.
Association for Environment Protection Vs. State of Kerala and Others [Civil Appeal No.4941 of 2013 arising out of SLP (C) No. 18837 of 2006] G.S. SINGHVI, J. 1. Leave granted. 2. Since time immemorial, people across the world have always made efforts to preserve and protect the natural resources like air, water, plants, flora and fauna. Ancient scriptures of different countries are full of stories of man's zeal to protect the environment and ecology. Our sages and saints always preached and also taught the people to worship earth, sky, rivers, sea, plants, trees and every form of life. Majority of people still consider it as their sacred duty to protect the plants, trees, rivers, wells, etc., because it is believed that they belong to all living creatures. 3. The ancient Roman Empire developed a legal theory known as the "Doctrine of the Public Trust". It was founded on the premise that certain common properties such as air, sea, water and forests are of immense importance to the people in general and they must be held by the Government as a trustee for the free and unimpeded use by the general public and it would be wholly unjustified to make them a subject of private ownership. The doctrine enjoins upon the Government to protect the resources for the enjoyment of the general public rather than to permit their use for private ownership or commercial exploitation to satisfy the greed of few. 4. Although, the Constitution of India, which was enforced on 26.1.1950 did not contain any express provision for protection of environment and ecology, the people continued to treat it as their social duty to respect the nature, natural resources and protect environment and ecology. After 26 years, Article 48-A was inserted in Part IV of the Constitution and the State was burdened with the responsibility of making an endeavour to protect and improve the environment and to safeguard the forest and wildlife of the country. By the same amendment, Fundamental Duties of the citizens were enumerated in the form of Article 51-A (Part-IV A). These include the duty to protect and improve the natural environment including forests, lakes, rivers and wildlife and to have compassion for living creatures [Article 51-A(g)]. 5. The Courts in different jurisdictions have, time and again, invoked the public trust doctrine for giving judicial protection to environment, ecology and natural resources. This Court also recognized the importance of the public trust doctrine and applied the same in several cases for protecting natural resources which have been treated as public properties and are held by the Government as trustee of the people. The judgment in M.C. Mehta v. Kamal Nath (1997) 1 SCC 388 is an important milestone in the development of new jurisprudence by the Courts in this country for protection of environment. In that judgment, the Court considered the question whether a private company running a tourist resort in Kullu-Manali valley could block the flow of Beas river and create a new channel to divert the river to at least one kilometer downstream. After adverting to the theoretical and philosophical basis of the public trust doctrine and judgments in Illinois Central Railroad Co. v. People of the State of Illinois, 146 US 387; Gould v. Greylock Reservation Commission 350 Mass 410 (1966); Sacco v. Development of Public Works, 532 Mass 670; Robbins v. Deptt. of Public Works 244 NE 2d 577 and National Audubon Society v.
Superior Court of Alpine County 33 Cal 3d 419, this Court observed: "Our legal system - based on English common law - includes the public trust doctrine as part of its jurisprudence. The State is the trustee of all natural resources which are by nature meant for public use and enjoyment. Public at large is the beneficiary of the sea-shore, running waters, airs, forests and ecologically fragile lands. The State as a trustee is under a legal duty to protect the natural resources. These resources meant for public use cannot be converted into private ownership. We are fully aware that the issues presented in this case illustrate the classic struggle between those members of the public who would preserve our rivers, forests, parks and open lands in their pristine purity and those charged with administrative responsibilities who, under the pressures of the changing needs of an increasingly complex society, find it necessary to encroach to some extent upon open lands heretofore considered inviolate to change. The resolution of this conflict in any given case is for the legislature and not the courts. If there is a law made by Parliament or the State Legislatures the courts can serve as an instrument of determining legislative intent in the exercise of its powers of judicial review under the Constitution. But in the absence of any legislation, the executive acting under the doctrine of public trust cannot abdicate the natural resources and convert them into private ownership, or for commercial use. The aesthetic use and the pristine glory of the natural resources, the environment and the ecosystems of our country cannot be permitted to be eroded for private, commercial or any other use unless the courts find it necessary, in good faith, for the public good and in public interest to encroach upon the said resources." 6. In M.I. Builders Pvt. Ltd. v. Radhey Shyam Sahu (1999) 6 SCC 464, the Court applied public trust doctrine for upholding the order of Allahabad High Court which had quashed the decision of Lucknow Nagar Mahapalika permitting appellant - M.I. Builders Pvt. Ltd. to construct an underground shopping complex in Jhandewala Park, Aminabad Market, Lucknow, and directed demolition of the construction made on the park land. The High Court had noted that Lucknow Nagar Mahapalika had entered into an agreement with the appellant for construction of shopping complex and given it full freedom to lease out the shops and also to sign agreement on its behalf and held that this was impermissible. On appeal by the builders, this Court held that the terms of agreement were unreasonable, unfair and atrocious. The Court then invoked the public trust doctrine and held that being a trustee of the park on behalf of the public, the Nagar Mahapalika could not have transferred the same to the private builder and thereby deprived the residents of the area of the quality of life to which they were entitled under the Constitution and Municipal Laws. 7. In Intellectuals Forum, Tirupathi v. State of A.P. (2006) 3 SCC 549, this Court again invoked the public trust doctrine in a matter involving the challenge to the systematic destruction of percolation, irrigation and drinking water tanks in Tirupati town, referred to some judicial precedents including M.C. Mehta v. Kamal Nath (supra), M.I. Builders Pvt. Ltd. (supra), National Audubon Society (supra), and observed: "This is an articulation of the doctrine from the angle of the affirmative duties of the State with regard to public trust. 
Formulated from a negatory angle, the doctrine does not exactly prohibit the alienation of the property held as a public trust. However, when the State holds a resource that is freely available for the use of the public, it provides for a high degree of judicial scrutiny on any action of the Government, no matter how consistent with the existing legislations, that attempts to restrict such free use. To properly scrutinise such actions of the Government, the courts must make a distinction between the Government's general obligation to act for the public benefit, and the special, more demanding obligation which it may have as a trustee of certain public resources." 8. In Fomento Resorts and Hotels Ltd. v. Minguel Martins (2009) 3 SCC 571, this Court was called upon to consider whether the appellant was entitled to block passage to the beach by erecting a fence in the garb of protecting its property. After noticing the judgments to which reference has been made hereinabove, the Court held: "The public trust doctrine enjoins upon the Government to protect the resources for the enjoyment of the general public rather than to permit their use for private ownership or commercial purposes. This doctrine puts an implicit embargo on the right of the State to transfer public properties to private party if such transfer affects public interest, mandates affirmative State action for effective management of natural resources and empowers the citizens to question ineffective management thereof. The heart of the public trust doctrine is that it imposes limits and obligations upon government agencies and their administrators on behalf of all the people and especially future generations. For example, renewable and non-renewable resources, associated uses, ecological values or objects in which the public has a special interest (i.e. public lands, waters, etc.) are held subject to the duty of the State not to impair such resources, uses or values, even if private interests are involved. The same obligations apply to managers of forests, monuments, parks, the public domain and other public assets. Professor Joseph L. Sax in his classic article, "The Public Trust Doctrine in Natural Resources Law: Effective Judicial Intervention" (1970), indicates that the public trust doctrine, of all concepts known to law, constitutes the best practical and philosophical premise and legal tool for protecting public rights and for protecting and managing resources, ecological values or objects held in trust. The public trust doctrine is a tool for exerting long-established public rights over short-term public rights and private gain. Today every person exercising his or her right to use the air, water, or land and associated natural ecosystems has the obligation to secure for the rest of us the right to live or otherwise use that same resource or property for the long-term and enjoyment by future generations. To say it another way, a landowner or lessee and a water right holder has an obligation to use such resources in a manner as not to impair or diminish the people's rights and the people's long-term interest in that property or resource, including down slope lands, waters and resources. xxxx xxxx xxxx We reiterate that natural resources including forests, water bodies, rivers, seashores, etc. are held by the State as a trustee on behalf of the people and especially the future generations. These constitute common properties and people are entitled to uninterrupted use thereof.
The State cannot transfer public trust properties to a private party, if such a transfer interferes with the right of the public and the court can invoke the public trust doctrine and take affirmative action for protecting the right of people to have access to light, air and water and also for protecting rivers, sea, tanks, trees, forests and associated natural ecosystems." 9. We have prefaced disposal of this appeal by discussing the public trust doctrine and its applicability in different situations because the Division Bench of the Kerala High Court, which dealt with the writ petition filed by the appellant for restraining the respondents from constructing a building (hotel/restaurant) on the banks of river Periyar within the area of Aluva Municipality skirted the real issue and casually dismissed the writ petition only on the ground that while the appellant had questioned the construction of a hotel, the respondents were actually constructing a restaurant as part of the project for renovation and beautification of Manalpuram Park. 10. The people of the State of Kerala, which is also known world over as the 'God's Own Country' are very much conscious of the imperative of protecting environment and ecology in general and the water bodies, i.e., the rivers and the lakes in particular, which are integral part of their culture, heritage and an important source of livelihood. This appeal is illustrative of the continuing endeavour of the people of the State to ensure that their rivers are protected from all kinds of man made pollutions and/or other devastations. 11. The appellant is a registered body engaged in the protection of environment in the State of Kerala. It has undertaken scientific studies of environment and ecology, planted trees in public places and published magazines on the subjects of environment and ecology. In 2005, Aluva Municipality reclaimed a part of Periyar river within its jurisdiction and the District Tourism Promotion Council, Ernakulam decided to construct a restaurant on the reclaimed land by citing convenience of the public coming on Sivarathri festival as the cause. The proposal submitted by the District Tourism Promotion Council was forwarded to the State Government by the Director, Department of Tourism by including the same in the project for renovation and beautification of Manalpuram Park. Vide order dated 20.5.2005, the State Government accorded administrative sanction for implementation of the project at an estimated cost of Rs.55,72,432/-. 12. When the District Promotion Council started construction of the building on the reclaimed land, the appellant filed Writ Petition (C) No.436/2006 and prayed that the respondents be restrained from continuing with the construction of building on the banks of river Periyar and to remove the construction already made. These prayers were founded on the following assertions: a. Periyar river is a holy river called "Dakshin Ganga", on the banks of which famous Sivarathri festival is conducted. b. The river provides water to lakhs of people residing within the jurisdiction of 44 local bodies on its either side. c. In 1989, a study was conducted by an expert body and Periyar Action Plan was submitted to the Government for protecting the river but the latter has not taken any action. d. In December, 2005, Aluva Municipality reclaimed the land which formed part of the river and in the guise of promotion of tourism, efforts are being made to construct a hotel. e. 
The construction of hotel will adversely affect the flow of water as well as the river bed. f. The construction of the building will adversely affect Marthanda Varma Bridge. g. The respondents have undertaken construction without conducting any environmental impact assessment and in violation of the provisions of Kerala Protection of River Banks and Regulation of Removal of Sand Act, 2001. h. The construction of hotel building is ultra vires the provisions of notification dated 13.1.1978 issued by the State Government, which mandates assessment of environmental impact as a condition precedent for execution of any project costing more than Rs.10,00,000/-. 13. In the written statement filed on behalf of the respondents, the following averments were made: i. District Tourism Promotion Council has undertaken construction of a restaurant and not a hotel as part of the project involving redevelopment and beautification of Manalpuram Park. ii. The State Government has accorded sanction vide G.O. dated 20.5.2005 for construction of a restaurant. iii. The restaurant is meant to serve a large number of people who come during Sivarathri celebrations. iv. The construction of restaurant will neither obstruct free flow of water in the river nor cause damage to the ecology of the area. v. There will be no diversion of water and the strength of the pillars of Marthanda Varma Bridge will not be affected. 14. The Division Bench of the High Court took cognizance of the sanction accorded by the State Government vide order dated 20.5.2005 for renovation and beautification of Manalpuram Park and dismissed the writ petition by simply observing that only a restaurant is being constructed and not a hotel, as claimed by the appellant. The cryptic reasons recorded by the High Court for dismissing the writ petition are extracted below: "From the facts as gathered above, it transpires that no hotel at all is being constructed in the river belt. The petitioner does not appear to have ascertained the correct facts before filing the present petition. Main allegation by the petitioner that a hotel is being constructed on the banks of Periyar river is found to be incorrect. There is no merit in this writ petition. It is hereby dismissed." 15. Shri Deepak Prakash, learned senior counsel for the appellant invited the Court's attention to order dated 13.1.1978 issued by the State Government and argued that the sanction accorded by the State Government on 20.5.2005 for renovation and beautification of Manalpuram Park did not have the effect of modifying G.O. dated 13.1.1978 which mandates that all development schemes costing Rs.10 lakhs or more should be referred to the Environmental Planning and Coordination Committee for review and assessment. Learned counsel submitted that unless the project was referred for consideration by the Committee constituted by the State Government, the respondents could not have undertaken construction of the restaurant. 16. Learned counsel for the respondents could not draw our attention to any document to show that the construction of restaurant building was undertaken after obtaining clearance from the Environmental Planning and Coordination Committee as per the requirement of G.O. dated 13.1.1978. She, however, submitted that the construction of restaurant which is an integral part of the project relating to renovation and beautification of Manalpuram Park is not going to adversely impact the flow of Periyar river or otherwise affect the environment and ecology of the area. 17.
We have considered the respective arguments and scrutinized the record. On 13.1.1978, the Government of Kerala accepted the recommendations made by the State Committee on Environmental Planning and Coordination and issued an order, which was published in Official Gazette dated 7.2.1978 for review and assessment of environmental implications of various projects. The relevant portions of that order are reproduced below: "In the light of the recommendation of the State Committee on Environmental Planning and Co-ordination in their second meeting held on 23.7.1977, Government are pleased to order as follows: 1. All development schemes costing Rs.10 lakhs and above will be referred to the Committee on Environmental Planning and Co-ordination for review and assessment of environmental implications in order to integrate environmental concerns and the clearance of the committee will be obtained before the schemes are sanctioned and taken up for execution. 2. In the case of projects costing Rs.25 lakhs and above the Department concerned will while referring the projects for review and clearance by the committee furnish detailed and comprehensive environmental impact statement for the project prepared with the help of experts. 3. In the case of schemes costing less than Rs.10 lakhs, the environmental implications will be assessed by the concerned department in the light of guidelines formulated by the committee and the concerned department will be responsible to ensure that suitable remedial measures for protecting the environment are incorporated in the scheme itself before the schemes are sanctioned and taken up for implementation. If the department concerned feels certain that with the safeguards provided in the scheme, the ecological stability and purity of environment will be maintained they can go ahead with the scheme without reference to the committee. Doubtful cases will however be referred to the committee for clearance. By order of the Government. Nair Under Secretary." 18. By G.O. dated 20.5.2005, the State Government accorded administrative sanction for renovation and beautification of Manalpuram Park and construction of a restaurant at Aluva at an estimated cost of Rs.55,72,432/-. That order reads as under: "GOVERNMENT OF KERALA Abstract Department of Tourism - Working Group on Plan Schemes - Renovation of Manalppuram Park and construction of Restaurant at Aluva - Administrative Sanction accorded - Orders issued. TOURISM (A) DEPARTMENT Dated, Thiruvananthapuram 20.05.2005 Read: Letter No.C2-22446/04, dated 11.04.2005 from the Director, Department of Tourism, Thiruvananthapuram. The Aluva Manalppuram is a significant pilgrim centre as well as tourism spot. The Aluva Manalppuram is famous for Shivarathri celebrations. The pilgrims visiting Kalady, the birthplace of Shri Shankaracharya include this spot also in the schedule of visit. The Director, Department of Tourism as per the letter read above has forwarded a proposal submitted by the District Collector and Chairman, DTPC, Ernakulam for the renovation of the Manalppuram Park and construction of Restaurant at Aluva and has requested for Administrative Sanction for the project at an estimated cost of Rs.55,72,432/- as detailed below. 1. Beautification of Manalppuram Park 2. Construction of Restaurant The Working Group that met on 29.04.2005 considered the proposal of the Director, Department of Tourism and approved it.
Sanction is therefore accorded for the Project for the renovation of Manalppuram Park and construction of Restaurant at Aluva at an estimated cost of Rs.55,72,432/- (Rupees Fifty Five Lakhs Seventy Two Thousand Four Hundred and Thirty Two only). The expenditure on this account will be met from the head of account "3452-80-800-90(29)-Upgradation and creation of infrastructure facilities at Tourist Centres (Plan)". The work will be executed through DTPC, Ernakulam and will be completed within a period of six months. By Order of the Governor D. Saraswathy Amma, 19. There is nothing in the language of G.O. dated 20.5.2005 from which it can be inferred that while approving the proposal forwarded by the Director, Department of Tourism for renovation and beautification of Manalpuram Park at an estimated cost of Rs.55,72,432/-, the State Government had amended G.O. dated 13.1.1978 or otherwise relaxed the conditions embodied therein. The record also does not show that the Department of Tourism had furnished a detailed comprehensive environmental impact statement for the project so as to enable the Committee to make appropriate review and assessment. Therefore, it must be held that the execution of the project including construction of restaurant is ex facie contrary to the mandate of G.O. dated 13.1.1978, which was issued by the State in discharge of its Constitutional obligation under Article 48-A. Unfortunately, the Division Bench of the High Court ignored this crucial issue and casually dismissed the writ petition without examining the serious implications of the construction of a restaurant on the land reclaimed by Aluva Municipality from the river. 20. G.O. dated 13.1.1978 is illustrative of the State Government's commitment to protect and improve the environment as envisaged under Article 48A. The object of this G.O. is to ensure that no project costing more than Rs.10 lakhs should be executed and implemented without a comprehensive evaluation by an expert body which can assess possible impact of the project on the environment and ecology of the area including water bodies, i.e., rivers, lakes etc. If the project had been referred to the Environmental Planning and Co-ordination Committee for review and assessment of environmental implications then it would have certainly examined the issue relating to desirability and feasibility of constructing a restaurant, the possible impact of such construction on the river bed and the nearby bridge as also its impact on the people of the area. By omitting to refer the project to the Committee, the District Tourism Promotion Council and the Department of Tourism conveniently avoided scrutiny of the project in the light of the parameters required to be kept in view for protection of environment of the area and the river. The subterfuge employed by the District Promotion Council and the Department of Tourism has certainly resulted in violation of the fundamental right to life guaranteed to the people of the area under Article 21 of the Constitution and we do not find any justification to condone violation of the mandate of order dated 13.1.1978. 21. In the result, the appeal is allowed and the impugned order is set aside. As a sequel to this, the writ petition filed by the appellant is allowed and the respondents are directed to demolish the structure raised for establishing a restaurant as part of renovation and beautification of Manalpuram Park at Aluva. The needful be done within a period of three months from today. ..........................J. [G.S.
SINGHVI] ..........................J. [SHARAD ARVIND BOBDE] July 2, 2013.
Introduction / Brief History

Very brief – search for “guitar history” on the net if you’re interested – and have enough time. The thing Nero played when burning down the city of Rome was called “cithara”, the ancient Greeks called it “kithara”. So we can guess at the roots of the word “guitar” (Italian: chitarra, French: guitarre, German: Gitarre, Spanish: guitarra), but the roots of the instrument itself are widespread. There are many discussions about the role of the Asian and Arabic invasions of Europe and the resulting mixture of guitar-like instruments. We start in 15th-century Spain, where an instrument called the “vihuela” was played. It already had the look of a guitar, but only four or – later – five double strings made of gut (nylon hadn’t been invented yet). Increasing the size, using six single strings and some modifications of the body led around 1800 to the instrument we call the guitar. The tuning was identical to the standard tuning used nowadays. Antonio de Torres Jurado, a Spanish guitar manufacturer, developed the guitar to the size and look we know today. Manufacturers were spread all over Europe, but the center was still Spain. Around 1830 the German Christian Frederick Martin emigrated to New York and started the production of guitars. Moving to Nazareth, Pennsylvania (USA), he was one of the first to build guitars with steel strings instead of the gut strings used in Europe. Orville Gibson originally built mandolins, but started to make guitars as well (1894, Kalamazoo, Michigan, USA), which were a great success and prompted Martin to develop new models, called “Dreadnought” (after a British warship from the First World War) due to their size and sound. This was around 1929. Two years later the first electric guitar with a pickup was built – a Rickenbacher lap steel called the “Frying Pan”. In 1936 Gibson put a pickup into a hollow-body archtop guitar – the ES-150 was born. ES means electric-Spanish, and with Charlie Christian the guitar became more and more a solo instrument. Two years later bluesman Robert Johnson recorded his first record with an acoustic guitar. A little later Lester William Polsfuss, a popular jazz guitar player better known as Les Paul, built the “Log”, a simple but heavy solid-body electric guitar, at the Epiphone Guitar Company – on Sundays. He cut an Epiphone f-hole acoustic into two pieces and inserted a block of solid maple wood on which he mounted a Gibson neck and two single-coil (!) pickups. In 1951 Gibson introduced the first prototype of the “Gold Top”, the guitar which was later called the Gibson Les Paul – until 1957 still with single-coil pickups. During that time Les Paul also developed a recording technique called “overdubbing” – in 1954 he even constructed an eight-track (!) tape recorder for Ampex, which was a breakthrough in recording technology. But Paul wasn’t alone. In the late 1940s Leo Fender introduced the Fender Broadcaster, which was later renamed the Telecaster. Around the time Gibson released the Les Paul, Fender introduced the first solid-body electric bass – the “Fender Precision” bass. Two years later, in 1954, the Stratocaster, “the best instrument in the world, once and for all” in Leo’s own words, was introduced. Both the Les Paul (“Paula”) and the Fender Stratocaster (“Strat”) are still the most popular electric guitars in the world.

Nylon string guitars

There are two main types of nylon string guitars, the well-known classical guitar and the flamenco guitar.
You might not see it at first sight, but they are different due to the different kinds of music. A flamenco guitar is a guitar plus a percussion instrument. It’s light, has a fast attack and a low action. A classical guitar offers more different tones and more sustain. Nevertheless, neither was born to play the Blues, so we’ll skip them.

Steel String Acoustic Guitars

Talking about this kind of guitar is talking about Martin guitars – see history. The most common form, the Dreadnought, has a large body to fill the needs of playing in a loud band. The Martin D-28 (D for Dreadnought) is still available – since 1931! There are many differences from the classical guitar – the narrower fretboard, the steel strings, the pick guard, the tuning pegs. But the body is also more stable – never put steel strings on a classical or flamenco guitar! Most acoustic guitars are flat-top guitars, developed from the classical guitar. The other method of building a guitar was derived from violin and cello building; these guitars are called arch-top guitars. The body of an arch-top guitar is made from solid pieces of wood, shaped and often with so-called f-holes; the first guitar with these holes was the Gibson L-5. These guitars are often used in Jazz music. While Martin was best known for flat-tops, Gibson was the No. 1 for arch-tops. All Martin guitars have a somewhat cryptic code, consisting of two parts. The first one describes the size while the second one describes the style. A Martin D-28 is simply a Dreadnought-size guitar in the “28” style (rosewood back and sides, spruce top). There is also for example an OOO-28 (auditorium size, “28” style) or a D-45 among many other combinations. Completely different in construction and sound, but often used for Blues guitar, are resonator guitars or resophonic guitars. They have a metallic, loud sound due to a metal cone built into the guitar. This works like a mechanical amplifier: the vibration of the strings is carried over to the cone by the bridge, and the resonance causes the high volume. Resonator guitars were developed around 1920 by the Dopyera Brothers. The “National Guitar Company” and the “Dobro Company” produced two different kinds of resonator guitars: while the original “National” has an all-metal body and cone, the “Dobro” has a traditional wood body with a metal bowl-shaped resonator.

Electric guitars can be divided into two classes – in two ways. The first is to distinguish between solid-body and hollow-body guitars, the second is to distinguish between those with single-coil pickups and those with humbuckers. While the difference between solid- and hollow-body guitars is easy to understand, we need to know a bit more about the electric parts of a guitar to understand the different pickups. The principle of a pickup is pure physics, called electromagnetic induction (the same principle as a generator, or the inverse of an electric motor). You need a magnet (permanent type) and a copper wire (other metals also work). The wire is coiled around the magnet several times. As long as nothing happens, we get a stable electromagnetic field around the pickup. A vibrating steel string disturbs this field and generates an alternating current. The frequency of this current is identical to the frequency of the string (see basics). If you pick the A string, for example, the current of the pickup has a frequency of 110 Hz (not 440 Hz as in some guitar books).
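If you want to put a number like that on a string yourself, here is a minimal Python sketch of Mersenne’s law for an ideal string, f = √(T/μ)/(2L). The scale length is Fender-style; the tension is an assumed, purely illustrative value:

```python
import math

def string_frequency(length_m, tension_n, mass_per_length):
    """Fundamental of an ideal string: f = sqrt(T/mu) / (2L) (Mersenne's law)."""
    return math.sqrt(tension_n / mass_per_length) / (2.0 * length_m)

scale = 0.648     # m, a Fender-style 25.5" scale length
tension = 71.0    # N (~16 lbs) -- assumed, illustrative value

# Mass per unit length that puts this string at A = 110 Hz:
mu = tension / (2.0 * scale * 110.0) ** 2    # ~3.5 g per metre

print(string_frequency(scale, tension, mu))          # ~110.0 Hz
print(string_frequency(scale, tension, 4.0 * mu))    # ~55.0 Hz: four times heavier, one octave lower
```

The second line is the whole story of string gauges in miniature: at a given length and tension, more mass per length means a lower note, and the induced current in the pickup simply mirrors that vibration.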
The voltage of the induced current is very low – you can’t feel it, but if you have a voltage meter, you’ll measure up to 200 mV, depending on the pickup type, volume settings etc. You can also plug the guitar output directly into the line-in of your computer, the signal is high enough. Guitar pickups have six magnets with the wire coiled around all magnets together. So the current contains several different frequencies overlaid, analogous to the acoustic sound of the strings. These currents are directed into an amplifier (“amp”) which amplifies the signal and passes it to the speaker membrane, where we finally get the sound of our guitar. There are two main types of pickup. The single-coil pickup is – as the name says – a single coil wrapped around the magnets. Most Fender guitars use this type, and it is responsible for the typical “Fender” sound: clear, crispy, biting. But it has a big drawback: it is very sensitive to electromagnetic fields. A television, a radio, any electric device including amps induces noise in the coil. To compensate for this “hum”, the Gibson engineer Seth Lover created the humbucker. Two coils are wired together in series, but out of phase and with opposite magnetic polarities, so they eliminate unwanted noise but double the signal from the strings – as long as they are wired correctly, otherwise they sound weak. I replaced the neck pickup of my Strat with a humbucker in single-coil size, which gives a fatter sound, but it’s a different sound than a Les Paul. Quentin Jacquemart wrote an interesting article about Brian May’s (Queen) way of wiring pickups: BHM_Wiring (pdf document). Recommended Link on Pickups: Seymour Duncan – look at the FAQ!

- Single-coil (Fender Strat/Tele…): clear, sharp, crispy, twangy etc., sensitive to electric fields
- Humbucker (Gibson Les Paul/SG/ES335…): fat, powerful, no humming, less treble

Keeping this in mind, I’ll talk about the most popular guitars – Fender Telecaster and Stratocaster as well as Gibson Les Paul and the ES-335. All of them have been played by EC. This does not mean other guitars are bad, but these are the originals.

The Fender Telecaster

The Telecaster (the “plank”) was introduced in 1951 as the successor of Leo Fender’s first mass-produced guitar, the Broadcaster (designed in the late 40’s), which was the first solid-body guitar with a bolt-on neck. Because of trademark problems with Gretsch (a drum set was also called Broadkaster) they had to change the name. Since television was the new medium and absolutely “in” in those days, they called it Telecaster. For a few months they simply left the name off the headstock; these rare guitars are called Nocasters. The precursor of the Broadcaster was the Esquire, introduced in 1950, first with a single pickup (but with cavities routed for two!) and later also with two pickups; it is also referred to as the pre-Telecaster, but only a few of them were sold. After 50 years the Tele is still in production, with only minor changes. The original Telecaster is a solid-body guitar with two single-coil pickups, maple neck, ash body, brass bridge saddles, and a black pick guard. The bridge pickup is slightly angled to emphasize the bass frequencies of the lower strings. Volume and tone controls are shared by both pickups. The original price was around 170 US$. The Telecaster was a great success among Blues, Country and Rock’n Roll artists, while many jazz players didn’t like the sharp tone. Some of them replaced the bridge pickup with a humbucker in order to get a warmer tone.
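As a toy numeric illustration of the humbucker principle described above – not a model of any real pickup – the reversed winding of the second coil flips both the string signal and the hum, and the reversed magnet polarity flips the string signal back, so only the hum arrives inverted and cancels in the series sum:

```python
import numpy as np

t = np.linspace(0, 0.02, 1000)               # a 20 ms window
string = np.sin(2 * np.pi * 110 * t)         # A-string signal, 110 Hz
hum = 0.3 * np.sin(2 * np.pi * 50 * t)       # mains hum (50 Hz here)

# Reversed winding negates everything coil 2 picks up; the reversed
# magnet polarity negates the string signal once more. Net effect:
coil1 = string + hum
coil2 = string - hum

series_sum = coil1 + coil2                   # humbucker output
print(np.max(np.abs(series_sum - 2 * string)))   # ~0: hum gone, string signal doubled
```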
The Fender Stratocaster

Reacting to the great success of the Telecaster, Gibson released the Les Paul in 1952. So Leo Fender constructed the Stratocaster in 1953; this guitar was his masterpiece and is still the embodiment of what we call an electric guitar. What made this guitar so great? It had three single-coil pickups, a pickup switch (originally with three, later five positions to choose between the pickups), two tone controls and one volume control, and a vibrato bar. So it had everything, could produce many different sounds and was a very handy, tough and well-shaped (today we call it ergonomic) guitar. Up to now there have been only minor changes in shape; if you see a picture of a 1954 Strat, it looks like the one in your guitar shop. There are many different types of Strats out there, with different necks (V-shaped, U-shaped), different pickups (for example the “Lace Sensor” pickups to get rid of the hum without losing the crispy Strat sound) and other modifications, but the tone is still typical. There are many “Signature” Strats you can buy, including an EC model, the first Signature model, introduced in 1988. EC asked Fender for a guitar with a V-shape neck like his Martin and some other details, so the EC Signature was born, with active Lace Sensor pickups and a blocked vibrato system. Many other Signature models followed. After all, no other guitar is as popular and as much copied as the Strat. Players like Jimi Hendrix or Stevie Ray Vaughan are associated with it, as well as – EC!

The Gibson Les Paul

The Gibson Les Paul is the other solid-body guitar released in the 50’s and still being produced. It was originally equipped with single-coil pickups, but in 1957 they were replaced by the first humbuckers. The substantially changed sound was the basis for its big success. The Les Paul is a very heavy guitar with a lot of sustain and produces a really fat tone, loved by jazz musicians as well as Blues and many heavy metal players. But during the 50’s there was not much interest in that sound; the crispy single-coil sound of the Fender guitars was popular. Then came the 60’s and Eric Clapton, as well as Michael Bloomfield. They ran Paulas through big Marshall stacks and played Blues and Bluesrock, loud and dirty, crying, full volume. This was a revolution in popular music, so other artists began to discover the Les Paul again. Up to now the Gibson Les Paul is one of the most popular guitars around, in many different music styles.

The Gibson ES-335

The ES-335 (ES stands for electric-Spanish) was released in 1958 (for only $267.50!) and is a semi-hollow-body guitar, with a block of solid wood in the middle of a hollow arch-top body. This piece of wood blocked the feedback that most hollow-body guitars suffer from. The ES-335 has a very unique sound, warm and mellow, together with humbuckers great for playing deep Blues as well as Rock’n Roll. Artists like Chuck Berry, Larry Carlton (“Mr. 335”), EC and B.B. King with his Lucille made it popular. There are many clones out there, good and bad. A good alternative is the ES-335’s sister from Epiphone, the Sheraton (released in 1959); John Lee Hooker played it.

Strings

We’re talking about steel strings only, because nylon strings are not well suited for Blues guitar. There are two main types of strings, with and without windings. Strings without windings are called plain strings and are almost identical for acoustic and electric guitars.
Plain strings are used for higher notes, so the B and high E strings are always plain; most electric guitars also use a plain G string. Deeper notes have a lower frequency, so the strings must have a higher weight (pure physics). This is achieved by wrapping another wire around the core steel string rather than just using a bigger diameter, because otherwise the tension of the string would get too high. Sitars only use plain strings. The problem is that the windings collect debris, so the lifetime is limited. Wound strings can be divided by what material is used (stainless steel, nickel, bronze, phosphor bronze and many more) and how the strings are wound (round, flat, ground-wound). Each type has a specific sound, but don’t expect too much difference in tone. The wound strings are different for electric guitars (silver-looking metals like steel or nickel) and acoustic guitars (gold-looking metals like bronze or brass), because the pickups of the electric guitar need a magnetically responsive material. Try out different types and brands on your guitar, there’s no rule I can tell you. As for the diameter, there are common sets of strings with compatible sizes; most players use the diameter of the smallest string to describe them, for example 0.010″ (“regular” or extra light) for a very common set of strings from 0.010″ to 0.050″. Beginners should use smaller diameters (super slinky or ultra light, 0.008″ to 0.038″), because they are easier to bend. Stevie Ray Vaughan could bend 0.013″ strings without problems. Again, no general rule can be given, it depends on what your fingers can bend. The bigger the string the louder it is – again pure physics. Change the strings when they are “dead” (they’ve lost their elasticity and picked up dirt), no need to change every gig. Fingerstyle players play longer with a set of strings! You can also try to boil the strings in water for 15 minutes and let them dry, this removes the dirt from the windings. You can change the strings yourself, this is no big job. Just remove the old ones (loosen them up first and remember how they run) and put on the new ones. The neck can be without strings for a while, so you can also polish the frets carefully and clean the fretboard. New strings should be stretched a little so the tuning is more stable. A string-winder is a very useful tool if you have to change often.

Amplifiers, pedals and effects

There are two main types of amplifiers for electric guitars – those with transistors (solid state) and those with tubes (valves). Some manufacturers also offer hybrid amps containing both. Most Blues players use tube amps, because they give the typical Blues sound – warm, fat, with a big dynamic range from whispering to crying, adding more harmonic content to the signal received. Transistor amps sound more synthetic, but they don’t need a warm-up time, are cheaper and are harder (if ever) to burn out. But once again, for Blues we use tube amps. Most amps, even the cheap ones, have tone controls (treble, middle, bass or just one tone knob). They are easy to understand and you can directly check out how they change the sound. Distortion and overdrive effects are more difficult. When the electric guitar was born, some players tried to play as loud as possible, even with smaller amps. They discovered that the sound of a valve amp changes when “overdriving” it. The sound is not only louder but also gets distorted due to various effects from overloading the tube and speaker – it “breaks up”.
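That “break-up” can be caricatured in a few lines of Python – a toy soft-clipping sketch, not how real valve amps are modeled: flattening the peaks of a clean 110 Hz tone adds strong odd harmonics, which is a large part of the overdriven sound.

```python
import numpy as np

t = np.linspace(0, 1/110, 256, endpoint=False)   # exactly one cycle of an A-string tone
clean = np.sin(2 * np.pi * 110 * t)

def tube_like(signal, drive):
    """Soft clipping -- a crude stand-in for valve 'break-up'."""
    return np.tanh(drive * signal) / np.tanh(drive)

overdriven = tube_like(clean, drive=5.0)

def harmonic_levels(x, n=5):
    """Levels of the first n harmonics, relative to the fundamental."""
    spec = np.abs(np.fft.rfft(x))
    return spec[1:n + 1] / spec[1]

print(harmonic_levels(clean))        # ~[1, 0, 0, 0, 0]: a pure tone
print(harmonic_levels(overdriven))   # fundamental plus strong 3rd and 5th harmonics
```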
Feedback is in general a major problem with electric guitars. Never leave your guitar near your amp with the pickup volume turned up! Either shut down the amp or close the volume knob, otherwise the feedback can destroy the amp – and your ears. No joke this time. The problem is that if you want to get the distorted tube amp sound, you have to play loud. During the recording of the BEANO album EC wanted to play that way, so the sound technician had some major problems. To get distortion for home use, something new had to be invented. So a second amplifier stage called a pre-amp was built into the amplifier. With the pre-amp turned up to get the distortion, the power amp can be set to a lower volume; the two are combined using a master volume control. The sound is a bit different (some say worse), but it’s better for your ears. Typical Blues amps are from Marshall (e.g. the 1962 “Bluesbreaker” combo and other combos) or Fender (Fender Deluxe Reverb, Fender Tweed Twin, 1965 Reissue Twin Reverb, Bassman, or a bit cheaper the Blues Junior) – see the links on the left.

Pedals? What pedals? Effects?

Pedals are small devices with an input and an output, controlled with your feet, because your hands are too busy while playing. They produce effects – from simple volume control and signal boost to things like wah-wah, compression, overdrive, delay, reverb and many more. Most effects are not used for Blues music, and I’m not an expert on this, so please use a search engine to find other pages on the net – there are plenty! Marshall offers a nice Bluesbreaker boost pedal; there’s also a pedal called “Crossroads” to emulate different EC styles.

What equipment should I buy? How do I check it?

Again there’s no general rule, it depends on your personal taste, your fingers and, most important, the thickness of your wallet. If you want to get an acoustic guitar, you need someone who can play guitar so you can listen in front of it. Try out all settings, pickups, tones and volumes. Don’t enter a guitar shop where there are “Don’t touch!” labels on the guitars. Good shops have a separate room with some amps to check it all out. Look at all parts, the finish, the head, the neck and the frets, and play, play, play. Don’t use a guitar shop for a guitar duel. Don’t listen to others while testing, and don’t care if your playing is not fit for showing off. Play loud to hear the pickups or the sustain. You need time, don’t hurry. Use the net, newsgroups, forums etc. for prices and information about the different models. Don’t buy a guitar without playing it first, and do this with your favorite amp. If you don’t have a good local guitar shop, you can buy online. Be sure that you can return the item and get your money back if you’re not satisfied with it.

What do I really need?

For an acoustic guitar:
- optional: strap, pick, gig bag, pickup or microphone, amplifier

For an electric guitar:
- cable (don’t forget! When I got my first electric at Christmas, I didn’t get a cable…)
- optional: strap, pick, gig bag, pedals, effects, computer

Case one: money is not a problem

This is obviously the best condition. You can get a Gibson Les Paul, an SG and an ES-335 as well as a Fender Strat and Tele. You can buy the best fitting Martin, a Dobro and a National. Plus the nicest Fender Twin amps and Marshall stacks…

Case two: money is limited, but I want it all

Most originals have more or less good copies for a lower price, some of them made by the same company using cheaper hardware, but these are – like the pickups – upgradable.
Fender offers Japanese/Korean/Mexican/other-countries guitars (the Squier series) for less than half the price of the American original, and they are usually quite good. The same goes for Gibson’s daughter company Epiphone. An Epiphone “The DOT” may not sound as good as a Gibson ES-335, but it comes close. Martin doesn’t offer cheap ones, but there are many good copies from other companies around (e.g. Takamine, Seagull, Yamaha, even Fender). So you can get three different guitars for the price of one, but they are not as good as the originals. Same with amps – there are some nice transistor amps emulating a tube amp; if the sound is OK for you, try them out. Take a look at the Fender Blues Junior, a good cheap tube amp.

Case three: money is limited, but I want good stuff

See case one – but choose only one…

Case four: I’m a beginner, what should I buy?

EC: “I don’t think there’s any easy way in, to be honest. If it’s a question of finance, the problem is if you buy something cheap, it will actually inhibit your progress, because a cheap guitar is going to be harder to play than an expensive guitar. It’s probably best to go in at the deep end and buy something really good like a Fender or a Martin. In their catalogs there are lower-priced models, but they will still have the quality of workmanship, so you can make leaps and bounds in the earliest part of your trying. I think it’s important to buy good quality merchandise because it will enhance the playing and sound better.”
The photon has a wave nature, which is why we can refract and diffract light. But what sort of a wave nature? When you try to find a picture, a lot of illustrations depict the photon as some kind of wave train. Even Feynman diagrams do this.

Image by bitwise, see Wikipedia Commons

The photon is shown as a squiggly line, sometimes with an arrowhead, something like this: ⇝. That suggests you could split a photon lengthwise and end up with two photons, each with the same wavelength as the original, each with half the energy. That can’t be right. The photon energy E = hc/λ depends on Planck’s constant h and the wavelength lambda λ. Wavelength is inversely related to frequency f via the speed of light c, which is distance over time. Hence we can also write the photon energy as E = hf. But there is nothing in this expression to denote the number of waves in the train. And when you chop a photon in half to convert it into two photons – as happens in parametric down-conversion; a simple beam splitter merely sends the whole photon one way or the other – each has twice the wavelength of the original. The photon is one of those things where when you chop it in half it’s twice as big. So it isn’t a wave train.

The photon is not a wave packet

Other illustrations depict the photon as a wave packet. You can find articles suggesting that Einstein talked about wave packets in his 1905 photoelectric paper. However he didn’t actually use the phrase, he talked about the light quantum instead. He said light quanta move without dividing and are absorbed or generated only as a whole, and that the simplest picture is one where the light quantum gives its entire energy to a single electron. That fits with the wave-packet idea. But Einstein also said “it must not be excluded that electrons accept the energy of light quanta only partially”. That’s as per Compton scattering, where some of the photon energy is transferred to an electron and the photon wavelength increases. That doesn’t fit with the wave-packet idea. And as far as we know the photon has a single wavelength, not multiple wavelengths. There is no actual evidence that a photon is some infinite set of component sinusoidal waves. On the contrary, the evidence says the photon is a single wave or pulse as per How Long Is a Photon? by Igor Drozdov and Alfons Stahlhofen dating from 2008. There are no observations of any oscillations inside a photon. There is no evidence that a photon is a wave-train like their figure 1. Or a lemon-like wave-packet of waves of different amplitudes like figure 2. The evidence says the photon is more like their figure 3:

Images from How Long Is a Photon? by Igor Drozdov and Alfons Stahlhofen

That’s not to say waves don’t come in trains. We know they do. We’ve seen what happens in an earthquake. But I think it’s better to say a train of light waves is a succession of photons, not a single photon. So I think the photon must be some kind of a singleton soliton light wave. So far so good.

The photon is not an electric wave and a magnetic wave

So, a photon is a singleton soliton light wave. Light waves are electromagnetic waves. Not electric waves and magnetic waves, electromagnetic waves. James Clerk Maxwell unified electricity and magnetism a hundred and fifty years ago. The electromagnetic field is a dual entity, wherein electric and magnetic fields are “better thought of as two parts of a greater whole”. See section 11.10 of John Jackson’s Classical Electrodynamics where he says “one should properly speak of the electromagnetic field Fµν rather than E or B separately”. It’s similar for electromagnetic waves.
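As an aside, the E = hc/λ arithmetic above is easy to check – a minimal Python sketch, with the constants hard-coded and the 500 nm starting wavelength chosen purely for illustration. Halving the energy doubles the wavelength:

```python
# Planck relation: E = h*c/lambda, so half the energy means twice the wavelength.
h = 6.62607015e-34   # J*s, Planck's constant
c = 2.99792458e8     # m/s, speed of light

lam = 500e-9                # m, an illustrative green photon
E = h * c / lam             # photon energy
E_half = E / 2              # each daughter photon's energy
lam_half = h * c / E_half   # wavelength corresponding to half the energy

print(E)                # ~3.97e-19 J
print(lam_half / lam)   # ~2.0 -- chop it in half and it's twice as big
```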
See the Wikipedia electromagnetic radiation article and note this: “the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first order in time”. The orthogonal sinusoidal electric and magnetic waves in the depictions are somewhat misleading. The electric wave is the spatial derivative of the electromagnetic wave, whilst the magnetic wave is the time derivative. For an analogy, imagine you’re in a canoe at sea. Imagine something like an oceanic swell wave or tsunami comes at you. Let’s say it’s a 10 m high sinusoidal hump of water without a trough. As the wave approaches, your canoe tilts upward.

The canoe analogy: E = tilt, B = rate of change of tilt

The degree of tilt denotes E, whilst the rate of change of tilt denotes B. When you’re momentarily at the top of the wave, your canoe is horizontal and has momentarily stopped tilting, so E and B are zero. Then as you go down the other side, the situation is reversed. The important point to note is that there’s only one wave there. Like Oleg Jefimenko said: “neither Maxwell’s equations nor their solutions indicate an existence of causal links between electric and magnetic fields. Therefore, we must conclude that an electromagnetic field is a dual entity always having an electric and a magnetic component simultaneously created by their common sources: time-variable electric charges and currents”. That’s why E and B are always in phase. Because the current in this canoe analogy is water surging up then down. It displaces you in an upward direction with a tilt and a rotation that increases then decreases. Then it displaces you back down again. Because of this it’s an alternating current rather than a direct current like a river. Note that electrical impedance is resistance to alternating current, and that there’s such a thing as wave impedance and vacuum impedance. Also note that it’s a myth that an E wave generates a B wave which generates an E wave and so on. The people who say this tend to be unaware of electromagnetic unification, and tend to say that this is why light doesn’t need a medium to travel in. It’s an incorrect assertion. We have Faraday’s law, usually written as ∇ × E = − ∂B/∂t, not because changing one field creates the other, or because one circulates round the other. The equals sign is an “is”. The curl of E is minus the time rate of change of B. Because they’re two aspects of the same thing, the electromagnetic wave.

Potential is more fundamental than field

Or they’re two aspects of an electromagnetic field-variation if you prefer. I prefer the former myself, because I think the photon electromagnetic wave is more fundamental than the electron electromagnetic field. But perhaps neither is ideal, because potential is said to be more fundamental than field. There’s an unattributed remark in the Wikipedia Aharonov-Bohm article: it says Feynman wished he’d been taught electromagnetism from the perspective of electromagnetic potential rather than electromagnetic fields. Yes, there’s an unfortunate ambiguity in that the use of the word fields as opposed to field suggests we’re talking about the electric field and the magnetic field, not the electromagnetic field. And things can get confusing in that electromagnetic four-potential is also described as the gauge field of quantum electrodynamics.
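Back to the Faraday’s law point above: here is a quick symbolic check – a minimal sympy sketch, assuming a linearly polarized plane wave in vacuum with E along x and B along y, both proportional to sin(kz − ωt) and hence exactly in phase. The law is satisfied identically; neither field has to “generate” the other:

```python
import sympy as sp

z, t, E0, k, c = sp.symbols('z t E0 k c', positive=True)
w = c * k                              # omega = c*k for light in vacuum

Ex = E0 * sp.sin(k*z - w*t)            # electric part, along x
By = (E0 / c) * sp.sin(k*z - w*t)      # magnetic part, along y, in phase with E

# For this geometry Faraday's law reduces to dEx/dz = -dBy/dt:
lhs = sp.diff(Ex, z)                   # the y-component of curl E
rhs = -sp.diff(By, t)
print(sp.simplify(lhs - rhs))          # 0: one wave, two aspects
```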
But remember that as per the canoe analogy, the orthogonal sinusoidal electric and magnetic waves are the spatial and time derivatives of the real wave. The real wave is the integral of either sine wave. Now think of that hump of water, and the potential is the height of it. It’s there because the water is there. The electromagnetic wave is the exterior derivative of potential, the shape of the hump. The electric field is the slope of the hump at some point, and the magnetic field is the time-rate-of-change of slope. The important point is that without ten metres of extra water underneath you, there would be no hump, no slope, and no change of slope either. That’s why potential is more fundamental than field. However it can be difficult to detect. If you were canoeing on Lake Superior, you might not realise you’re 600 ft above sea level. It tends to be a local change in potential that you can readily detect. Like at Niagara Falls.

The photon takes many paths

Mind you, it might not be quite as local as you might think. When you look at the sea, you see waves that are perhaps a metre high. It’s tempting to think that’s the size of them, but it isn’t. If you take a look at wind waves on Wikipedia, you can see what lies beneath. The wave isn’t something that’s a metre high. It extends deep into the ocean:

GNUFDL image by Kraaiennest, see Wikipedia Commons

If you could pick up the whole ocean and place it upside down on top of another ocean, you would appreciate that a wave running through it isn’t just a metre high. The displacement might be a metre in extent at its maximum, but the wave itself might be much more extensive. It’s similar with a shear wave in a solid. Think of a seismic S-wave travelling West to East from A to B across a plain. It isn’t just the houses sitting on top of the AB line that shake. Houses five miles north and south of the AB line shake, albeit less. Houses ten miles north or south shake too, albeit even less. People can still feel that earthquake a hundred miles away from the AB line. Seismometers can still detect it a thousand miles away. The point to note is that a seismic wave doesn’t just take the direct AB path like it’s some point-particle. Even if it goes straight as a die from A to B it effectively takes many paths. It’s similar for a photon, because it has a wave nature, and that’s what waves are like. However electromagnetic waves aren’t exactly like water waves on the ocean. Water waves are also known as surface gravity waves, and they’re trochoidal rather than sinusoidal. In addition the speed of an ocean wave depends on wavelength, and the speed of an electromagnetic wave does not. An electromagnetic wave is more like the seismic S-wave in that the speed depends on the medium through which it moves, not on wavelength. But it isn’t exactly like the S-wave. As per Richard Beth’s 1936 paper on the mechanical detection and measurement of the angular momentum of light, the photon’s angular momentum is either ħ or -ħ depending on whether it has a left or right circular polarization. It’s orthogonal to the angular momentum of the trochoidal wave.

The quantum nature of light

See Leonard Susskind talking about Planck’s constant in the YouTube video demystifying the Higgs boson. At 2 minutes 50 seconds he rolls his whiteboard marker round saying angular momentum is quantized. Think like this: “roll your marker round fast or slow, but roll it round the same circumference, because Planck’s constant of action h is common to all photons regardless of wavelength”.
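Susskind’s rolling marker is easy to mimic numerically. Classically, a circularly polarized wave carries spin angular momentum L = E/ω, so for one photon L = hf/2πf = ħ, whatever the frequency. A minimal sketch, with three illustrative frequencies:

```python
import math

h = 6.62607015e-34          # J*s, Planck's constant
hbar = h / (2 * math.pi)

# Spin angular momentum of one circularly polarized photon: L = E/omega.
for f in (4.3e14, 7.5e14, 3.0e18):   # red light, violet light, an X-ray
    E = h * f
    omega = 2 * math.pi * f
    print(E / omega, hbar)            # the same number both times: always h-bar
```

Roll it round fast or slow, the circumference of the roll – the action h – never changes.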
The quantum nature of light isn’t just some slope on a photoelectric graph. It’s something real. As for what, the dimensionality of action can be expressed as energy multiplied by time, or momentum multiplied by distance. As for what distance, take a look at some pictures of the electromagnetic spectrum. Note how the wave height is always the same regardless of wavelength:

[Electromagnetic spectrum image thanks to NASA]

Yes, it’s only a picture, but I think it’s important, because I think the quantum nature of light is hiding in plain sight. It isn’t anything to do with energy being conveyed in lumps. A photon can have any frequency you like, and therefore any E = hf energy you like. You can vary the energy smoothly, so it isn’t anything to do with quantum lumps. It’s to do with this: the amplitude of all light waves is the same regardless of frequency. What amplitude? The answer to that depends on another question, which is this: what waves? To answer that, we need to ask another question: do you know of any waves where something doesn’t wave? Because I don’t.

Electromagnetic waves travel through space

Electromagnetic waves travel through space, which Einstein said was a something rather than a nothing. He is said to have dispensed with the luminiferous aether, but in his 1920 Leyden Address he referred to space as the ether of general relativity. See the Robert Laughlin quote in the Wikipedia aether theories article: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed”. That might come as a surprise, but remember what Cao and Schweber said about the plenum assumption of the bare vacuum – the vacuum isn’t some state of nothingness, it’s a polarizable medium. Hence Schwinger’s 1949 paper quantum electrodynamics II: vacuum polarization and self-energy.

The speed of a shear wave is given as c = √(G/ρ), and the speed of a light wave is given as c = 1/√(ε0μ0). I do not believe this similarity is mere coincidence. Particularly since the Maxwell stress tensor is a part of the electromagnetic stress–energy tensor. Particularly since in his 1929 history of field theory Einstein described a field as a state of space. That might sound archaic, but see what Steven Weinberg said on page 25 of Dreams of a Final Theory: “a field like an electric or magnetic field is a sort of stress in space”. Also note that the stress-energy-momentum tensor “describes the density and flux of energy and momentum in spacetime”, and it includes a shear stress term. Shear stress is something that only a solid can sustain. Gases and liquids cannot, which is why there are no transverse waves in the bulk of the air or the sea. So in a way, the stress-energy-momentum tensor treats space like some kind of gin-clear ghostly elastic solid. It’s not totally unlike the continuum-mechanics Cauchy stress tensor.

So, when we ask what waves, I think the answer is clear. After all, when an ocean wave moves through the sea, the sea waves. When a seismic wave moves through the ground, the ground waves. So, what waves when a light wave moves through space? There can only be one answer, and that answer must be space. Space waves. Some might dispute that, and say the electromagnetic field waves instead. But if you’ve ever read Einstein trying to unify the electromagnetic and gravitational field, I think you would take a different view.
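Both numerical claims in this passage are easy to check. The sketch below is my own, using standard published values of the constants: E = hf varies smoothly with frequency (no lumps), and 1/√(ε0μ0) really does come out at the speed of light, which is the parallel being drawn with the shear-wave formula c = √(G/ρ):

```python
h = 6.62607015e-34        # Planck constant, J*s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 1.25663706212e-6    # vacuum permeability, H/m

# E = h*f varies smoothly: any frequency gives a valid photon energy.
for f in (5.0e14, 5.0000001e14, 1.0e18):   # green light, a hair higher, an X-ray
    print(f"f = {f:.7e} Hz  ->  E = {h * f:.6e} J")

# And 1/sqrt(eps0*mu0) reproduces the speed of light:
c = (eps0 * mu0) ** -0.5
print(f"1/sqrt(eps0*mu0) = {c:,.0f} m/s")   # about 299,792,458 m/s
```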
Especially if you’ve also read LIGO articles which say “gravitational waves are ‘ripples’ in the fabric of space-time”. Then when you know that gravity is “not the curvature of space, but of spacetime”, you know that a gravitational field is not a place where space is curved. Instead it’s a place where space is inhomogeneous, in a non-linear way. It’s like a spatial energy-density variation, and it must be the same for a gravitational wave, but in a dynamical fashion. A gravitational wave is said to be a quadrupole wave, with alternate transverse and longitudinal compression. To keep it simple I’ll show only the latter. It’s rather like a sound wave. The grid lines aren’t curved. Space isn’t curved where a gravitational field is, or where a gravitational wave is. So where is it curved? Come with me to a cliff by the sea.

Where space is curved

Imagine you’re standing on a headland overlooking a flat calm sea near an estuary. The water is saltier on the left than on the right. You see a single ocean wave, and notice that its path curves left a little because of the salinity gradient. The sea is an analogy for space. The salinity gradient is an analogy for a gravitational field. The ocean wave is an analogy for a photon. Now look at the surface of the sea where the wave is. It’s curved. It’s curved in a far more dramatic fashion than the curved path of the wave.

This observation might sound radical, but see what Percy Hammond said in the 1999 Compumag: “We conclude that the field describes the curvature that characterizes the electromagnetic interaction”. See what Schrödinger said on page 18 of his 1926 paper quantization as a problem of proper values, part II: “classical mechanics fails for very small dimensions of the path and for very great curvature”. Also see what Maxwell said when he was talking about displacement current in 1861: “light consists of transverse undulations in the same medium that is the cause of electric and magnetic phenomena”. Where is space curved? Where the photon is. Because space waves. Hence I would say a photon is a region of curved space.

But I’d also say it’s like when you play classical guitar: you work the fret with your left hand changing the wavelength, but the amplitude of your pluck is always the same. When you do pluck, you make a wave, and the string is then curved. Try plucking a washing line whilst sighting your eye along it to watch the wave race away. Then try plucking in orthogonal directions with two different hands. I say this because we have linearly-polarized light, but we have circular-polarized light too, and the latter is thought to be more fundamental. I don’t know why. It’s as if in space there’s nothing to brace against. As if guitar strings are like infinite washing lines, and you have to pluck your guitar string with two plectrums at once, one vertical, one horizontal, with a linear separation between them.

Then there’s a rotation of sorts; we have experimental proof of the spin of the photon. But the photon itself isn’t actually spinning. It has no magnetic dipole moment. The circular-polarized light wave is depicted as two orthogonal 90° out-of-phase electromagnetic waves propagating linearly through space with a combined left-or-right handedness, and so a spin of ħ or −ħ. The net electric vector is helical, but the circular-polarized photon isn’t going round and round:

[Image courtesy of Rod Nave’s HyperPhysics]

It’s like the circular-polarized photon is an arrow with one set of flights set behind the other.
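That standard depiction is easy to reconstruct. In this minimal sketch (mine, not from the original post), two orthogonal components 90° out of phase give a net vector of constant magnitude whose direction corkscrews along the axis of propagation, with the handedness set by the sign of the phase offset:

```python
import numpy as np

z = np.linspace(0, 2 * np.pi, 9)        # positions along the beam (k = 1)
handedness = +1                         # +1 or -1: the two circular states

Ex = np.cos(z)                          # one component...
Ey = handedness * np.sin(z)             # ...the other, 90 degrees out of phase

magnitude = np.hypot(Ex, Ey)            # constant everywhere: nothing "spins"
direction = np.degrees(np.arctan2(Ey, Ex))  # but the direction corkscrews

for zi, m, d in zip(z, magnitude, direction):
    print(f"z = {zi:4.2f}   |E| = {m:.3f}   direction = {d:7.2f} deg")
```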
But this arrow isn’t spinning like a bullet. Your washing line isn’t spinning on its long axis like some drive shaft. For a better picture, imagine you could lean out of an upstairs window with a whip in your hand. Move the handle quickly in a growing circle that then diminishes, to make a wave that corkscrews down the whip. I think the photon is something like this. Like you, it’s left handed or right handed, just like a screw thread. That’s because of the screw nature of electromagnetism, associated with the right hand rule. We’ll come on to that later.

What a photon is

But for now a picture is emerging. A picture that’s doubtless imperfect, but better than no picture at all. A picture of a photon that is a singleton soliton electromagnetic wave. A wave like half a length-wise lemon, with a twist. A corkscrew soliton in space, with a common amplitude that underlies Planck’s constant h. As to why we have this common amplitude, I’m afraid I don’t know. Perhaps I’ll have to settle for that’s the way space is. Perhaps animations of waves in a lattice will tell us something about it. And about solitons, and how they deform the lattice and alter the motion of solitons through the lattice.

But for now, I think of the quantization of electromagnetic change, written in 1994 by Robert Kemp. That’s the quantization of electromagnetic change, not charge. Kemp talks about saturation and maximum upper and lower potentials, and about the photon having a maximum electromagnetic amplitude in space, such that electromagnetic saturation is the cause of Planck’s constant. He also says this is what Vernon Brown predicted in his photon theory: “Brown predicted that the electromagnetic amplitude or saturation constant would be a constant from which Planck’s constant derives”.

It’s as if space has an elastic nature described by permittivity and permeability, and an elastic limit, and so acts like its own waveguide. Because space is not nothing. It’s a polarizable medium, like some kind of ghostly gin-clear elastic. When an ocean wave moves through the sea, the sea waves. When a seismic wave moves through the ground, the ground waves. When an electromagnetic wave moves through space, space waves. And because of this, space is curved wherever that wave is. Not just in one dimension, but in two, because of the circular polarization. So space is curled, like your whip. Are Feynman’s wavy little arrows really corkscrews, showing this curvature? I don’t think so. But I do think that this says something about how pair production works. Because what does a 511keV photon do in gamma-gamma pair production? Or to put it another way: what does a hedgehog do when threatened?
Then one day he realized it might not be over; he might still have something to contribute to the world. He decided to do two things: 1. develop a plan to control the destructive use of atomic power, and 2. discover peacetime uses for atomic power. He came alive! The luster was back. He took his dog for walks again. He had a purpose. And as a result of his decisions and the ensuing efforts he made to make those goals a reality, medical and electrical uses for atomic power were found. He gave speeches and helped stir up interest in a worldwide police force that eventually culminated in the founding of the United Nations.

"Nothing contributes so much to tranquilizing the mind," wrote Mary Wollstonecraft Shelley, "as a steady purpose — a point on which the soul may fix its intellectual eye."

It is physically and psychologically healthy for a human being to have a strong sense of purpose. The state of mind you have when you're absorbed in the accomplishment of a purpose is called "flow," which is an engaged, pleasant state of focus. Those who have learned to develop a sense of purpose and who have learned to become engrossed in the achievement of purposes are the most likely to be happy and healthy. This has been shown in scientific studies and in everyday observations. Happy people are purposeful people, because the most reliable self-created source of happiness is taking action along a strongly-held purpose.

Flow has been the subject of quite a bit of research. For example, swimmers who experienced flow while training made the most progress by the end of the training. In other words, experiencing frequent flow allowed them to develop their ability faster. Another study accentuated those findings. It found that of all the things that influence how successful a person might become in their sport or skill — in whatever field — the most influential factor was how much flow they experienced while doing it. In other words, the amount of absorption they had was the best predictor of who would develop their talent the most. A sense of purpose brings out the best in people.

In his book, Carrying the Fire: An Astronaut's Journeys, Michael Collins wrote about the enthusiasm of the people in the Apollo space program in 1964. "…the goal was clearly and starkly defined," wrote Collins. "Had not President Kennedy said before the end of the decade?" They had a clear goal that the people at NASA were excited about. The moon! The impossible goal! The goal they said could never be done! People showed up early, worked hard, and stayed late. As Collins put it, "People knew that each day was one day closer to putting man on the moon…" This is the electrifying power of a strong sense of purpose.

Mihaly Csikszentmihalyi, one of the principal researchers into flow, says we usually see work as a necessary evil, and we feel leisure is what we want: time on our hands. Free time. Time with nothing to do. We long for it. And yet, he says, "free time is more difficult to enjoy than work." Or as Jerome K. Jerome put it, "It is impossible to enjoy idling unless there is plenty of work to do." Work provides clear goals more often than leisure, and a clear goal is the first and most important requirement of flow. If you want to experience flow, you must have a purpose. Work provides a purpose. It provides something to become absorbed in, so it provides opportunities for flow. To get flow from leisure, you have to provide the purpose.
Many people don't know that, which means many people don't get much enjoyment from their coveted leisure; it isn't satisfying like they wish it would be. Some even suffer during leisure. Sandor Ferenczi, a psychoanalyst in the early 1900s, discovered that anxiety and depression occurred more often on Sundays than any other day of the week. Since that time, many observers have noticed that vacations and retirement also tend to produce anxiety and depression. When we are not on the job — when we are not given a clear purpose — many of us feel adrift and don't know what's missing. Clearly, a large percentage of people don't have a strong sense of purpose for their off time, and it's a shame.

Purpose is king. A purpose to sink your teeth into gives your mind a healthy, productive focus and prevents it from drifting into negativity. Without goals, wrote Csikszentmihalyi, "the mind begins to wander, and more often than not it will focus on unresolvable problems that cause anxiety."

POWER OF CAUSE

Goals put you in a causal position rather than a victim position, and that is good for your psychological well-being. In the book Survive the Savage Sea — the true story of a family who survived a shipwreck — the author and father of the family, Dougal Robertson, describes how their whole attitude changed when they shifted from "hoping for rescue" to "we're going to get ourselves to shore on our own; we're going to survive."

A ship was cruising by fairly close, seven days after their boat sank. They spotted it from their life raft. They lit off flares and yelled at the top of their lungs and waved their shirts in the air, but the ship sailed right on by. They were heartbroken. Dougal looked at his empty flare cartons bitterly, and "something happened to me in that instant, that for me changed the whole aspect of our predicament," he wrote. "If these poor bloody seamen couldn't rescue us, then we would have to make it on our own and to hell with them. We would survive without them, yes, and that was the word from now on, 'survival' not 'rescue' or 'help' or dependence of any kind, just survival. I felt the strength flooding through me, lifting me from the depression of disappointment to a state of almost cheerful abandon."

Purpose has an almost magical quality. It can imbue us with extraordinary ability. It can make us almost superhuman — more capable than humans in an ordinary state. Ulysses S. Grant was writing his memoirs near the end of his life. His publisher was Mark Twain. Even though Grant was famous and had been President, he was broke. Twain had assured him there was a market for the book if he could finish it. Grant had cancer and was dying. But he couldn't die. He had something to accomplish. It was very important to him to finish this book and do a good job, because his wife would be destitute otherwise. So he persisted. When he could no longer write, he dictated. Doctors said he might not live more than two or three weeks, but like I said, purpose has a mysterious power, and Grant continued dictating until he finished. He died five days after he completed his manuscript. And, by the way, Twain was right: the book, Personal Memoirs of Ulysses S. Grant, was very successful and is even to this day considered one of the best military memoirs ever written, and Grant's wife was set for life.

Charles Schulz declared many months ahead of time when he was going to end his comic strip. His last strip was published Sunday. The night before, Schulz died in his sleep.
After his family was shipwrecked, Dougal Robertson started adding up their stock. He discovered they had enough food and water to last them ten days. They were two hundred miles downwind and downcurrent from the Galapagos Islands, so getting there was an impossible feat. They were 2,800 miles from the Marquesas Islands, but without a compass or means of finding their position, their chance of missing the islands was enormous. The Central American coast was a thousand miles away, but they had to make it through the windless Doldrums. They wouldn't be missed by anyone for five weeks, and nobody would have the slightest idea of where to start looking anyway, so waiting for rescue would have been suicide. There were two possible places to be rescued by shipping vessels: one four hundred miles south, the other three hundred miles north.

Once he had roused himself enough to assess his situation accurately, his heart sank again. Their true and accurate situation wasn't very hopeful. His wife, Lyn, saw the look on his face and put her hand on his. She said simply, "We must get these boys to land." This singular, clear purpose focused his mind the whole journey. The thought kept coming back to him, spurring him on, making him try when it seemed hopeless. This is the power of a definite, heartfelt purpose. They made it to shore alive.

THE MEANING OF LIFE

Purpose gives meaning to your life. In many ways, your purpose is the meaning of your life. That gives this subject an overriding importance. Viktor Frankl was a Jewish psychiatrist in Vienna when Hitler took power, and he spent years struggling to stay alive in concentration camps. During that time, he lost his wife, his brother, and both his parents — they either died in the camps or were sent to the gas chambers. He lost every possession he ever owned. Because he already knew a lot about psychology and then experienced these extreme circumstances — and even managed to find meaning in his struggle — his slim book, Man's Search for Meaning, is definitely worth reading. His perspective on finding meaning in life is different from any other I have encountered. He writes:

The meaning of life differs from person to person, from day to day and from hour to hour. What matters, therefore, is not the meaning of life in general but rather the specific meaning of a person's life at a given moment. To put the question in general terms would be comparable to the question posed to a chess champion, "Tell me, Master, what is the best move in the world?" There simply is no such thing as the best or even a good move apart from a particular situation in a game and the particular personality of one's opponent. The same holds true for human existence. One should not search for an abstract meaning of life. Everyone has his own specific vocation or mission in life to carry out a concrete assignment which demands fulfillment.

I love that line: "…to carry out a concrete assignment which demands fulfillment." And Frankl gives many good examples of what he means. For example, he tried to keep his fellow prisoners from committing suicide. The Nazi camps strictly forbade prisoners from stopping someone who was killing himself. If you cut down a fellow prisoner who was in the process of hanging himself, you (and probably everyone in your bunkhouse) would be severely punished. So Frankl had to catch people before they actually attempted to kill themselves. This, he felt, was a concrete assignment which demanded fulfillment.
As a psychiatrist, he was the most qualified to answer this call from life, and the men would often confide in him. At two different times, two men told him they had decided to commit suicide. Both of them offered the same reason: they had nothing more to expect from life. All they could expect was endless suffering, starvation, torture, and in the end, probably the gas chamber. "In both cases," wrote Frankl, "it was a question of getting them to realize that life was still expecting something from them; something in the future was expected of them." After talking with the men, he found one of them was a scientist who had written several volumes of a book, but the project was incomplete. It couldn't be finished by anyone else. The other man had a child in another country waiting for him.

Each of our lives is unique. The concrete assignment needing to be fulfilled is different for every person. And Frankl found that a person would not commit suicide once they realized their specific obligation to life — that life expected something of them.

Michael W. Fox, a veterinarian and author of Superdog: Raising the Perfect Canine Companion, was a lover of animals, as most kids are. One day he was walking home from school when he looked through a fence and saw a ghastly sight. It was the backyard of a veterinary clinic, and there was a large trash bin overflowing with dead dogs and cats. "I never knew the reason for this mass extermination," Fox said, "but I was, from that time on, committed to doing all I could to help animals, deciding at age nine that I had to be a veterinarian." Here was a concrete assignment life had presented to Fox, and he answered the call. He became a veterinarian and has done what he could to reduce the suffering of animals. He has spent his life educating people, writing books, and lobbying to create new legislation that reflects more respect for animals.

Dr. Seuss had a mission when he started. He wanted to turn children on to reading. "Before Seuss," wrote Peter Bernstein, "too many children's writers seemed locked into plots that ended with a heavy-handed call to obey one's elders. By the 1950s, educators were warning that America was losing a whole generation of readers." Dr. Seuss wanted to do something about that. And he did. He wrote books kids wanted to read: The Cat in the Hat, Green Eggs and Ham, and forty-six others, which have sold over two hundred million copies worldwide.

YOU MUST HAVE A GOAL

During the Korean War, the Chinese government systematically tried to brainwash the U.S. POWs. Their methods included deprivation and torture, and the captives suffered tremendously. At one point, in one of the prison camps, three-fourths of the POWs had died. Things were incredibly bleak for the rest of them, and they were all feeling desperate and hopeless. Then one man said to the others, "We've got to stay alive, we've got to let others know about the horrors of Communism. We've got to live to bring back the armies and fight these evil people. Communism must not win!" This was a turning point for every man there, because their meaningless struggle was transformed into a mission. Simply staying alive against the odds was their goal. Their despair was turned into resolve. Their hopelessness was turned into determination. And their death rate went way down.
Speaking again of his experience in a concentration camp, Frankl wrote, "As we said before, any attempt to restore a man's inner strength in the camp had first to succeed in showing him some future goal...Woe to him who saw no more sense in his life, no aim, no purpose, and therefore no point in carrying on. He was soon lost." Sometimes it takes a scientific study to prove the obvious. At least you find out that what you think is self-evident is actually true. Researchers at New York State Psychiatric Institute asked an unusual question of suicidal people. Rather than asking what makes them want to die, the researchers asked what makes them want to live? They studied eighty-four people suffering major depression trying to determine why thirty-nine of them had never attempted to kill themselves. The study revealed that age, sex, religious persuasion or education level did not predict who would attempt suicide. But not having a reason to live did predict it rather well. The depressed patients who perceived life as more worth living were less likely to attempt to kill themselves. So now we know: Goals are very important. It's not just a nice thing. It's vital. Get yourself a concrete assignment that demands fulfillment. Look for something that fires you up, that you think is needed, that you feel is important, and that you can do something about. If someone has no purpose at all, a small goal is a big improvement. But as the level of mental health increases, there comes a time when a full-on mission is called for as a context for your life. You can still watch movies. You can still spend time conversing with your spouse. Walk in the woods. Go on vacation. But like a mantra you constantly return to, your definite purpose, your concrete assignment, is always there to give you a sense of purpose and meaning to your existence. Even if you have a large, overarching purpose, you can only take action in this very moment. It is an excellent practice to try to keep in mind one clear purpose for what you're doing now. And the question, "What is my purpose here?" can really straighten up and clarify your mind and your actions. For example, if you are criticizing someone, ask yourself, "What am I after?" You may find what you're really after is to make the other person feel bad or punish them for something they did. That is an automatic, genetically-driven (and usually counterproductive) purpose. In other words, you didn't really consciously choose to pursue that goal. It happened without you. But now that you've asked the question, "What is my purpose here?" you can choose. You can think about what you really want in this situation. You may decide what you really want is that the person doesn't do it again. Then you'd have a clear purpose and a clear path for action — without games, without negative feelings. All you'd have is a simple request: "Please don't do that again." Make it a regular practice to ask yourself what you want right now. What is your goal here in this situation? What are you after? What are you aiming for? Be clear always and consciously what your purpose is in this very moment. It is effective. It is therapeutic. It is healthy. And it will make you more productive. One key to a strong sense of purpose is the practice of focusing only on what you want. When your mind wanders to other things, bring your focus back. Again and again. Your mind is very easily taken off track, so you have to keep noticing your attention has wandered and keep bringing your focus back to your purpose. 
When your mind starts worrying about problems that might happen, bring your mind back to your concrete assignment. When your attention becomes fixed on what you don't want, turn your attention to what you do want. Keep your attention on the goal, and your sense of purpose will grow strong.

There isn't one "right" purpose which you must find and follow. Delete that kind of magical thinking from your thoughts forever! Any (constructive) purpose is better than no purpose, and some are better than others. Some are good for now, but no good if pursued too long. The important thing is that you like the purpose and have a good level of accomplishment along that line.

SETTING YOUR COURSE

If you don't already have a strong purpose, how do you go about developing one? A high-quality purpose is more than something you feel you should do. That isn't good enough. A good purpose is something you feel a strong desire to do, even feel compelled to do, and something you feel is important — something you think needs to be done and ought to be done because it is right and good. Or something you feel strongly interested in, something that fascinates you and fills you with interest and curiosity.

If nothing comes to mind right now, that's not the end of the conversation. There is no such legitimate answer as, "I don't have one of those." Yes, you do. You may have forgotten it. You may never have dug deeply enough to find it in the first place. But you've got at least one. And all you need is one. Most likely there was a time when you knew what your purpose was, at least in a general sense, but for one reason or another you discarded it; someone convinced you it was impossible or stupid, or you convinced yourself. It's now as if you've turned your back on it and are looking around saying, "I don't see any purpose I really want." No, of course not. It is behind you, so to speak. You've already picked it up, had it in your hand, and then tossed it behind you where you are no longer looking.

Start right now with the assumption that there is a purpose which strongly compels you or strongly interests you, and commit yourself to finding it. If you don't already have a purpose, now you have one: finding it. What interests you? What do you like to talk about? What do you daydream about? What do you think needs to be done? What do you think "they" ought to do? What do you "wish you could do" but know you can't?

A high-quality purpose is concrete, challenging, and one you feel is achievable. That's where flow is. That's where motivation is. That's where confidence is. That's where ability is formed. That's where the fun is. In a study at the University of Alabama, researchers found that people who considered their goal difficult but achievable were more motivated — they were more energized and felt their goal was more important than someone who had an easy goal or an impossible goal. People who thought their goal was easy weren't as motivated. And people who thought their goal was impossible weren't motivated either. Remember, difficult but achievable. Not achievable in some abstract sense, but something you feel you could achieve. And something you feel challenged by.

John French, Jr. directed a study of 2,010 men in twenty-three different jobs, trying to find out which jobs were the most stressful. What the researchers found was kind of surprising: the most stressful jobs were the most boring and unchallenging. These were the jobs that produced the most physical and emotional illness.
Says French, "One of the key factors in job satisfaction is self-utilization — the opportunity to fully utilize your abilities on the job, to be challenged, to develop yourself. Frustration and anxiety over not being challenged can have physically debilitating effects." A big, challenging goal, if you feel up to it, will awaken the genius within, bring out your latent talents, give you satisfaction, and make the world a better place. Beethoven's goal was to create music that would transcend fate. Socrates had a goal to make people happy by making them reasonable and just. These are big goals, but they brought out the best in these people and wrote their names in history.

THE KILLER OF PURPOSES

Probably the biggest killer of purpose is all-or-nothing thinking. "I want to sail around the world," says a young man. But he is married and has a new baby. Obviously he can't go sailing around the world. Or can he? If he's thinking in all-or-nothing terms, he will, of course, say, "No, I can't go sailing around the world unless I want to be a jerk and leave my wife and child." But that's thinking in one extreme or the other, and life very rarely needs to be so black-or-white.

He wakes up one night with a realization. He has been blinding himself with all-or-nothing thinking! He comes up with a plan. He will set aside twenty dollars a week in a Sailing Fund. As he does better at work, he'll increase that amount. But for now, he uses the money for sailing lessons and boating safety classes and books on celestial navigation, always leaving aside a little to accumulate for the purchase of an actual boat. He learns about boat design. It takes him three years before he learns enough to decide what design of boat he wants to get. It takes him another year to figure out what course he will chart, what places he will visit, etc. As his son gets older, they go sailing together on rented sailboats. His son learns how to sail. The father teaches him how to reef the sails, how to steer, how to navigate by the stars. By the time the son is fourteen, the family decides to go for it. They sell their house, buy a sailboat, fill it with supplies, and what do you know? His purpose wasn't silly or impossible after all. It may be, in fact, the highlight of his life.

Another thing that kills dreams or prevents the development of a strong sense of purpose is that interest dies. But here you have to be careful. Did your interest die because you actually lost interest now that you know more about it, or did your interest die because of the way you're explaining setbacks to yourself? There are certain ways to explain setbacks in your life that will kill your enthusiasm, destroy your interest, and prevent the development of a sense of purpose. If your interest has been killed by a feeling of defeat, you can revive that dormant interest and fill your life with purpose and meaning.

It's important that the goals you seek give you a sense of meaning — that they aren't only about material gain. It's true that any goal is better than no goal, but it's also true that if you have a choice, you ought to choose high-quality goals, goals that will give you a great deal of satisfaction and even meaning. Susan Krause Whitbourne did a long-term research project, starting in 1966, and saw a particular psychological measurement steadily decline over the years. It's called "ego integrity," which is a composite characteristic having to do with honesty, a sense of connection with others, a sense of wholeness, and a feeling that life has meaning.
Between 1977 and 1988, ego integrity took a universal dive. The life-satisfaction scores were as low as they could go on her measurements. "People got caught up in chasing the materialistic dream," says Whitbourne. "They got recognition for their achievements, yet don't feel that what they are doing matters in the larger scheme of things."

SIMPLIFICATION OF PURPOSES

John is a waiter, and he discovered a fundamental principle of life. When he only has one table, he isn't stressed at all. He can concentrate and do a good job, and it is no problem. Two tables, okay. Still no problem. Three tables, and he has to start really paying attention, because it's like juggling — the more balls you have in the air, the easier it is to drop one. When John gets up to seven or eight tables, it becomes stressful. The juggling of tasks becomes too complex to handle well.

In the same way, the number of purposes you have is directly related to your stress hormone level. Depending on how you handle your goals, a strong sense of purpose can help you manage stress well, or it can make your general stress level much worse. The problem is that the natural drift for people is toward complication. In other words, if you don't try to do anything about it, your life will get more and more complicated; you will collect more and more purposes. So you have to make a continuous effort to simplify your purposes. Your life will naturally and constantly drift toward complication, just as a rose bush will constantly try to sprawl. You must continually prune. You can't prune once and for all. You have to keep pruning.

For example, John wanted his guests to be happy. That was one of his purposes. He also wanted to get along well with his fellow waiters. And he wanted to please the cooks so their interactions were pleasant. And, of course, he wanted the managers to be happy with him. And so on. Too many purposes. His attention is scattered in too many directions. If he knew about simplifying purposes, he would have trimmed his purposes down to something manageable: to make the guests pleased with his service. That's enough to concentrate on, and that would keep his tension level lower, because it is manageable.

Manage your purposes. Make a list. What are the really important purposes? Trim the list down to something manageable; something simple enough that you can manage it without stress. Get few enough purposes that it feels good. Be aware that after you trim your purposes, complexity will gradually creep back in. Simplifying your purposes is something you'll need to do once in a while for the rest of your life. Keep your purposes strong and clear, simple and heartfelt, and you will find the most powerful source of self-generated happiness that exists in this world. As George Bernard Shaw said, "the true joy in life is being used by a purpose recognized by yourself to be a mighty one." Experience the true joy in life. Be used by a mighty purpose. Find yourself a concrete assignment that demands fulfillment and get to work.

"The need for meaning in life goes far beyond the mechanical techniques of selecting a goal to be achieved by positive thinking. If a person selects a goal just to satisfy the demands of others he will quickly revert back to self-defeating trap circuits. He will rapidly lose ambition, and though he may try to appear as if he is succeeding in what he is doing, he will feel miserable because he is not really committed to this objective.
All the success seminars in the world will not make a potential Mozart or Monet content to be president of the Chase Manhattan Bank. Positive therapy strives to help people acquire a deeply positive orientation to living by enabling them to recover a long-buried dream or to implant firmly the roots of a new one. This need for deep personal meaning has been succinctly expressed by Friedrich Nietzsche: 'He who has a why to live can bear with almost any how.' The phenomenon was directly observed by Viktor Frankl in Nazi concentration camps. Those prisoners who had a deeply rooted reason to survive — a meaningful project, a loving family — best withstood that prolonged torture without reverting to counterhuman patterns of behavior."
- Allen Wiesen, psychologist

"Morita therapists emphasize that it is important to find suitable constructive purposes and hold to them, thus guiding behavior in a positive direction. The other side of that coin is that all behavior, positive or negative, is purposeful. Whatever you do there is an aim to it, a goal toward which the behavior is directed. The goal may be destructive or constructive or mixed. For example, the shy person may avoid social gatherings in order to prevent the feelings of inadequacy and loneliness that he feels in such situations. In a sense Morita guidance asks the client to select constructive purposes and positive ways of achieving them instead of the already purposeful, but destructive behavior. Finding the purpose behind destructive behavior can be a useful undertaking because sometimes the original purpose can also be fulfilled in a positive way."
- David Reynolds, founder of Constructive Living and leading Western authority on Morita and Naikan therapies, the two most popular forms of therapy in Japan

"Frequently, success is what people settle for when they can't think of something noble enough to be worth failing at."
- Laurence Shames

"Man is by nature a productive organism. When he ceases his productivity — whether he is producing a pail or a poem, an industry or an ideology — his life begins to lose its meaning. Though he may be finally buried twenty years after his death, the person who has no raison d'être is not really alive. He is merely the ghost of who he once was or might have become."
- Allen Wiesen, psychologist

Adam Khan is the author of Principles For Personal Growth, Slotralogy, Antivirus For Your Mind, and co-author with Klassy Evans of How to Change the Way You Look at Things (in Plain English). Follow his podcast, The Adam Bomb.
Systematic IUPAC name: hydridohelium(1+)
Molar mass: 5.01054 g·mol−1

The helium hydride ion or hydridohelium(1+) ion or helonium is a cation (positively charged ion) with chemical formula HeH+. It consists of a helium atom bonded to a hydrogen atom, with one electron removed. It can also be viewed as protonated helium. It is the lightest heteronuclear ion, and is believed to be the first compound formed in the Universe after the Big Bang.

The ion was first produced in a laboratory in 1925. It is stable in isolation, but extremely reactive, and cannot be prepared in bulk, because it would react with any other molecule with which it came into contact. Noted as the strongest known acid, its occurrence in the interstellar medium has been conjectured since the 1970s, and it was finally detected in April 2019 using the airborne SOFIA telescope.

Unlike the dihydrogen ion H2+, the helium hydride ion has a permanent dipole moment, which makes its spectroscopic characterization easier. The calculated dipole moment of HeH+ is 2.26 or 2.84 D. The electron density in the ion is higher around the helium nucleus than the hydrogen: 80% of the electron charge is closer to the helium nucleus than to the hydrogen nucleus.

The helium hydride ion has six relatively stable isotopologues, which differ in the isotopes of the two elements, and hence in the total atomic mass number (A) and the total number of neutrons (N) in the two nuclei:

- [3He1H]+ or [3HeH]+ (A = 4, N = 1)
- [3He2H]+ or [3HeD]+ (A = 5, N = 2)
- [3He3H]+ or [3HeT]+ (A = 6, N = 3; radioactive)
- [4He1H]+ or [4HeH]+ (A = 5, N = 2)
- [4He2H]+ or [4HeD]+ (A = 6, N = 3)
- [4He3H]+ or [4HeT]+ (A = 7, N = 4; radioactive)

They all have three protons and two electrons. The first three are generated by radioactive decay of tritium in the molecules HT = 1H3H, DT = 2H3H, and T2 = 3H2, respectively. The last three can be generated by ionizing the appropriate isotopologue of H2 in the presence of helium-4.

The following isotopologues of the helium hydride ion, of the dihydrogen ion H2+, and of the trihydrogen ion H3+ have the same total atomic mass number A:

- [3HeH]+, [D2]+, [TH]+, [DH2]+ (A = 4)
- [3HeD]+, [4HeH]+, [DT]+, [TH2]+, [D2H]+ (A = 5)
- [3HeT]+, [4HeD]+, [T2]+, [TDH]+, [D3]+ (A = 6)
- [4HeT]+, [TD2]+, [T2H]+ (A = 7)

The masses in each row above are not equal, though, because the binding energies in the nuclei are different.

Unlike the helium hydride ion, the neutral helium hydride molecule HeH is not stable in the ground state. However, it does exist in an excited state as an excimer (HeH*), and its spectrum was first observed in the mid-1980s.

Chemical properties and reactions

Since HeH+ cannot be stored in any usable form, its chemistry must be studied by forming it in situ. Reactions with organic substances, for example, can be studied by creating a tritium derivative of the desired organic compound. Decay of tritium to 3He+ followed by its extraction of a hydrogen atom yields 3HeH+, which is then surrounded by the organic material and will in turn react. HeH+ cannot be prepared in a condensed phase, as it would donate a proton to any anion, molecule or atom that it came in contact with. It has been shown to protonate O2, NH3, SO2, H2O, and CO2, giving O2H+, NH4+, HSO2+, H3O+, and HCO2+ respectively.
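The A and N bookkeeping in the isotopologue list above is easy to verify mechanically. Below is a minimal sketch (my own, not part of the article); the per-nuclide proton and neutron counts are standard:

```python
# (protons, neutrons) for each nuclide involved; D and T are 2H and 3H.
NUCLIDES = {
    "3He": (2, 1), "4He": (2, 2),
    "H": (1, 0), "D": (1, 1), "T": (1, 2),
}

def mass_and_neutron_numbers(*nuclides):
    protons = sum(NUCLIDES[n][0] for n in nuclides)
    neutrons = sum(NUCLIDES[n][1] for n in nuclides)
    return protons + neutrons, neutrons        # A = Z + N

for he in ("3He", "4He"):
    for h in ("H", "D", "T"):
        a, n = mass_and_neutron_numbers(he, h)
        print(f"[{he}{h}]+   A = {a}, N = {n}")
```

Running it reproduces the six (A, N) pairs tabulated above, including the two coincidences at A = 5 and A = 6.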
Other molecules such as nitric oxide, nitrogen dioxide, nitrous oxide, hydrogen sulfide, methane, acetylene, ethylene, ethane, methanol and acetonitrile react but break up due to the large amount of energy produced.

The energetics of the hypothetical aqueous ion can be estimated from the following cycle (HeH+ cannot actually survive contact with water):

- HeH+(g) → H+(g) + He(g)   +178 kJ/mol
- HeH+(aq) → HeH+(g)   +973 kJ/mol (a)
- H+(g) → H+(aq)   −1530 kJ/mol
- He(g) → He(aq)   +19 kJ/mol (b)
- Overall: HeH+(aq) → H+(aq) + He(aq)   −360 kJ/mol

(a) Estimated to be the same as for Li+(aq) → Li+(g). (b) Estimated from solubility data.

Other helium-hydrogen ions

Additional helium atoms can attach to HeH+ to form larger clusters such as He2H+, He3H+, He4H+, He5H+ and He6H+. The dihelium hydride cation, He2H+, is formed by the reaction of the dihelium cation with molecular hydrogen:

He2+ + H2 → He2H+ + H

It is a linear ion with hydrogen in the centre. The hexahelium hydride ion, He6H+, is particularly stable. Other helium hydride ions are known or have been studied theoretically. The helium dihydride ion, or dihydridohelium(1+), HeH2+, has been observed using microwave spectroscopy. It has a calculated binding energy of 25.1 kJ/mol, while trihydridohelium(1+), HeH3+, has a calculated binding energy of 0.42 kJ/mol.

Discovery in ionization experiments

Hydridohelium(1+), specifically [4He1H]+, was first detected indirectly in 1925 by T. R. Hogness and E. G. Lunn. They were injecting protons of known energy into a rarefied mixture of hydrogen and helium, in order to study the formation of hydrogen ions like H2+ and H3+. They observed that H3+ appeared at the same beam energy (16 eV) as H2+, and its concentration increased with pressure much more than that of the other two ions. From these data, they concluded that the H2+ ions were transferring a proton to molecules that they collided with, including helium.

In 1933, K. Bainbridge used mass spectrometry to compare the masses of the ions [4HeH]+ (helium hydride ion) and [D2H]+ (twice-deuterated trihydrogen ion) in order to obtain an accurate measurement of the atomic mass of deuterium relative to that of helium. Both ions have 3 protons, 2 neutrons, and 2 electrons. He also compared [4HeD]+ (helium deuteride ion) with [D3]+ (trideuterium ion), both with 3 protons and 3 neutrons.

Early theoretical studies

The first attempt to compute the structure of the HeH+ ion (specifically, [4HeH]+) by quantum mechanical theory was made by J. Beach in 1936. Improved computations were sporadically published over the next decades.

Tritium decay methods in chemistry

H. Schwartz observed in 1955 that the decay of the tritium molecule T2 = 3H2 should generate the helium hydride ion [3HeT]+ with high probability. In 1963, F. Cacace at the Sapienza University of Rome conceived the decay technique for preparing and studying organic radicals and carbenium ions. In a variant of that technique, exotic species like the methonium cation are produced by reacting organic compounds with the [3HeT]+ that is produced by the decay of T2 mixed with the desired reagents. Much of what we know about the chemistry of [HeH]+ came through this technique.

Implications for neutrino mass experiments

In 1980, V. Lubimov (Lyubimov) at the ITEP laboratory in Moscow claimed to have detected a mildly significant rest mass (30 ± 16) eV for the neutrino, by analyzing the energy spectrum of the β decay of tritium. The claim was disputed, and several other groups set out to check it by studying the decay of molecular tritium T2.
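As a quick consistency check on the cycle above, the −360 kJ/mol figure is simply the sum of the four steps. A minimal sketch (mine, not from the article):

```python
# The four steps of the thermodynamic cycle, as tabulated above (kJ/mol).
cycle = {
    "HeH+(aq) -> HeH+(g)":       +973,   # (a) estimated as for Li+(aq) -> Li+(g)
    "HeH+(g)  -> H+(g) + He(g)": +178,
    "H+(g)    -> H+(aq)":        -1530,
    "He(g)    -> He(aq)":        +19,    # (b) estimated from solubility data
}
total = sum(cycle.values())
print(f"HeH+(aq) -> H+(aq) + He(aq): {total:+d} kJ/mol")   # -360 kJ/mol
```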
It was known that some of the energy released by that decay would be diverted to the excitation of the decay products, including [3HeT]+; and this phenomenon could be a significant source of error in that experiment. This observation motivated numerous efforts to precisely compute the expected energy states of that ion in order to reduce the uncertainty of those measurements. Many have improved the computations since then, and now there is quite good agreement between computed and experimental properties, including for isotopologues such as [4HeH]+ and [3HeH]+.

Spectral predictions and detection

In 1956, M. Cantwell predicted theoretically that the spectrum of vibrations of that ion should be observable in the infrared, and that the spectra of the deuterium and common hydrogen isotopologues ([3HeD]+ and [3HeH]+) should lie closer to visible light and hence be easier to observe. The first detection of the spectrum of [4HeH]+ was made by D. Tolliver and others in 1979, at wavenumbers between 1700 and 1900 cm−1. In 1982, P. Bernath and T. Amano detected nine infrared lines between 2164 and 3158 cm−1. (A rough conversion of these band positions to wavelengths and photon energies is sketched after the reference list below.)

The occurrence of HeH+ in the interstellar medium had been conjectured since the 1970s. Its first detection, in the nebula NGC 7027, was reported in an article published in the journal Nature in April 2019.

HeH+ is believed to be the first compound to have formed in the universe, and is of fundamental importance in understanding the chemistry of the early universe. This is because hydrogen and helium were almost the only types of atoms formed in Big Bang nucleosynthesis. Stars formed from the primordial material should contain HeH+, which could influence their formation and subsequent evolution. In particular, its strong dipole moment makes it relevant to the opacity of zero-metallicity stars. HeH+ is also thought to be an important constituent of the atmospheres of helium-rich white dwarfs, where it increases the opacity of the gas and causes the star to cool more slowly.

HeH+ could be formed in the cooling gas behind dissociative shocks in dense interstellar clouds, such as the shocks caused by stellar winds, supernovae and outflowing material from young stars. If the speed of the shock is greater than about 90 kilometres per second (56 mi/s), quantities large enough to detect might be formed. If detected, the emissions from HeH+ would then be useful tracers of the shock. Several locations had been suggested as possible places HeH+ might be detected. These included cool helium stars, H II regions, and dense planetary nebulae, like NGC 7027, where, in April 2019, HeH+ was reported to have been detected.

References

- "hydridohelium(1+) (CHEBI:33688)". Chemical Entities of Biological Interest (ChEBI). European Bioinformatics Institute.
- Engel, Elodie A.; Doss, Natasha; Harris, Gregory J.; Tennyson, Jonathan (2005). "Calculated spectra for HeH+ and its effect on the opacity of cool metal-poor stars". Monthly Notices of the Royal Astronomical Society. 357 (2): 471–477. arXiv:astro-ph/0411267. Bibcode:2005MNRAS.357..471E. doi:10.1111/j.1365-2966.2005.08611.x. S2CID 17507960.
- "Hydridohelium (CHEBI:33689)". Chemical Entities of Biological Interest (ChEBI). European Bioinformatics Institute.
- Güsten, Rolf; Wiesemeyer, Helmut; Neufeld, David; Menten, Karl M.; Graf, Urs U.; Jacobs, Karl; Klein, Bernd; Ricken, Oliver; Risacher, Christophe; Stutzki, Jürgen (April 2019). "Astrophysical detection of the helium hydride ion HeH+". Nature. 568 (7752): 357–359. arXiv:1904.09581. Bibcode:2019Natur.568..357G.
doi:10.1038/s41586-019-1090-x. PMID 30996316. S2CID 119548024.
- Andrews, Bill (22 December 2019). "Scientists Find the Universe's First Molecule". Discover. Retrieved 22 December 2019.
- Hogness, T. R.; Lunn, E. G. (1925). "The Ionization of Hydrogen by Electron Impact as Interpreted by Positive Ray Analysis". Physical Review. 26 (1): 44–55. Bibcode:1925PhRv...26...44H. doi:10.1103/PhysRev.26.44.
- Coxon, J.; Hajigeorgiou, P. G. (1999). "Experimental Born–Oppenheimer Potential for the X1Σ+ Ground State of HeH+: Comparison with the Ab Initio Potential". Journal of Molecular Spectroscopy. 193 (2): 306–318. Bibcode:1999JMoSp.193..306C. doi:10.1006/jmsp.1998.7740. PMID 9920707.
- Dias, A. M. (1999). "Dipole Moment Calculation to Small Diatomic Molecules: Implementation on a Two-Electron Self-Consistent-Field ab initio Program" (PDF). Rev da Univ de Alfenas. 5 (1): 77–79.
- Dey, Bijoy Kr.; Deb, B. M. (April 1999). "Direct ab initio calculation of ground-state electronic energies and densities for atoms and molecules through a time-dependent single hydrodynamical equation". The Journal of Chemical Physics. 110 (13): 6229–6239. Bibcode:1999JChPh.110.6229D. doi:10.1063/1.478527.
- Coyne, John P.; Ball, David W. (2009). "Alpha particle chemistry. On the formation of stable complexes between He2+ and other simple species: implications for atmospheric and interstellar chemistry". Journal of Molecular Modeling. 15 (1): 35–40. doi:10.1007/s00894-008-0371-3. PMID 18936986. S2CID 7163073.
- Cantwell, Murray (1956). "Molecular Excitation in Beta Decay". Physical Review. 101 (6): 1747–1756. Bibcode:1956PhRv..101.1747C. doi:10.1103/PhysRev.101.1747.
- Tung, Wei-Cheng; Pavanello, Michele; Adamowicz, Ludwik (2012). "Accurate potential energy curves for HeH+ isotopologues". Journal of Chemical Physics. 137 (16): 164305. doi:10.1063/1.4759077.
- Schwartz, H. M. (1955). "Excitation of Molecules in the Beta Decay of a Constituent Atom". Journal of Chemical Physics. 23 (2): 400–401. Bibcode:1955JChPh..23R.400S. doi:10.1063/1.1741982.
- Snell, Arthur H.; Pleasonton, Frances; Leming, H. E. (1957). "Molecular dissociation following radioactive decay: Tritium hydride". Journal of Inorganic and Nuclear Chemistry. 5 (2): 112–117. doi:10.1016/0022-1902(57)80051-7.
- Bainbridge, Kenneth T. (1933). "Comparison of the Masses of H2 and Helium". Physical Review. 44 (1): 57. Bibcode:1933PhRv...44...57B. doi:10.1103/PhysRev.44.57.
- Bernath, P.; Amano, T. (1982). "Detection of the Infrared Fundamental Band of HeH+". Physical Review Letters. 48 (1): 20–22. Bibcode:1982PhRvL..48...20B. doi:10.1103/PhysRevLett.48.20.
- Pachucki, Krzysztof; Komasa, Jacek (2012). "Rovibrational levels of helium hydride ion". The Journal of Chemical Physics. 137 (20): 204314. Bibcode:2012JChPh.137t4314P. doi:10.1063/1.4768169. PMID 23206010.
- Möller, Thomas; Beland, Michael; Zimmerer, Georg (1985). "Observation of Fluorescence of the HeH Molecule". Physical Review Letters. 55 (20): 2145–2148. Bibcode:1985PhRvL..55.2145M. doi:10.1103/PhysRevLett.55.2145. PMID 10032060.
- "Wolfgang Ketterle: The Nobel Prize in Physics 2001". nobelprize.org.
- Ketterle, W.; Figger, H.; Walther, H. (1985). "Emission spectra of bound helium hydride". Physical Review Letters. 55 (27): 2941–2944. Bibcode:1985PhRvL..55.2941K. doi:10.1103/PhysRevLett.55.2941. PMID 10032281.
- Grandinetti, Felice (October 2004). "Helium chemistry: a survey of the role of the ionic species". International Journal of Mass Spectrometry. 237 (2–3): 243–267.
Bibcode:2004IJMSp.237..243G. doi:10.1016/j.ijms.2004.07.012.
- Cacace, Fulvio (1970). Gaseous Carbonium Ions from the Decay of Tritiated Molecules. Advances in Physical Organic Chemistry. 8. pp. 79–149. doi:10.1016/S0065-3160(08)60321-4. ISBN 9780120335084.
- Lias, S. G.; Liebman, J. F.; Levin, R. D. (1984). "Evaluated Gas Phase Basicities and Proton Affinities of Molecules; Heats of Formation of Protonated Molecules". Journal of Physical and Chemical Reference Data. 13 (3): 695. Bibcode:1984JPCRD..13..695L. doi:10.1063/1.555719.
- Carrington, Alan; Gammie, David I.; Shaw, Andrew M.; Taylor, Susie M.; Hutson, Jeremy M. (1996). "Observation of a microwave spectrum of the long-range He⋯H2+ complex". Chemical Physics Letters. 260 (3–4): 395–405. Bibcode:1996CPL...260..395C. doi:10.1016/0009-2614(96)00860-3.
- Pauzat, F.; Ellinger, Y. (2005). "Where do noble gases hide in space?". In Markwick-Kemper, A. J. (ed.). Astrochemistry: Recent Successes and Current Challenges (PDF). Poster Book IAU Symposium No. 231. 231. Bibcode:2005IAUS..231.....L. Archived from the original (PDF) on 2007-02-02.
- Beach, J. Y. (1936). "Quantum‐Mechanical Treatment of Helium Hydride Molecule‐Ion HeH+". Journal of Chemical Physics. 4 (6): 353–357. Bibcode:1936JChPh...4..353B. doi:10.1063/1.1749857.
- Toh, Sôroku (1940). "Quantum-Mechanical Treatment of Helium-Hydride Molecule Ion HeH+". Proceedings of the Physico-Mathematical Society of Japan. 3rd Series. 22 (2): 119–126. doi:10.11429/ppmsj1919.22.2_119.
- Evett, Arthur A. (1956). "Ground State of the Helium‐Hydride Ion". Journal of Chemical Physics. 24 (1): 150–152. Bibcode:1956JChPh..24..150E. doi:10.1063/1.1700818.
- Cacace, Fulvio (1990). "Nuclear Decay Techniques in Ion Chemistry". Science. 250 (4979): 392–399. Bibcode:1990Sci...250..392C. doi:10.1126/science.250.4979.392. PMID 17793014. S2CID 22603080.
- Speranza, Maurizio (1993). "Tritium for generation of carbocations". Chemical Reviews. 93 (8): 2933–2980. doi:10.1021/cr00024a010.
- Lubimov, V.A.; Novikov, E.G.; Nozik, V.Z.; Tretyakov, E.F.; Kosik, V.S. (1980). "An estimate of the νe mass from the β-spectrum of tritium in the valine molecule". Physics Letters B. 94 (2): 266–268. Bibcode:1980PhLB...94..266L. doi:10.1016/0370-2693(80)90873-4.
- Tolliver, David E.; Kyrala, George A.; Wing, William H. (1979). "Observation of the Infrared Spectrum of the Helium-Hydride Molecular Ion [4HeH]+". Physical Review Letters. 43 (23): 1719–1722. doi:10.1103/PhysRevLett.43.1719.
- Fernández, J.; Martín, F. (2007). "Photoionization of the HeH+ molecular ion". Journal of Physics B. 40 (12): 2471–2480. Bibcode:2007JPhB...40.2471F. doi:10.1088/0953-4075/40/12/020.
- Mannone, F., ed. (1993). Safety in Tritium Handling Technology. Springer. p. 92. doi:10.1007/978-94-011-1910-8_4. ISBN 978-94-011-1910-8.
- Liu, X.-W.; Barlow, M. J.; Dalgarno, A.; Tennyson, J.; Lim, T.; Swinyard, B. M.; Cernicharo, J.; Cox, P.; Baluteau, J.-P.; Pequignot, D.; Nguyen, Q. R.; Emery, R. J.; Clegg, P. E. (1997). "An ISO Long Wavelength Spectrometer detection of CH in NGC 7027 and an HeH+ upper limit". Monthly Notices of the Royal Astronomical Society. 290 (4): L71–L75. Bibcode:1997MNRAS.290L..71L. doi:10.1093/mnras/290.4.l71.
- Harris, G. J.; Lynas-Gray, A. E.; Miller, S.; Tennyson, J. (2004). "The Role of HeH+ in Cool Helium-rich White Dwarfs". The Astrophysical Journal. 617 (2): L143–L146. arXiv:astro-ph/0411331. Bibcode:2004ApJ...617L.143H. doi:10.1086/427391. S2CID 18993175.
- Neufeld, David A.; Dalgarno, A. (1989).
"Fast molecular shocks. I – Reformation of molecules behind a dissociative shock". The Astrophysical Journal. 340: 869–893. Bibcode:1989ApJ...340..869N. doi:10.1086/167441. - Roberge, W.; Delgarno, A. (1982). "The formation and destruction of HeH+ in astrophysical plasmas". The Astrophysical Journal. 255: 489–496. Bibcode:1982ApJ...255..489R. doi:10.1086/159849.
<urn:uuid:06161b76-aadb-4f65-8e2e-ccb916b1b6dd>
CC-MAIN-2021-21
https://www.limsforum.com/informatics-educational-institutions-programs/?rdp_we_resource=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FHelium_hydride_ion
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00457.warc.gz
en
0.768408
5,493
3.59375
4
The bicentenary of Franz Liszt (1811–1886) follows hard upon those of Berlioz, Mendelssohn, Chopin, and Schumann, and he has conserved his place as one of the supreme Romantic composers. Nevertheless, his career as a composer was always cursed by the fact that he was also, it is generally agreed, the greatest pianist who ever lived. The major part of his work was for piano, much of it tailored for himself to perform, many of the pieces presenting a difficulty of execution almost never before seen. As a result, even today most performances of Liszt are generally intended not as a specifically musical experience, but chiefly to display the pianist’s technique, just as productions of Lucia di Lammermoor are much concerned to showcase the soprano’s highest notes and coloratura ability to warble with a flute (or glass harmonica, in the original version). In writing about Liszt as a composer, the constant invasion of his piano scores by long passages of challenging and conspicuous technical difficulty is rarely treated seriously. Nevertheless, these spectacular passages were one of the reasons that his invention of the piano recital became such a success. No one before Liszt played an entire public concert on the piano. At first, these programs were very flashy, often dominated by transcriptions of popular airs from contemporary operas. He was the first composer who turned a musical performance into something like an athletic feat. He invented the principal musical effect that for almost two centuries has sent audiences roaring to their feet with applause: the single musical line played strongly and rapidly with both hands spanning octaves for a lengthy dramatic passage, fortissimo and staccato. The right-hand octaves in the higher register provide metallic brilliance, and the lower left-hand octaves a thunderous sonority. In addition, when the musical line makes large leaps quickly from side to side, an attractive acrobatic element is added visually for the audience’s enjoyment, as at the opening of Liszt’s Concerto No. 1 in E-flat Major. Liszt also invented, I believe, the writing of rapid and unrelenting octaves for several pages (like the end of the Hungarian Rhapsody No. 6).1 This made the bravura style even more into an athletic feat since the unremitting display of fast octaves for several pages will cause sharp pains to shoot up the forearm of the pianist until he or she has learned to relax the wrist muscles when playing the passage, not an easy technique to acquire if the passage must be played so that it is always getting louder and faster. A piece like this will win the pianist admiration not just for skill but also for stamina. To emphasize the athletic aspect of bravura playing was not a purely personal ambition of Liszt. The project was in the air during the early nineteenth century. Liszt studied as a young boy in Vienna with Carl Czerny, a composer-pianist who had had a few lessons with Beethoven. Most of Czerny’s compositions were didactic compilations of exercises with names like The School of Velocity to develop the strength of the fingers of young pianists. In his other works, Czerny is, in fact, a somewhat better composer than his reputation today would have us believe, but the numerous exercise volumes are not very exciting. The Étude in Twelve Exercises that Liszt wrote and published at the age of sixteen in 1827 betrays the influence of Czerny. The exercises are largely uninspiring, except for one in A-flat Major with a lovely Italianate melody. 
There was, however, an internationally famous example as a model before the youthful Liszt, Nicolò Paganini, who revolutionized violin playing, publishing his 24 Caprices in 1820, and Liszt finally heard him play in Paris in 1832. Paganini did not always attempt to produce a pleasing or beautiful sound on the violin, but often astonished the public by attacking the instrument with brutal and dramatic violence. Liszt determined to do for the piano what Paganini had accomplished for the violin. He did, indeed, imitate the brutality and created fierce sonorities on the keyboard never heard before. He transcribed six of the Paganini caprices as études in 1838. Before that, however, another important model had already come before him, Fryderyk Chopin, who, in 1829, at the age of nineteen, had already begun to transform the genre of the virtuoso piano étude. His first set of twelve études, opus 10, was finished by 1832, and the second set, opus 25, by 1837. One year older than Liszt, Chopin was a great admirer of his younger colleague’s playing, although relatively skeptical about his compositions. Chopin’s first set of études was dedicated to Liszt, and the second set to Liszt’s mistress, Marie d’Agoult. His études are among the most difficult works for the piano, but they are less spectacular than Liszt’s in their display of bravura, requiring more subtle nuances of phrasing. Josef Hofmann, considered by many the finest pianist of the twentieth century, claimed that nobody could give a satisfying interpretation of all the Chopin études. The composer himself knew that he could not play them as well as Liszt. They were an extraordinary stimulus for Liszt in the years 1837 and 1838 when he produced his Paganini and Transcendental Études. Robert Schumann had also previously transcribed some of the Paganini caprices, but his arrangements were very modest. Liszt dedicated his more pretentious transcriptions to Clara Schumann—she was (like everybody else) certainly incapable of playing them, as his version is sometimes close to the absolutely impossible, and was, in fact, completely rewritten by the composer almost two decades later. It is this later simplified edition that is played today. For example, in the first version of the sixth étude the right hand skips all over the piano with huge rapid leaps, making it difficult to hit the right notes, in an evident attempt to imitate the way a violinist’s bow bounces over all four strings. This étude was later rewritten and toned down to leave the hand more conveniently resting in one register. At the same time as his Paganini transcriptions, Liszt began to rewrite his twelve uninteresting youthful exercises and turn them into the sensational and masterly Transcendental Études. These, as well, skirt the impossible and had to be rewritten twenty years later to make them more accessible. This suggests that even Liszt may have had problems executing the earlier versions. The inspiration for the first recasting in 1838 of the youthful exercises was obviously Chopin, as Liszt’s early F-Minor exercise was made more interesting by superimposing an agitated melody very like Chopin’s F-Minor étude of opus 10 on top of the sixteen-year-old’s bland effort. Even more revealing both of Chopin’s influence and of Liszt’s imaginative and original exploitation of that influence is the famous Transcendental Étude called Feux Follets (Will o’ the Wisp). 
The original B-flat exercise from which it arises was an exceedingly simple piece negotiable not only by a sixteen-year-old pianist but even by a talented ten-year-old. It provided a skeleton form for the compositional metamorphosis that turned out to be one of the supremely difficult works of the repertory, still demanded today by piano competitions. It was a frequent display piece for Sviatoslav Richter. Liszt was obviously impressed by the opening double-note trill in thirds followed by a chromatic scale in thirds of Chopin’s Étude (in Thirds) in G-sharp Minor, opus 25, and he created Feux Follets by turning the simple opening of his juvenile B-flat Major exercise into a trill in double notes (sixths, fifths, and fourths) followed by a chromatic scale in the same double notes, although the melody adheres otherwise to the original outer shape and outline of the earlier exercise. However, the changed tone color and harmonies parade dramatic contrasts and a ravishing variety of delicate sounds of a new character in piano literature, while the Chopin étude is more focused, intense, and unified. Even if there is an important debt to Chopin in Liszt’s Transcendental Études, one must admire the originality of the imaginative adaptation. Comparing these two famous pieces reveals the profound difference between the two composers. Another basic difference between the compositional technique of Liszt and Chopin was observed long ago by Donald Francis Tovey, and Feux Follets once again offers a good example. When the principal theme in B-flat Major returns in the new tonality of A Major, it has become very awkward for the hands to play the double-note trill and chromatic scale as the relation of black to white keys has changed with the new key,2 and Liszt accordingly rewrites the theme in a new form that fits the hands to the new harmony. Chopin, as Tovey remarked, is more ruthless: when a figure that lies well for the hands in the opening key returns in a less convenient form, he generally demands that the pianist cope with the new difficulty, refusing to make any musical concession to the physical discomfort. Liszt is often supremely difficult but almost never really awkward, and always composes with the physical character of the performance in mind. In the conception of modern virtuosity, he was even more important than Chopin, whose achievement was more idiosyncratically personal. For the concertos of Tchaikovsky, Grieg, and Rachmaninov and the works of Balakirev and Ravel, it is the innovations of Liszt that come to the fore, and that is true even in the piano compositions of Aaron Copland, Sergei Prokofiev, Elliott Carter, and many others. It is interesting to see in the Transcendental Études how close Liszt actually sticks in some respects to his earlier exercises written when he was sixteen, using them as a basis to create new technical difficulties and novel imaginative effects. A large proportion of some of the finest works of Liszt are actually rethinkings of an earlier version. After the age of forty, Liszt made the more extravagant inspirations of his thirties more accessible, polishing and simplifying them—transcribing them, in short—into the forms that one hears today in the concert hall and on discs. These transcriptions of his own earlier compositions include most of his best-known works, including the great landscape tone poems of his Swiss and Italian “pilgrimage years”; even the profound black despair of his Vallée d’Obermann.
An excellent new book, Liszt as Transcriber by Jonathan Kregor, selectively discusses some of the hundreds of transcriptions of Liszt’s career, largely omitting the many transcriptions or rewritings of his own work. Kregor concentrates on the transcriptions of Berlioz’s Symphonie Fantastique, the Beethoven symphonies, the overtures of Weber, the Schubert lieder, and the selections from the Wagner operas. A final chapter treats a few late transcriptions of César Cui, Saint-Saëns, and Verdi. This last section glances at the problem of the last works of Liszt, which experimentally prefigure some of the modernist effects that would appear with Debussy and Schoenberg, and Kregor makes a good case for finding in the late transcriptions some traces of the stylistic developments already present in some of Liszt’s early works. Unfortunately these late pieces are rarely strong enough to bear a comparison with Debussy or Schoenberg. Kregor’s choices are interesting because they cast light on Liszt’s predominant role in the musical politics of the nineteenth century. With Berlioz and then Wagner, he was the leader of the group in favor of new music, new forms, and new styles, opposed by the traditionalists led by Joseph Joachim and Johannes Brahms. Kregor begins with intelligent considerations on the different methods of transcription in the nineteenth century, from simple attempts to represent the main musical line to the imitations of different kinds of orchestral details on the keyboard. The great question, of course, in the transcription of a symphony is whether one makes a successful piece that is pianistic and sounds as if it were written directly for the piano or whether one can somehow make the piano resemble the original orchestral instruments. Liszt did both and sometimes very shrewdly. For example, the first of the Paganini Études (not treated by Kregor) begins entirely for the left hand alone. This gives the pianist the opportunity of feeling like a violinist, since it is with the left hand that the violinist chooses and makes the exact pitches of the score, while the right hand with the bow only releases the sound. (When Brahms arranged for the piano the solo violin Bach Chaconne for the left hand alone, he claimed that playing it made him feel like a violinist, but he was quite clearly imitating Liszt’s example.) Liszt’s transcription of the Berlioz symphony made the reputation of the composer at a time when it was difficult for him to give performances. The famous review of the symphony written by Robert Schumann was actually based on an examination of Liszt’s transcription, since the full score was not available, and Schumann had never heard it. This transcription was an act of publicity. At a time when orchestral performances were much rarer, almost nonexistent outside of large cities, the fundamental importance of the complete transcription of all nine Beethoven symphonies, started only a decade after Beethoven’s death, was a similar work of education and publicity, and it was fundamental in the construction of Beethoven’s future fame, a goal for which Liszt worked tirelessly for many years, including his aid for the construction of a monument in Beethoven’s home town of Bonn. These symphonic transcriptions were more than merely educational, as Liszt actually performed several of the symphonies on the piano in public concerts. They were therefore not only for private use and information but for public display beginning in the 1830s when Liszt gave piano recitals for money.
This lasted until 1847, after which he abandoned commercial performance at the piano since his later mistress, the Princess Carolyne von Sayn-Wittgenstein, considered such concerts degrading. Afterward he conducted an orchestra, but played the piano only for charity. The arrangements of Schubert songs also had an educational role in the establishment of Schubert’s reputation and German style in general (although Schubert’s songs had already won a very large following with the public in Paris). Here the educational purpose is more dubious, since it is not hard to sight-read songs at the piano, adding at least some of the vocal line, and in addition, several of Liszt’s arrangements of Schubert are almost insanely difficult. One verse of The Linden Tree with a vocal line of folk song–like simplicity, arranged by Liszt and illustrated in Kregor’s book, requires the pianist to play the melody with the left hand over some exceedingly complex chords while the right hand trills rapidly with the weakest fingers, four and five, and the right thumb executes a rapid and agitated accompanying figure. (I remember that when I first saw the arrangement some years ago, I immediately practiced this page for half an hour just to see if it was possible—it is, but would require hours of further study to do it smoothly and balance all the sonorities.) The arrangement of the song The Trout as well is so difficult that it would do more to dampen than to encourage an amateur’s interest in Schubert, but it makes a great encore piece. Kregor presents an interesting case for the fact that the selected songs from Schubert’s Winterreise are arranged by Liszt in an order that makes a new and coherent cycle, although it should be said that since the songs were sold separately, this might have been hard to realize. Liszt’s transcriptions of Chopin songs do form a cycle that Chopin never intended, and they are much more successful musically in Liszt’s version than in the original vocal setting, because Liszt has actually made them more Chopinesque by adapting some of Chopin’s piano works in the introductions or the accompaniments. The arrangements of Schubert waltzes called Soirées de Vienne contribute to the public presentation of these beautiful dances, since in their original unpretentious form they are obviously intended only for private performance for dancers at home, making little effect in public, and Liszt’s reworkings are genuine improvements for the concert platform—although perhaps they are even better played privately in their original simplicity. In leaving out the paraphrases of popular contemporary Italian and French operas, Kregor minimizes one essential commercial purpose of the transcriptions, the display of the pianist’s technique. These transcriptions have been underrated since many of them are mere showpieces. The great ones, however, like the Reminiscences of Norma, amount to a synoptic and critical view of the opera worth much more than most of what has been written about the work. It is true that it is above all in the opera paraphrases that all the difficult finger exercises that Liszt learned from his teacher Czerny along with new ones he invented are introduced fortissimo and velocissimo or alternatively with great delicacy but usually with stupefying public effect. The imaginative power of his most outlandish inspirations is breathtaking. One of the most famous examples occurs in his paraphrase of Mozart’s Don Giovanni, where he transforms a simple half cadence into an enormous climax.
The harmony is the simplest possible: a dominant seventh, the most commonplace penultimate chord of all tonal music from 1700 to the present day. Liszt places the top note of the chord in the right hand very high on the keyboard and the bottom note at the far left. These notes remain fixed and the pianist goes back and forth from them to the notes in the center. Both hands play all the notes of the chord (twenty-two notes over the whole keyboard), leaping at high speed in contrary motion from top and bottom gradually to the center of the keyboard, ending with huge leaps, the whole passage marked “bravura fortissimo.” In his important edition of Liszt, Ferruccio Busoni admonishes the pianist not even to think of slowing down, and adds that no matter how much you practice or how superior your technique has become, this passage is still risky—in other words, you can never be sure that you will hit all the right notes. The astonishing visual effect is essential to the music. When this work is played, connoisseurs watch the pianist at this moment intently to see what will happen, just as the balletomanes at a performance of Swan Lake watch the black swan, counting carefully to see if she will get all of the thirty-two fouettés in a straight line. It was only at the end of the nineteenth century that the Russian ballet developed its athletic character comparable to the transformation of virtuosity in pianism, and it was a development of similar aesthetic consequence in the history of the art. Some musicians who appreciate Liszt with passion believe, like his mistresses, that all his bravura showmanship was unworthy of him and would like to purge his reputation of any attempt to emphasize its importance. It made, however, an important contribution to the dramatic force of his style. The emotional impact of his inventions of virtuosity can sometimes be found at the heart of even his most meditative work. It would be hard to overestimate the cultural importance of his bravura style for the history of classical music from his day to ours. Kregor’s book gives a persuasive account of the importance of Liszt’s transcriptions in contemporary musical politics. They made a great deal of music available to many who had little chance of contact with it, and above all they indicated an extraordinary variety of ways that music could be interpreted, and the art of imagining a score with different possibilities of sound. We might say that the transcription transferred the weight of interest from the written score to the performance, and revealed the way that performance could rewrite the original score. At one point, however, it seems to me that Kregor underestimates the variety of Liszt’s playing. Berlioz reported that when Liszt played Beethoven’s “Hammerklavier” Sonata, he rendered the score with absolute fidelity. Kregor thinks that Berlioz failed to notice Liszt’s usual interpretive shenanigans, because he was following the score. I think this very unlikely. Liszt was quite capable of playing with absolute fidelity when it was a challenge to do so. The Hammerklavier Sonata, largely unplayed at the time because of its difficulty, would have presented just such a challenge. On one occasion, Chopin was so outraged at the freedom of Liszt’s playing of one of his nocturnes in a salon that he stormed over to the piano and played it himself. The next day Chopin was asked to play it again, and he said he would do so if they put out the lights. 
When the lamps were lit again afterward, it was Liszt who had played exactly as Chopin had done the evening before. Kregor does not deal with the transcriptions of the organ works of Bach by Liszt: there are seven, six of which are absolutely faithful to the original text, just making it possible to play with two hands a work that demanded manual and pedal keyboards; but the seventh transcription is extremely free with a great many Lisztian additions. I would think that Liszt’s performances, like his transcriptions, could range from faithful to highly personal. There is, of course, more to Liszt than the virtuoso piano compositions, but little of the rest has either a comparable power or the historical weight. The orchestral tone poems are no longer an essential part of the symphonic repertory, but have been replaced by those by Richard Strauss. There are many exquisite songs, often original and highly idiosyncratic. Of the larger orchestral works, only the Faust Symphony is still performable today (and I confess I have found a fine performance of the transcription for two pianos more effective than the original symphonic form). The oratorio St. Elizabeth is positively lethal. Complete honesty would compel one to admit that even in the finest works of Liszt there is occasionally a moment of somewhat commonplace inspiration, and much of his production has always seemed to be not in the best of taste. He did not have the aristocratic grace, impeccable workmanship, and morbidly intimate sentiment of Chopin, or the simple surge of lyrical passion in the best of Schumann. So much of Liszt’s work, however, has an effective power that paralyzes criticism and makes questions of taste irrelevant.
1. The tiring repeated octaves in Schubert’s Erlkönig accompaniment largely stay on the same notes, while Liszt’s make considerable leaps.
2. The arrangement of black keys is asymmetrical within the octave (two black keys between three white followed by three black keys within four white), so the fingering of a passage must often be altered if the key is changed.
<urn:uuid:cfe757c5-7bc6-4d28-a6f3-b2960025c6b5>
CC-MAIN-2021-21
https://www.nybooks.com/articles/2012/02/23/super-power-franz-liszt/
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00296.warc.gz
en
0.967307
5,108
3.03125
3
100 Things We Learned in 2019

From findings about space and parasites to new discoveries in math and ancient civilizations, 2019 was a big year.

1. Women might perform better on tests in warmer rooms. A study published this year in the journal PLOS One found that female students performed better on simple math and verbal tests with each degree that room temperature rose. Female performance on math questions increased a whopping 27 percent at temperatures over 80 degrees Fahrenheit versus their results in rooms under 70 degrees—but more research is needed.

2. In 2019, we may have learned just how rare supercentenarians, or people over the age of 110, really are. States in the U.S. began introducing birth certificates at different times in the last century, and according to research announced this July, the introduction of these standardized records coincided with a 69 to 82 percent drop in the rate of people living to the age of 110. In other words, a lot of our supercentenarians are probably not that old and just don’t have good records of when they were born.

3. The 10,000-hour rule was dealt a critical blow in 2019. For years, scholars have questioned the legitimacy of ascribing outsized importance to the role that 10,000 hours of practice plays in achieving mastery. The idea originally spawned from a study published in 1993, which showed the best violinists practiced the most, and was made famous in Malcolm Gladwell’s Outliers. But a replication of the study this year found that some violinists could practice as much as better players but still not reach their level. To be fair to Gladwell, he never said 10,000 hours of practice was a guarantee for mastery, but he did oversimplify the original 1993 study, according to one of its authors. The bottom line? It probably requires more than just practice to make perfect.

4. If you want to make improvements, try telling your goal to a mentor. Research from Ohio State University concluded that sharing your goals with someone you consider “higher status” will make you more committed to those goals. They found this by having undergraduates set goals with a lab assistant who was either dressed up in a suit, claiming to be an expert Ph.D. student, or in casual clothes, pretending to be a local community college student.

5., 6., and 7. We got a few studies that could help you achieve professional success this year. For example, in a study of 183 employees, researchers found that those with hobbies after work, such as playing sports or volunteering, were more proactive during the workday. And according to a study that sorted 260 undergrads into 78 teams, people tend to like leaders who are extroverted—unless those leaders also consider themselves assertive or very warm, in which case they’re liked less than the typical extroverted leader. In a study of workplace ethics, researchers found that when participants believed that being honest would take more effort, they were more likely to be dishonest.

8. In a study of 332 individuals, researchers found that people do tend to have a “type” when it comes to romantic partners. They dated people with similar traits on the Big Five Inventory: openness to new experiences, conscientiousness, extraversion, agreeableness, and neuroticism. Perhaps unsurprisingly, people high in extraversion and openness to new experiences don’t stick to a type as often.

9. Now we know how much time you have to be outside to reap benefits.
It probably won’t blow your mind if we tell you that if you spend time outside, you’ll be healthier and happier. But research released this year provided an actual amount of time you want to hit to get the benefits: 120 minutes per week outside. Do that 5000 times and you’ll hit 10,000 hours, which, as we now know (see #3), might not mean much ...

10. Until 700 million years ago, Venus may have had liquid water. That’s according to a study presented at a conference in September 2019. NASA’s Pioneer Venus mission previously hinted that water may have been possible on the planet at one time, so researchers did five simulations based on data from the Pioneer Venus mission and found that the planet may have been habitable for 2 to 3 billion years.

11. An elevator from a low-orbiting point above Earth to the moon is possible. It would require a 200,000-mile-long cable and cost around $1 billion, based on the calculations of Zephyr Penoyre and Emily Sandford. That estimate is based on a cable only around as wide as a pencil lead, and even so there are a number of challenges to overcome, from wildly varying cost estimates to the danger of orbiting space junk, but supporters of various models of a space elevator contend that these obstacles are surmountable.

12. We now know what the farthest object we’ve ever explored with a spacecraft looks like. MU69 is 4.1 billion miles from Earth. Photos were taken of the Kuiper Belt object, now renamed Arrokoth, in 2014, but clearer ones taken this year show that it looks kind of like a snowman.

13. And we now know what black holes look like, too. On April 10, we all learned what black holes look like when a photo was released of a black hole located roughly 54 million light years away. The scientists who made it happen received the Breakthrough Prize in fundamental physics, which comes with a payout of $3 million.

14. We revised the Hubble Constant this year. New calculations came out this year that suggested the Hubble Constant—essentially, the expansion rate of the universe—is around 82.4 kilometers per second per megaparsec, much higher than previous estimates. What does this mean? Well, for one, it suggests the universe is just 11.4 billion years old, considerably younger than the previously believed 13.7 billion years. Like many of these newer pieces of research, though, the findings are still being debated.

15. We discovered that the moon is older than we thought. A new study reveals the moon formed about 4.51 billion years ago, 100 million years older than previously thought. (But we have to say, it doesn’t look a day over 4.4 billion!)

16. The moon is also shrinking. This year, scientists discovered that as the moon gets smaller, moonquakes occur, just like earthquakes.

17. We found a planet half the size of Jupiter. It’s 31 light years away, and it’s orbiting a star only 12 percent the size of our sun. It’s rare to find a planet this big, let alone one that orbits a dwarf star.

18. Space affects the gut microbiome. We already knew that space makes people different in many ways. But thanks to a study on astronaut Scott Kelly, we now know that space travel alters the ratios of bacteria in the gut’s microbiome—though its composition does normalize after some time back on Earth.

19. We found the earliest protocluster ever discovered.
A cluster of galaxies is a group of galaxies that are held together with gravity, and in September 2019, astronomers found a group of 12 galaxies in what’s known as a protocluster—basically, in the early stages of becoming a cluster. This was the earliest known protocluster ever discovered, which will hopefully shed light on how they form and evolve.

20. We know more about how the Milky Way formed. Basically, it slowly collided with another galaxy about 25 percent of its size and enveloped the entire thing, a discovery that was announced in 2018. In 2019, researchers at the Institute of Astrophysics of the Canary Islands in Spain helped shed more light on this event. By studying our galaxy’s stars, especially extremely old stars found in a sort of “halo” that was likely caused by that galactic collision, they were able to more precisely identify the timing of the collision, and hope to glean insights into “the formation of galaxies more generally.”

21. Office Space could have starred some different celebs. This year it became public that, back in the ’90s, 20th Century Fox had hoped that Matt Damon and Ben Affleck would star in Office Space.

22. Filming one scene from When Harry Met Sally... took a lot of takes. In other new news about old movies, Rob Reiner revealed to Entertainment Weekly this year that the scene in When Harry Met Sally... in which Harry, Sally, Jess, and Marie are all on the phone at the same time required a whopping 61 takes.

23. There could have been a sequel to a popular Julia Roberts movie. EW also got the scoop that there was almost a sequel to My Best Friend’s Wedding. Julia Roberts’s other best friend in the film, George, would’ve been getting married in the second film.

24. The Doors’ song “Touch Me” originally had different lyrics. In music news, this year we learned that The Doors’ song “Touch Me,” written by Robby Krieger, was originally called “Hit Me,” but Jim Morrison said, “I’m not saying that. People might take me literally.”

25. The cover of Abbey Road was devised on a tight deadline. The iconic cover photo of Abbey Road was an idea devised on a deadline of about two days.

26., 27., 28., 29., and 30. Some incredible lost works were discovered this year. Steven Hoelscher, a professor at the University of Texas at Austin, announced the discovery of an essay by Langston Hughes while researching an investigative journalist. The essay, “Forward From Life,” was about an encounter with a chain gang escapee. While cataloguing Anthony Burgess’s papers, the director of the International Anthony Burgess Foundation found the lost, incomplete follow-up to A Clockwork Orange, titled The Clockwork Condition. A lost J.R.R. Tolkien work that was found in an Oxford basement was published this year. Tolkien’s Lost Chaucer contains his commentary on the work of Geoffrey Chaucer. Jason Scott-Warren, a lecturer at Cambridge, was reading an article on a copy of Shakespeare’s plays held at the Free Library of Philadelphia when he realized that the notes in the margins might identify it as John Milton’s copy of the plays. And a Samuel Clemens signature was discovered this year in a 3-mile-long cave in Missouri. For decades, people had been searching for the spot on the wall that a young Clemens signed.

31. The 2019 book Letters from Hollywood published many newly uncovered letters. In one, Hattie McDaniel takes on the criticism she received for playing roles like Mammy in Gone with the Wind.
She wrote, “Truly, a maid or butler in real life is making an honest dollar, just as we are on the screen.”

32. This year, geologists and researchers took another look at the area where the crater formed from the massive impact that killed the dinosaurs. They found that within minutes, that location was covered in over 100 feet of molten rock. An hour later, ocean waters flooded back in, depositing another 300 feet of rock, and then, within a day, the area was hit by a tsunami.

33. A new genus of pterosaur was identified this year. Cryodrakon boreas was identified in Alberta, Canada. The reptile, which lived during the Cretaceous period, had a wingspan of at least 16 feet. Fun fact: Its name translates to ice dragon!

34. The Ambopteryx longibrachium was also announced. This dinosaur, from 163 million years ago, was about 13 inches long and had wings like a bat.

35. We also got the Aquilarhinus palimentus dinosaur, which lived 80 million years ago. That mouthful of a name roughly translates to eagle-nose shovel-chin. So if you’re a linguistics-loving bully, have fun with that one.

36. We learned of a giant bird, a member of the Pachystruthio dmanisensis species, which lived 2 million years ago. At around 1000 pounds, it weighed about the same as a modern polar bear. And at 11 feet tall, it will haunt our nightmares.

37. A study of mosasaurs this year found that the swimming reptiles didn’t just use their tails to get around. Mosasaurs probably had giant pectoral muscles, so they could swim quickly by engaging those pecs.

38. New research on old fossil footprints led to a discovery. There are 280-million-year-old fossil footprints in Grand Canyon National Park. New research was published this year on these prints, which were created before dinosaurs came around. Their likely makers were Ichniotherium, which, before this research, we didn’t know could thrive in the desert.

39. There are large holes in T. rex skulls, and this year a team of paleontologists hypothesized why that might be. In research published in The Anatomical Record, they laid out their evidence that the holes were once likely filled with tissue and blood vessels, which served to keep the large T. rex cool.

40. Two hundred and fifty-seven footprints from Neanderthals who lived 80,000 years ago were excavated in France. It was previously unclear how many Neanderthals grouped together, but these prints led scientists to believe that this group contained 10 to 13 members.

41. We now know what Denisovans might have looked like ... The Denisovans are another of our ancient relatives, who lived at the same time as Neanderthals. By using the DNA taken from the finger bone of a Denisovan, some scientists this year came up with a picture of what the ancient people may have looked like.

42. ... and sequenced the genome of a member of the ancient Harappan civilization. We don’t have much information about the ancient Harappan civilization of the Indus Valley, which had its peak between around 2600 and 1900 BCE. But this year scientists sequenced the genome of a woman from the civilization, which revealed their ancestry as well as connections to people all over Eurasia.

43. We learned more about the Philistine civilization this year. The Philistine civilization, from between the 12th and 7th centuries BCE, is mentioned in the Bible. Like the Harappans, they’ve been pretty mysterious. But this year, DNA from 10 individuals was acquired and showed that the Philistines traced part of their origin to southern Europe.
44. The Edomites made a sudden technological leap in the 10th century BCE. Another group that pops up in the Bible is the Edomites. Thanks to archeological evidence, we know that this society was mining copper for tools and weapons in the Late Bronze and Early Iron Ages. This year, it was discovered that the Edomites had a sudden technological leap in the 10th century BCE that led to a more efficient, better-controlled smelting process. This “punctuated equilibrium model” for technological development suggests that, rather than the result of a long period of gradual improvement, the improvements in smelting may have been the result of a “punctuation event.” Research suggests that Ancient Egyptian influence may have been the cause.

45. Monte Alto artists were aware of magnetism—and used it in their art. The Monte Alto people lived in ancient Mesoamerica around 500 to 100 BCE, preceding the Maya classic period. In the Journal of Archaeological Science this year, a study was published indicating that Monte Alto artists not only were aware of magnetism but actually created sculptures that incorporated the raw materials’ magnetic properties.

46. Babies in the Neolithic era drank ruminant milk out of bottles. Researchers found this year that as early as the Neolithic era, 7000 years ago, babies would drink ruminant milk out of “baby bottles.” (Ruminants are a type of mammal that includes cattle and sheep.) Some of the unearthed bottles are even shaped like animals. Aww!

47. A virtual autopsy of a mummified crocodile revealed new information about the practice. Crocodiles were mummified in ancient Egypt. And a virtual autopsy of one such crocodile showed that the animals were likely hunted for the specific purpose of being mummified.

48. A found photo of Harriet Tubman went on display this year. A photograph of Harriet Tubman taken in the 1860s went on display at the National Museum of African American History and Culture in 2019. It was in a photo album that had originally been owned by a Quaker schoolteacher and abolitionist.

49. The remains of the Clotilda were found. The last slave ship ever to arrive in the U.S., which came (illegally) around 1860, was confirmed to be in the Mobile River in Alabama. The Clotilda had taken 110 Africans from West Africa to Alabama before being burned by its captain.

50. There are 4.5-billion-year-old continents beneath the surface of the Earth. Eighteen hundred miles below the Earth’s surface, there are 4.5-billion-year-old continents. And this year we learned that they may be the result of an ocean of magma dating to the very beginning of the Earth’s formation.

51. Greater Adria was discovered. Greater Adria, an entire lost continent the size of Greenland, was discovered this year under Europe. One hundred million years ago, tectonic shifts moved it underwater in the Mediterranean.

52., 53., and 54. A number of math discoveries were made this year. In math, cubing a number means multiplying it by itself two times. Up until this year, mathematicians were able to represent every number from 0 to 100 as three cubed integers added together, or—in the case of numbers like 4, 5, and 13—to prove that such a thing was impossible. The exception was the number 42, which mathematicians had failed to represent as the sum of three cubed integers, or to prove it impossible. In 2019, two mathematicians figured out how to represent 42 in this way, sparking many Hitchhiker's Guide to the Galaxy-related headlines.
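The 42 result is easy to check for yourself. Below is a minimal verification sketch in Python (our language choice, not anything from the original article); the three integers are the solution as reported by Andrew Booker and Andrew Sutherland in 2019, and Python's arbitrary-precision integers make the arithmetic exact:

```python
# Checking the reported sum-of-three-cubes representation of 42
# (Booker & Sutherland, 2019). Python ints have arbitrary precision,
# so each 17-digit term is cubed exactly, with no overflow or rounding.
x = -80538738812075974
y = 80435758145817515
z = 12602123297335631

total = x**3 + y**3 + z**3
print(total)  # prints 42 if the digits above are transcribed correctly
assert total == 42
```

Finding such a triple reportedly took over a million hours of distributed computing time; verifying one takes a fraction of a second.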
Another math problem that went unsolved until this year was a theoretical question: In a lottery in which the winning number is infinite, as are the tickets, is there a ticket that always wins? The answer: nope. We learned 9 trillion more digits of pi this year thanks to Google employee Emma Haruka Iwao, who got us to 31.4 trillion digits total (calculations were done via computer ... not by hand, of course).

55., 56., 57., 58., and 59. A number of world records were set in 2019. Speaking of things that go on for a long time: this year we learned that a man can do tai chi for a full 36 hours. That’s what Samuel Michaud did to break the world record for consecutive hours practicing tai chi. We also now know that a person, specifically Lata Tondon, can cook for over 87 hours straight. She cooked up 1600 kilograms of grains and other dishes for around 20,000 people. We also learned that Cam Newton—another record breaker this year—can catch 51 footballs one-handed in just 60 seconds. And we learned that 978 students and teachers will show up and floss (the dance, that is) simultaneously if a world record is on the line. Finally (in world record news at least) we learned not to count out the 90-94 age group of runners. In July, 91-year-old Diane Hoffman broke a world record for that group by running 400 meters in about 2 minutes and 44 seconds, a period of time accounting for roughly six millionths of a percent of her time on Earth to date. Even more amazing? She only started competitive running at the age of 90!

60. We learned more about the Crypt Keeper wasp this year. The “crypt keeper” wasp is creepy. Being a parasite that lives off of other wasps is already freaky enough, but it also travels through its host’s head. Yuck. And this year, researchers found that the crypt keeper wasp can live off of seven separate gall wasp species, an unusual ability for a parasite.

61. We found out about an ancestor of the Ophiocordyceps species. And speaking of parasites, we already knew about the Ophiocordyceps fungus, which uses ants as its host. Invading an ant’s body, then getting it to climb up on a leaf, allows it to produce many spores. But this year, research revealed that the various Ophiocordyceps species share an ancestor species that infected beetles rather than ants.

62. Certain dyeing poison frogs are targeted more by birds. There are dyeing poison frogs with white stripes and ones with yellow stripes. By placing frog models in French Guiana, scientists this year learned that white-striped frogs were bothered more by bird predators.

63. This year we learned that there are actually three species of electric eel. One of them, the Electrophorus voltai, can create an 860-volt shock, the highest ever recorded from an animal.

64. Yellow-legged gull embryos are listening to their parents. Birds have different noises for different situations, and we now know that while yellow-legged gull parents are communicating about danger, their embryos are paying attention and become restless within their eggs.

65. Squirrels eavesdrop on birds. When birds are making noises that indicate their surroundings are safe and calm, the squirrels become relaxed as well.

66. We learned this year that the typically monogamous convict cichlid fish will mourn a breakup. Female cichlids were given the opportunity to choose a male partner. Some females were then separated from those partners.
The separated females were less likely to open a mysterious box that might or might not contain food, which researchers took as evidence of a more pessimistic post-“breakup” worldview.

67. Mice fidget when they’re focusing. In another lab experiment, neuroscientists discovered that while mice are working on a task, like licking an item when prompted, they fidget more as they focus.

68. Scientists revived cellular functions in a dead pig’s brain. Researchers in 2019 revealed that they managed to restore some cellular functions in a pig’s brain hours after the animal died.

69. We learned this year that Venus flytraps are super sensitive. We’ve known for a while that Venus flytraps have hairs that allow them to sense when an insect is nearby. But research from this year showed that they can sense items that weigh less than a single sesame seed.

70. The idea that there was only one species of Chinese giant salamander was debunked this year. There are actually three species. This means that the Andrias sligoi, or South China giant salamander, is the largest amphibian in the world.

71. AI can now distinguish the faces of chimpanzees. Conservationists hope this might help stop illegal chimpanzee trading.

72. This year we progressed in the search for the Loch Ness monster ... kind of. A team took 250 water samples and discovered a large amount of eel DNA in Loch Ness, which could point to a large eel being the source of the rumors.

73. According to a study involving MRIs of dogs, when people started breeding the animals, they changed dog brains. “Brain anatomy varies across dog breeds and it appears that at least some of this variation is due to selective breeding for particular behaviors like hunting, herding and guarding,” neuroscientist and lead author of the study Erin Hecht told The Washington Post.

74. New species of nematode were found in California’s Mono Lake. Mono Lake in the Sierra Nevada only contained two species of animal, as far as scientists knew, up until this year when they found eight nematode species in the lake, one of which has three sexes: male, female, and hermaphrodite.

75. We learned this year that loons don’t mind parenting a duckling. In northern Wisconsin, a pair of loons was observed looking out for an orphaned duck.

76. The Hebetica sylviae bug was discovered in 2019. The name sylviae came from its unexpected discoverer: two-year-old Sylvie Beckers, who’d overwatered her mom’s flowers. Sylvie's mom was a biology professor, so she was the perfect person to observe the little bugs floating up as a result of the overwatering.

77., 78., 79., 80., and 81. Some important medical discoveries were made this year. In the world of medicine, a particular molecular defect that’s exclusive to patients with Parkinson’s disease was discovered this year, which may help with early detection of the disease. Speaking of disease detection, AI is getting very good at it. A scientific review published in The Lancet Digital Health journal reported that algorithms could correctly diagnose diseases 87 percent of the time versus healthcare professionals who were at 86 percent. These results were valid only in the specific circumstances tested, though, and the methodologies employed may have tilted the results; we’re probably quite a ways off from AI doctors. An entirely new autoimmune disease was discovered this year in a 9-year-old patient. It was a mutation in their genes involving a lack of the protein PI3K gamma. Pinpointing diseases in such a specific way helps personalize treatment.
In research this year, AI was used to examine the cardiac MRIs of 17,000 people. It determined that genes were responsible for 22 to 39 percent of the variation in the left ventricle’s size and function, which is significant. When that ventricle is unable to pump blood, the result is heart failure. Microbiologists discovered a protein that’s integral to the spread of the common cold in the body. It’s known as SETD3 and identifying it might be the first step in a cure for the cold.

82. Before this year, it was believed that hurricanes could only form in wet environments. But new information about atmospheric science has revealed that hurricanes can form in dry, cold places. They wouldn’t do that on Earth today, but other planets might experience dry hurricanes.

83. A study on mouse sleep may shed light on human sleep. In a study on mouse sleep, neuroscientists looked at melanin-concentrating hormone-producing neurons, which they now think might be a cause of the brain forgetting information. The neurons fire most during REM sleep. That may be why we forget most of our dreams.

84. Three of the four Bear Brook murder victims were identified this year. The victims of the Bear Brook murders in New Hampshire had been unknown since 1985, but in 2019 three out of four of them were identified by name. This was partially thanks to paleogeneticist Ed Green, who can recover DNA from hair without a root—a previously impossible task.

85. Another cold case was helped along in 2019, this one dating back to 1997. But it wouldn’t have been solved at all without the help of Google Earth. A man was using it to check out his former house in Florida when he spotted a car submerged in a nearby pond. Sure enough, a deceased man inside had been reported missing more than two decades earlier.

86. A visit was made to the Titanic in 2019.

87. The HMS Terror got a visit for the first time in 2019. Speaking of shipwrecks, the HMS Terror, which sank in northern Canada during the 1840s, got its first-ever visit this year. Marine archaeologists took a look at the wreck and its contents, which included bottles, plates, guns, and chamber pots.

88., 89., and 90. There were some lab-grown breakthroughs this year. Scientists created a gel that can regrow tooth enamel, which was previously impossible. Another amazing lab-grown gel might stop forest fires from spreading. Putting the gel, invented at Stanford, on vegetation will keep it flame-retardant for the entirety of wildfire season. For a third lab-grown breakthrough, we have yeast-produced CBD and THC, which could hopefully be used for medicinal purposes.

91. Very old wooden bowls with traces of cannabis were discovered this year. On that note, thanks to the discovery of wooden bowls containing traces of cannabis this year in China, we learned that people have been using it as a drug for at least around 2500 years.

92. We learned that robots can do gymnastics. Robotics company Boston Dynamics posted a video of their robot Atlas doing tricks like somersaults, leaps, and handstands like a metallic Simone Biles.

93. Early research suggests that plastic tea bags release a ton of microplastic into tea. A research team discovered that a plastic tea bag releases billions of microplastic particles—ranging from 100 nanometers to 5 millimeters—into a cup of tea. More research is needed on this one, though.

94. There was a breakthrough discovery in hair growth.
Meanwhile, a team from the University of Wisconsin-Madison found that electric stimulation in lab rodents can increase hair growth, which led to their invention: a baseball cap that does the same thing for humans, which they are going to test on balding men.

95. A whiskey tongue exists. All sorts of problems are being solved this year! Like when a team in Scotland revealed they’d created an artificial tongue, which can taste and identify types of whiskey.

96. and 97. There were plenty of words added to the dictionary this year. Thanks to Merriam-Webster, we learned that stan is a word in 2019. They added the word to the dictionary with the meaning “an extremely or excessively enthusiastic and devoted fan.” The first known use was in the 2000 Eminem song “Stan.” This year also gave us a few abbreviations that officially count as words: vacay, sesh, and inspo.

98. Scientists estimated the size of the proton this year after a 2010 study cast doubt on the previously accepted measurement. Protons have a radius of about 0.833 femtometers. For the record, a femtometer is one quadrillionth of a meter.

99. We learned that there are self-driving mail trucks. In May, the U.S. Postal Service tested the trucks and their ability to cart mail from Phoenix to Dallas during a two-week project.

100. Drones can be responsible for insulin delivery. More specifically, a drone containing diabetes medicine was flown 11 miles over water from Galway to the Aran Islands in Ireland.
<urn:uuid:877783a3-927e-475b-aa82-fc0aecef53ff>
CC-MAIN-2021-21
https://www.mentalfloss.com/article/613133/things-we-learned-2019
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00097.warc.gz
en
0.966255
6,555
2.921875
3
The ragam Ramakali and the krti “Rama rama kali kalusa” – Part II

Let us move on to the next segment in this series: the authority for using prati madhyamam in this rāgam, it being a janyam of Māyāmālavagaula. It will not be incongruous if the history of this rāgam is explained before taking up the main question.

Rāmakali was a relatively popular rāga during the 16th and 17th centuries, and treatises like the Rāgamañjari of Paṇḍarīka Viṭṭhala, the Hṛdayakautuka and Hṛdayaprakāśa of Hṛdayanārāyaṇadeva, and the Anūpasaṅgītaratnākara of Bhāvabhaṭṭa mention this rāgam. It is the general opinion that these treatises represent the Hindustāni tradition of our classical music, indicating that a rāga with this name was common in the Northern territory. Phrases delineating the rāgam were not given, and we are left with a simple description – GPDS NDPGMGRS – credited to Hṛdayanārāyaṇadēva. It is to be remembered here that many of the earlier treatises belonging to the 16th and 17th centuries do not describe a rāga with illustrative phrases. Hence, we are clueless about the melodic structure of the Rāmakali of this period, excluding the remark that this rāgam drops madhyamam and niṣādam in the ascent and has the above-mentioned phrase. In the treatises mentioned, this is considered a sampūrṇa janyam of a melam equivalent to our present-day Māyāmālavagaula, mēla 15 (See Footnote 1).

This rāga was not catalogued by the musicologists of the Southern territory like Gōvinda Dīkṣita, Śāhāji or Tulajā. The Appendix to the Caturdaṇḍi Prakāśikā, published by the Music Academy, mentions this rāgam, describing it as a dēsīya rāgam and a bhāṣāṅga janyam of Māyāmālavagaula, the mēla 15. Subbarāma Dīkṣitar further elaborates and illustrates the various phrases used in this rāgam. Hence, a complete picture of this rāgam as we see it today in the uruppaḍi-s mentioned is obtained only from Subbarāma Dīkṣitar’s treatise Saṅgīta Sampradāya Pradarśinī. Here, he gives some hitherto unknown points: that it is customary to use prati madhyamam, that the rāgam is also called by the name Pibhās, and that it is a rāgam imported from the North of this country.

From the above discussion it is clear that Rāmakali was a sampūrṇa rāgam popular in the Northern territory of this country. The description of this rāgam is very scanty in the earlier treatises; as we move down the timeline, we see it described by a single phrase, GPDS NDPGMGRS. This rāgam went totally unnoticed by the major musicologists of the Southern territory, and a complete description of it is seen for the first time only in the year 1904, thanks to Subbarāma Dīkṣitar.

Bibhās could have been a much more popular rāga than Rāmakali in both the territories, North and South. Almost every other book seems to mention this rāgam. When the rāga structure was analyzed with the available phrases, there seem to have been two Bibhās-es: one a janyam of the mēla corresponding to the present-day Kāmavardhani, mēla 51, and the other corresponding to the present-day mēla 15.

Bibhās as a janyam of mēla 51

Bibhās, aka Bibhāsu aka Vibhās aka Vibhāsa, is mentioned in Rāga mañjari and Rāga mālā (both by Paṇḍarīka Viṭṭhala), Saṅgīta Pārijāta, Rāga Tattva Vibhōdha, and Anūpa Saṅgīta Vilāsa. Of these, descriptive elements are given in Rāga Tattva Vibhōdha and Saṅgīta Pārijāta. The phrase MGRGRS is stressed in Rāga Tattva Vibhōdha, whereas it is not seen in the description available in Pārijāta. Excluding these small differences, the overall visualization of Bibhās is similar in both the treatises.
In both treatises, SRGPDS and the phrases involving GPD are given more prominence. Though DND is seen, the phrase DNS is avoided completely. Needless to say, the madhyamam employed here is of the tīvra variety, and this can be called the 'prati madhyama Bibhās'.

Bibhās as a janyam of mēla 15

Texts like the Hṛdayakautuka and Hṛdayaprakāśa of Hṛdayanārāyaṇadeva, the Rāga lakṣaṇamu of Śāhāji and the Saṅgīta Sārāmṛta of Tulajā consider Bibhās a janyam of mēla 15 (śuddha madhyama Bibhās). Considerable differences exist in the descriptions across these treatises, to the extent that they deserve individual treatment. Also, all these treatises give just one or two phrases to illustrate the rāgam. The Bibhās in Hṛdayakautuka, Rāga lakṣaṇamu and Saṅgīta Sārāmṛta has the phrase DNS and is considerably different from the Bibhās seen in the earlier section. Hence, the Bibhās mentioned by later lakṣaṇakāra-s like Śāhāji and Tulajā is totally different from the Bibhās prevalent during the late 16th and early 17th centuries CE (the prati madhyama Bibhās). The Bibhās in Hṛdayaprakāśa omits gāndhāram and madhyamam and is different from both the varieties mentioned.

The Appendix to the Caturdaṇḍi Prakāśikā mentions both Rāmakali and Bibhās and places both under mēla 15; whereas the former is credited with a ślokam, the latter is not even described.

Rāmakali as described in Saṅgīta Sampradāya Pradarśinī

A more descriptive picture of this rāgam comes only from Subbarāma Dīkṣitar. As mentioned earlier, despite its being a janyam of mēla 15, it traditionally uses prati madhyamam, says Subbarāma Dīkṣitar. We do not have adequate evidence from textual or oral tradition either to understand the melodic structure of the Rāmakali extant during the days of Subbarāma Dīkṣitar or to compare that Rāmakali with the one described in the treatises. If we consider the single available phrase GPDS NDPGMGRS (refer to the description of Rāmakali in the earlier treatises, mentioned elsewhere in this article), GPDS is the recurrent motif seen in all the compositions. We do not find NDPGM; we do find NDPmG at one place, where the madhyamam occurs more like an anusvaram. Though we are unable to say conclusively that the Rāmakali mentioned here is the same as the one mentioned by Hṛdayanārāyaṇadēva, we can say at least that the phrase given there fits very well into Hṛdayanārāyaṇadēva's description. More importantly, the Rāmakali of Dīkṣitar goes very well with the Bibhāsu of the first type (the type employing prati madhyamam), provided we accept the madhyamam as tīvra.

Presumptions from the above discussion

The following presumptions can be made regarding Rāmakali and Bibhās:
- Rāmakali is a rāgam of great antiquity and must have been popular in the Northern part of our country, as a few treatises make a note of it. It should have been a śuddha madhyama rāgam. It is very difficult to understand the melodic structure of this rāgam from the single phrase available.
- At the same time, there existed Bibhās, of almost similar structure but featuring prati madhyamam. This should have been much more popular than Rāmakali, as every other treatise makes a note of it.
- The melodic structure of the prati madhyama Bibhās is almost the same as that of the Rāmakali mentioned by Subbarāma Dīkṣitar. This prati madhyama Bibhās could be what is alluded to in the Rāmakali section by Subbarāma Dīkṣitar.
It is emphasized here that Rāmakali and the prati madhyama Bibhās were mentioned only in the treatises treating Hindustāni rāga-s, and the first Karnāṭaka music text referring to these rāga-s is the Anubandham, or Appendix, to the Caturdaṇḍi Prakāśikā published by the Music Academy. This could have been a period when all three rāga-s co-existed: Rāmakali, a janyam of mēla 15, and the two Bibhās-es. It can be hypothesized here that Rāmakali was practiced only in North India, and that the prati madhyama Bibhās could have been referred to as Rāmakali by Vēṅkaṭamakhin in the South (he is specifically mentioned considering the link given by Subbarāma Dīkṣitar). Considering the old nomenclature, Subbarāma Dīkṣitar gives the additional information that Rāmakali is also called Bibhās (see Footnote 2).

Subbarāma Dīkṣitar mentions more than once that Vēṅkaṭamakhin authored another text dealing with rāga lakṣaṇa. If we go by his words, that missing text could have been composed in the second half of the 17th century CE, the time when the majority of the texts mentioning prati madhyama Bibhās were composed. Either Rāmakali or prati madhyama Bibhās, or both, could have been mentioned there. Śuddha madhyama Bibhās could have flourished earlier, or been more popular than its prati madhyama counterpart, in the South. This tradition later continued to Śāhāji and Tulajā, who mention only the śuddha madhyama Bibhās. Not all the lakṣaṇa grantha-s are comprehensive in cataloguing the rāga-s prevalent during their time; hence Rāmakali could have been missed by Śāhāji and Tulajā (see Footnote 3).

Having seen the history and lakṣaṇa of Rāmakali and its ally Bibhās with its variations, an attempt will now be made to address the question of the use of prati madhyamam in this rāgam. As mentioned, the only evidence for using prati madhyamam comes from Subbarāma Dīkṣitar. The problem in the use of this svaram arises not because this rāgam is considered a janyam of mēla 15, but only due to the lack of a symbol denoting this svaram in the notation provided. Rāmakali, being a janyam of mēla 15, has śuddha madhyamam as its default svaram, and the anya svaram, prati madhyamam, is to be denoted with a symbol. This is the system followed by Dīkṣitar in his treatise for every other rāgam having an anya svaram. Strangely, despite using a symbol to denote this anya svaram (prati madhyamam) in the section wherein this rāgam is described, the notation totally lacks this symbol. A question therefore arises on the use of prati madhyamam, and we are left with three interpretations: to use śuddha madhyamam throughout, as this is a janyam of Māyāmālavagaula (which takes only śuddha madhyamam); to use prati madhyamam alone, as he says it is customary to use only prati madhyamam; or to use prati madhyamam only in the phrases DMPG, DPMG, MGDPMG and DPMG, as he has inserted the symbol denoting prati madhyamam for these phrases in the section explaining the rāga lakṣaṇam.

To get an answer to this question we need to look into two aspects: the history of this rāgam, and the observations and interpretations we get by analyzing this same text. The history of this rāgam has been explained and will be recalled later. Now we will look for the evidence and observations from the text itself. This kind of discrepancy between the lakṣaṇa section (the section explaining the rāga lakṣaṇam) and the lakṣya segment (the section giving the kṛti-s in notation) exists at various other places in this same text.
When a discrepancy is seen between any two segments – for example, a difference in the assignment of foreign notes between the lakṣaṇa segment and the lakṣya section – do we have to take it as a printing error, or, considering the painstaking scrutinizing procedures followed and the various methods adopted to overcome such errors, are these to be taken as the actual ideas of Subbarāma Dīkṣitar himself? (Readers are requested to refer to the tappōppalu and porabāṭalu section explained elsewhere in this article.)

Considering the above discussion, a student who tries to interpret this treatise is left with two options. The first is to believe these discrepancies to be inadvertent errors and try to reconcile them with his level of knowledge and understanding. The second is to accept the text as it is, confidently believing in Subbarāma Dīkṣitar and his sagacious grasp of the subject. Both options are acceptable, as any research is open to interpretation. For this author, the second approach appears to suit well, as we are totally blind to the traditions that prevailed in the past, say around 200 years ago; by following this approach, the individual fancies and inclinations of the researcher are kept to a bare minimum – it is more like an untainted aural reproduction of the visual representation. The second approach is also followed because this author believes the complete text is protected by the two sections mentioned. Errors could have crept in, but they are unfathomable to us. When interpreting the notations given by Subbarāma Dīkṣitar, it is necessary not only to understand the context, but also to develop an ability to relate other segments or rāgam-s given in this text.

Now, let us move away from Rāmakali for a while and try to understand the lakṣaṇa of the rāga-s Ghanṭā and Sāvēri. Let us see an unseen similarity between these three rāga-s and how it can be used to solve the problem related to the madhyamam.

Ghanṭā is now a bhāṣāṅga janyam of mēla 8 and uses two varieties of ṛṣabham – śuddha and catuśruti (pañcaśruti). Subbarāma Dīkṣitar considers it an upāṅga janyam of mēla 20 and recommends the use of catuśruti ṛṣabham only. He clearly says the use of śuddha ṛṣabham came into practice only after the demise of Vēṅkaṭamakhin. Though he gives phrases in the lakṣaṇa section wherein śuddha ṛṣabham is used, he never gives the symbol denoting śuddha ṛṣabham in the notation (the lakṣya section). This is exactly parallel to Rāmakali, wherein he says in the lakṣaṇa section that only prati madhyamam is used as per tradition, but fails to use the symbol for prati madhyamam in the notation. This is a clear indication that this text is filled with many abstruse details, and such disparities cannot be dismissed or neglected as printing errors for lack of understanding on our side.

To understand more, let us look at the rāgam Sāvēri. Sāvēri is placed under mēla 15 as a bhāṣāṅga janyam, implying that it takes some anya svaram. Going by the normal rules, it should take antara gāndhāram and kākali niṣādham. Subbarāma Dīkṣitar says the common gāndhāram and niṣādham seen in this rāgam are sādhāraṇa and kaiśiki respectively, and that he will give symbols only for antara gāndhāram and kākali niṣādham whenever they occur (the svagīya, or default, svara-s of this rāgam). This is the only rāgam in the entire treatise wherein symbols are given for the default svaram (see Footnote 4).
This pattern is followed because, if the anya svara-s of this rāgam – namely sādhāraṇa gāndhāram and kaiśiki niṣādham – had to be marked, the entire notation would be filled with these symbols, as the default antara gāndhāram and kākali niṣādham occur very rarely. That would not only be cumbersome but could also have posed difficulties in printing.

Let us go back to Ghanṭā and Rāmakali. Though the discrepancies seen between the lakṣaṇa and lakṣya sections are similar in both rāga-s, as pointed out earlier, the two rāga-s are introduced differently by Subbarāma Dīkṣitar, and this is vital for understanding each rāgam and for employing the particular svaram in question – ṛṣabham and madhyamam respectively. Whereas Ghanṭā is introduced as an upāṅga rāgam, Rāmakali is introduced as a bhāṣāṅga rāgam. If Rāmakali were to be used only with śuddha madhyamam, the use of prati madhyamam having been introduced later, he could have tagged it as an upāṅga rāgam like Ghanṭā. Hence we can surmise that either prati madhyamam alone is to be used, or a combination of śuddha and prati madhyamam can be used.

Rāmakali shares similarities with Sāvēri, and the method adopted for the latter for marking the anya svaram is followed for the former. It is reiterated that, though the default svaram is śuddha madhyamam, it was the tradition to use only prati madhyamam. If this rāgam had both madhyamams, with the anya svaram prati madhyamam as the preponderant svaram, he would have marked the default śuddha madhyamam with a symbol and given us a note (compare this with Sāvēri). But this rāgam, unlike Sāvēri, does not use its svagīya svaram – śuddha madhyamam – and hence he did not mark the madhyamam with any symbol in the lakṣya section, allowing us to interpret that this rāgam can be, or is to be, sung with prati madhyamam only.

This finding can now be related to the history of this rāgam. We have seen that Rāmakali was a rāgam of the North, and its being recorded in only a few treatises indicates its limited popularity. We have also seen that the Rāmakali given in Saṅgīta Sampradāya Pradarśinī resembles the prati madhyama Bibhās. This Bibhās could have been called Rāmakali in the South. So the Rāmakali of the North, a śuddha madhyamam rāgam, though similar to the prati madhyama Bibhās, is different from the Rāmakali mentioned by Subbarāma Dīkṣitar; the Rāmakali described by Subbarāma Dīkṣitar is similar to, or could have been the same as, the prati madhyama Bibhās. Hence Subbarāma Dīkṣitar gives the disclaimer that it is customary to sing this with prati madhyamam. An explanation is now invited for placing this rāgam under mēla 15. No conclusive explanation can be given until we get the treatise referred to by Subbarāma Dīkṣitar. However, this could be a rāgam, similar to Dhanyāsi, that was allocated to a different mēla than the one where it ought to be.

Rāgamālika passages in the anubandham of Saṅgīta Sampradāya Pradarśinī

Apart from the kṛti discussed, we see this rāgam in two rāgamālika-s, "sāmaja gamana" and "nātakādi vidyāla", composed by Rāmasvāmy Dīkṣitar. We face a different problem with the Rāmakali lakṣya in these rāgamālika passages: in both segments, the prati madhyamam symbol is inserted, but at only one place.

Rāmakali passage in the rāgamālika "nātakādi vidyāla"

This rāgam comes at the end of the composition. Madhyamam is utilized in the following phrases – DPMG, DPPMG, MGPD and GMGRS. The first phrase is the most common of all. The madhyamam in all these phrases is of the śuddha variety only; prati madhyamam is seen once, in the phrase DPMG.
Interestingly, we see a new phrase, SNSRS. The phrase SNS is not at all seen in the uruppaḍi-s featured under the Rāmakali section. Also, the glide from avarōhaṇam to ārōhaṇam is always through the phrase DPMG in this rāgamālika; phrases like PDM and DMPG (which are there in the Rāmakali section given in the main text) were not used. Now, can we hypothesize that the Rāmakali seen in this rāgamālika is different from the Rāmakali described in the main text? If so, can this be the Rāmakali of the North, a janyam of mēla 15? We have seen before that the Rāmakali of the North much resembles the prati madhyama Bibhās except in having śuddha madhyamam (as the dominant svaram). So the difference between these two rāga-s could have been the presence of the phrase SNS and the absence of the phrases PDM and DMPG in the Rāmakali of the North. This Rāmakali might also have had prati madhyamam as an anya svaram. To distinguish this Rāmakali (of the North) from the other Rāmakali (the prati madhyama Bibhās), Subbarāma Dīkṣitar might have given the additional information that the Rāmakali given by him is also called Bibhās. This hypothesis can be confirmed only if we get the text referred to by Subbarāma Dīkṣitar, or any other treatise or references taking us to the period between the 16th and 17th centuries CE.

Rāmakali passage in the rāgamālika "sāmaja gamana"

The Rāmakali passage in this rāgamālika is too short to draw any conclusions. The passage starts with the phrase MGGP, where the madhyamam is of the tīvra variety. Madhyamam occurs in two other phrases – DPMG and DPPMG – wherein it is of the śuddha variety. Though the phrase SNS is not seen, it is to be noted that DPMG is the only linking phrase between avarōhaṇam and ārōhaṇam.

With the present level of understanding, these are recondite findings, and we need to search for more evidence. But Rāmakali employing only prati madhyamam can very well be applied to the kṛti "rāma rāma kali kaluṣa", as both the lakṣaṇa segment and this kṛti were authored by Subbarāma Dīkṣitar. Regarding the use of prati madhyamam in the rāgam Rāmakali before the time of Subbarāma Dīkṣitar, and our hypothesis, we allow the readers to make their own interpretations; this post will be updated if any valuable evidence surfaces. Though we cannot speak of the system in existence before the period of Subbarāma Dīkṣitar, it can be clearly inferred that Subbarāma Dīkṣitar must have had some authentic references for using only prati madhyamam, and he must have used the same in his kṛti "rāma rāma kali kaluṣa". We also hypothesized that the Rāmakali handled by Rāmasvāmy Dīkṣitar could have been the Rāmakali of the North, differing from the Rāmakali mentioned by Subbarāma Dīkṣitar in the main text.

This kṛti, rendered with only prati madhyamam, can be heard here.

Link to the first part of this series – http://guruguha.org/blog/2019/02/19/2704/

References

Hema Ramanathan (2004). Rāgalakṣaṇa Saṅgraha (Collection of Rāga Descriptions) from Treatises on Music of the Mēla Period, with Translations and Notes.

Subbarāma Dīkṣitar (1904). Saṅgītasampradāyapradarśinī. Vidyavilasini Press.

Footnotes

1. From the days of Rāmamāṭya (or Vidyāraṇya), the mēla system has been in use, and the number of mēla-s varies across treatises, as does the name of the rāga heading each clan. Hence, in this post, whenever the older mēla-s are mentioned, they are referred to not by their names or numbers, but are equated with the present mēlakarta numbers for easier understanding.
2. Subbarāma Dīkṣitar mentions that Vēṅkaṭamakhin authored a separate text on rāga lakṣaṇam. We have no clue about that work, and musicologists are of the opinion that the Anubandham to the Caturdaṇḍi Prakāśikā might be that work, as it contains the same rāga lakṣaṇa śloka-s mentioned by Dīkṣitar in his treatise. They also believe Dīkṣitar could have been referring to Muddu Vēṅkaṭamakhin, a descendant of Vēṅkaṭamakhin, whenever he mentions a text on rāga lakṣaṇam. This author has a different opinion and follows the idea of Subbarāma Dīkṣitar; it will be taken up in a separate post.
3. Not all the treatises are comprehensive in cataloguing the rāga-s of their period. We do have evidence that rāga-s like Bēgaḍa, Aṭāṇa and Suraṭi were used by Śāhāji, but they are not mentioned in his treatise!
4. When a rāgam takes a svaram which is foreign to its parent scale, that foreign svaram is considered 'anya'. Subbarāma Dīkṣitar always marks this anya svaram with a symbol. Only for the rāgam Sāvēri are the svara-s inherent to the rāgam denoted with a symbol.
Welcome to Part II of this free, user-structured writing course. The overarching course theory is provided in Part 0. To summarise, writing is a challenge akin to mountain climbing. You can improve your writing by contemplating certain mindsets, and playing their characteristics against the metamorphic/metaphoric mountain. In Part I, we focused on the first row of the table. Now, we turn our focus to row two: Incline, Promethean poet, Unpack.

The writer's incline

In the mountaineer model of writing, the incline is the stage of putting pen to paper (or fingertips to keyboard). It is often the most difficult meta-obstacle to overcome (although personally I struggle more with the chasm – more on that in Part III). The writer's incline is what numerous 'how to write' guides focus on. In the current model, the writer should unpack all of her ideas, scrutinise them, and arrange them into a working corpus of new knowledge. By analogy, when the mountaineer stands before an unforgiving incline, it would be foolish to plough ahead anyway. The sensible thing for him to do would be to unpack his supplies, and begin plotting secure campsites at strategic points of the incline. In both the writing and the mountain-climbing sense, the process of unpacking and repacking is aided by the Promethean poet mindset.

The writer's mindset

According to Greek mythology, Prometheus formed humankind's single-origin ancestor out of clay, from the bones up. The early Christians in Rome drew on such Promethean imagery to explain the creation of Adam, the first man as accounted in the book of Genesis. I theorise that the act of writing is Promethean by nature. That is, you need to build a solid skeleton before you can 'flesh out' your writing (add flesh and skin to bare bones). Writing is also Promethean to the extent that creation cannot be explained in objective terms. We can't analyse authors' intangible thoughts and inspirations to predict the genesis of subsequent texts. What you can do, though, is familiarise yourself with the anatomy of your own writing, so that you can better apply your own strengths in future writing projects. This is the essential idea behind the character I call the Promethean poet. In the mountain-climbing sense, she unpacks her equipment and plots campsites upon the uncertain, ascending paths before her. In the writing sense, she unpacks her ideas and plots new knowledge upon the blank, increasing pages before her.

The form of the monster on whom I had bestowed existence was for ever before my eyes, and I raved incessantly concerning him. — Mary Shelley, 'Frankenstein; or, The Modern Prometheus', 1818.

The Promethean poet's mission is to unpack her ideas, and use these to build a skeleton. The schematic below shows my basic model for how to do this. The model consists of two broad phases (Rows 1 and 2), each with three sub-phases (Boxes 1-3 for each row). Information flows into (up arrows) and out of (down arrows) each sub-phase. Please don't be concerned if you're not interested in scientific writing – although I use sciencey terms below, there's no reason why my method of unpacking ideas can't be generalised to non-scientific writing. If you're writing a reflective art or philosophical piece, you may interpret my use of the word 'aim' to mean 'argument', and 'hypothesis' could mean 'expectation', etc. Tomato, potato. Essentially, in the first phase of building a skeleton, we want to take guiding information (e.g.
a draft document that summarises key papers on your topic). In phase two, we want to move from summarised knowledge to new knowledge (e.g. new hypotheses that will inform a 'gap' in the research). I will unpack each sub-phase below.

Before you begin gathering bones for your skeleton, get yourself a research diary. This can take the form of anything you wish – it can be as simple as a 99¢ school-style notebook. In my experience, maintaining a research diary is fundamental practice, because you can never know the significance of ideas until you've unpacked them. Sometimes, you have to draw them out of your kit (i.e. your brain) and let them dry for a while. For example – legal disclaimer: this example is purely a joke – let's say you write down:

<joke>
Entry #001, 11/11/18
Question: Nobody has ever tested car brakes via a double-blind, randomised control trial experiment. Given this research gap, how can we know that brakes save lives?
Materials: Fueled and serviced cars, with or without fitted brakes, matched for make, model, and year of manufacture. A 200 m straight, even-surfaced roadway, with a bright yellow line marked at 190 m, and a double-brick wall at 200 m.
Participants: Professional drivers, randomly allocated to drive a car with or without brakes. Neither the participants nor the research team will know which cars have (or don't have) brakes. Participants will be instructed to drive at a constant speed of 100 km/h, then apply the brakes the moment they cross the yellow line.
I.V.: Brakes. Car fitted with brakes, B1, vs. car not fitted with brakes, B0.
D.V.: Number of fatal crashes, FC.
Prediction: Because this is the first true experiment of car brakes, we have no prior evidence to suggest testing anything but the null hypothesis, that the number of fatal crashes does not depend on brakes: H0: FC(B1) = FC(B0).

You may not think of that idea again for another five years, until someone in the pub exclaims, "I reckon car brakes end more lives than they save!", to which someone else retorts, "Rubbish!" You tend to side with the opposition on this one, remembering the countless times you'd applied the brakes in your car without dying, but how do you know who is correct generally? It takes you a while to locate your thoughts on this, but because you've diarised them, you are able to return to an idea about how you could settle this argument empirically.</joke>

Keeping a research diary is important for a few other reasons too:
- It gives you the opportunity to trial and practice unpacking your ideas, before others can judge, influence, snicker or sneer at them
- You can copyright your ideas, before others can take credit for them
- It may allow you to forecast methodological or logistical challenges, before you get caught up in them

Important things you can immediately start recording (and revising) in your research diary include:
- Date. When are you making this record?
- Topic. What do you want to teach others about?
- Question. What do you want to answer?
- Aim. What are you trying to do?
- Purpose. Why is this important? Who cares?
- Hypotheses. What do you expect to find?

It's natural to start out vague and fine-tune as you learn more about your topic.

1.1. Strategise and Search

The input at this initial sub-phase is guiding information (see Box 1.1 in the above schematic). This can be almost anything – an assessment sheet, lecture notes, something you dreamt last night, etc. Devise a search strategy from your guiding information, to find papers on each measure of interest.
Rough-mapping key concepts and their relationships via a boxes-and-arrows drawing is a good start – you can do this in your research diary. This will help you to narrow your searches' inclusion and exclusion criteria, which you should also commit to paper. Think about which databases to search in – unless you have infinite time, you'll want to limit this to just two databases that have minimal overlap (e.g. Google Scholar and PLoS ONE). With coarse parameters now set (e.g. database selection, inclusion and exclusion criteria), the next step is to devise a search string. This will likely involve some trial and error, depending on how many results each search brings you, and the ratio of 'hits' to 'false alarms' among them.

Let's say we want to search Google Scholar, and we have a list of only two key terms: 'ego' and 'narcissism'. There are numerous ways we can combine these into a search string. Searching for articles that must contain both terms yields 78,200 results. Searching for articles about narcissism that don't mention ego increases this number slightly, resulting in 1.21 times as many articles. Treating the terms as synonymous and searching for either or both brings back about 21 times the number of articles that must contain both. Finally, searching for ego while excluding narcissism returns about 29 times the results of the original search! The numbers seem illogical to me, but I double-checked that that's just how they come out. The point is, there are many ways to combine only a few terms into a search string, and the choice can have a large bearing on the outcome.

In your research diary, for every search you perform, record the following:
- Time. When did you perform the search?
- Database. Where did you search?
- Parameters. What are the limits of your search? (e.g. "only peer-reviewed, randomised control trial experiments, published between 1980 and now, with at least 10 citations, etc.")
- Strings. E.g.:
  - Title:"narcissism" AND Abstract:("ego" OR "self-esteem")
  - Filetype:pdf AND Site:journals.plos.org AND "narcissism"
  - Subject:("ego and self-esteem" AND "narciss*")
- Results. E.g. how many results each string returned, and roughly what proportion looked relevant.

In summary, at the strategise and search stage, your goal is to take guiding information (e.g. your professor's essay question) and systematically turn it into filtered information (e.g. a systematically identified area of focus, being sure to write down instructions on how you came to this area). For more information on how to use search operators (e.g. AND, OR, NOT), modifiers (e.g. *, ?, "", ()), fields (e.g. title, abstract, body) and limiters (e.g. filetype:pdf, site:*.plos.org), please check out my textbook chapter: Gaetano, J., et al. (2013). Appendix E: Searching psychology databases. In D. A. Bernstein et al. (Eds.), Psychology: An International Discipline in Context: Australian and New Zealand edition (1st ed., pp. E1-E15). South Melbourne, VIC: Cengage Learning.
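As an aside on the arithmetic of those result counts, a little set logic helps. The sketch below is my addition, not part of the original course, and the term sets and numbers are invented for illustration. If each search term is treated as the set of articles containing it, then AND is an intersection, OR is a union, NOT is a set difference, and for any fixed corpus the OR count must equal the AND count plus the two exclusive counts:

# Python: a toy corpus where each set holds hypothetical article IDs.
ego = set(range(0, 700))            # 700 made-up articles mentioning "ego"
narcissism = set(range(500, 900))   # 400 made-up articles mentioning "narcissism"

both = ego & narcissism             # AND: must contain both terms
either = ego | narcissism           # OR: either or both terms
narc_only = narcissism - ego        # "narcissism NOT ego"
ego_only = ego - narcissism         # "ego NOT narcissism"

# In any consistent corpus, OR equals the sum of the three disjoint parts.
assert len(either) == len(both) + len(narc_only) + len(ego_only)
print(len(both), len(narc_only), len(ego_only), len(either))  # 200 200 500 900

Checked against the Google Scholar figures quoted above (78,200 for AND, and roughly 1.21, 21 and 29 times that for the other three searches), the identity fails by a wide margin: 1 + 1.21 + 29 is about 31 times the AND count, yet the OR search returned only about 21 times it. Scholar's hit counts appear to be rough estimates rather than exact tallies, which is one reason the numbers can 'seem illogical'; treat them as a guide for tuning strings, not as data.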
1.2. Scan and Shortlist

In the scan and shortlist sub-phase, we start by scanning the filtered information for relevance. Note the word 'scan' as opposed to 'read'. If you've devised a good search string that has found you 100 articles, you probably won't have time to read even a quarter of them in any depth. What you need to do is scan the list of search results. Scroll down each page of results, and skim-read the titles. Some titles might 'leap out' for some reason, in which case you may wish to progress to reading and taking notes from their abstract. You should get a sense of roughly what proportion of database search results are potentially useful, and which ones are clearly not on topic. If you end up with too many of the latter, then you may want to return to sub-phase 1.1 and revise your search strategy accordingly.

In your research diary, jot down any search results that stand out as especially important or informative. What are their key words? Do they cite each other? What years were they published, and how many times have they been cited? What do their titles and abstracts say about their key findings? Note that no deep reading has actually been done at this stage.

Now we move to the shortlist stage of the unpacking process. You might have 100 articles, but of those, you deem 25 not relevant judging by their titles. Of the 75 potentially useful articles, 25 seem to make a larger impression than the rest. The goal of the scan and shortlist stages is to take filtered information and turn it into selected information. In your research diary, summarise the citation information of the shortlisted papers, so that you can easily find this information later. Which are historically important? Which are new, but fundamental? Which are reviews that give a neat overview of the measures? Personally, with my diary, I find colour- and symbol-coding works for me (e.g. ♦ = new/important; ∇ = historical/important; ⊕ = good review; etc.)

1.3. Summarise

Input at this sub-phase is the information you've selected systematically via the previous steps (see Box 1.3 in the above schematic). Your job at this juncture is fairly mundane: summarise the key points from each shortlisted paper. Just a few dot points based on their abstracts will do in most cases. Then, summarise your overall understanding of the story that all these shortlisted papers seem to agree on. The output of this sub-phase is summarised knowledge. This is also the output of the whole first phase of the Promethean unpacking method. To recap, we started with guiding information, and unpacked this systematically to distill what I am calling summarised knowledge. We're about halfway to building a skeleton!

In the second phase of skeleton building, the Promethean poet takes the gradually distilled, summarised knowledge and turns it into new knowledge. Phase two is also gradual, with three discrete sub-phases.

2.1. Synthesise

The function of the synthesise sub-phase is to apply your summary of the readings to your specific research topic. Looking at your notes, where are the gaps in the story? For instance, you might have found plenty of papers linking 'human visual sex perception' and 'human faces', but not a single paper linking 'human visual sex perception' with 'human hands'. That's a research gap, because you know from some other papers that male and female hands do in fact differ in ways that should be perceptible.

2.2. Scrutinise

Picking up from the previous stage, we now have some applied knowledge to scrutinise (Box 2.2 in the schematic). To perform this step, you can start by listing your research question, aim, and statement of hypotheses as they were originally either (a) listed in your research diary, or (b) proposed to your institution's ethics committee. Look at the list, and ask yourself if anything needs tweaking in light of what you now know about the story under development. The scrutinise sub-phase is about testing whether what you have learned still relates to what you initially set out to learn.
This is an important sanity check because, paradoxically, the closer you get to a topic, the easier it is to wander off topic. Illustrating my point, I may begin researching toxoplasmosis, only to arrive at a narrative that focuses more on why kittens are so cute. Your whole text could be tangential to its original purpose, and sometimes the only way to notice is to drop everything and return to square one. This is one of the reasons a diary is so important; it allows you to scrutinise ideas, hopefully before they creep off course. By this stage in my method, because you've kept a diary, there is no risk associated with exploring a tangent further – or scrapping it and returning to a previous sub-phase. Whatever you do, your efforts should be directed to establishing a compromise between guiding information (e.g. an assignment sheet, your original aim and statement of the problem) and revised knowledge (i.e. what you have learned about the problem and want your audience to understand).

Finally, we get to the stage of connecting all the bones into a skeleton (Box 2.3). The general process is to take revised knowledge and shape it into new knowledge. You are fashioning the most relevant and interesting points you have learned into a form that the naive reader will understand and will want to read. To do this, insert a single line per paragraph, to say what each paragraph will be about. Like Lego, with a finite set of blocks there is an exponentially large number of ways to build; some are more structurally sound than others, and some are more artful than optimal. Beware that some Promethean poets fall into the quagmire of trying to say too much with each paragraph (guilty, as charged). What I advise you do is write a series of temporary headings in ALL CAPS, as placeholders for your paragraphs. Avoid the temptation of conflating two points in the one paragraph, even if they are related. Here is a purely made-up demonstration of what I'm suggesting in broad strokes – and please don't be afraid to experiment:

P1: EVIDENCE FOR EXPECTED, SIG X&Y CORRELATION
P2: EVIDENCE AGAINST EXPECTED, SIG X&Y CORRELATION
P3: EVIDENCE FOR UNEXPECTED, NON SIG Y&Z CORRELATION
P4: EVIDENCE AGAINST UNEXPECTED, NON SIG Y&Z CORRELATION
P5: THEORETICAL SYNTHESIS

Once the bones are in place, you can swap and omit as you please. You can add new bones to support your skeleton as required. When you are finally satisfied with a specific configuration of paragraphs, take a page out of the wise warrior's book: hide the skeleton for a while. Go and play guitar or something. Later, exhume the document with fresh eyes – do you still like it? If not, rearrange the bones, reinter the document, and go and do something else for a while; if so, congratulations, you've unpacked your ideas and are ready to keep climbing!

Summary and next steps

These six sub-phases should get you well on your way – the rest is just adding flesh and skin to the bones. You can expect to spend something like 80% of your time on 20% of the work. The hardest parts of it all are getting started, and being okay with the story changing as you near completion. In Part I of my writing course, the wise warrior mindset was conjured as the essence of preparing for the journey ahead. Now, with the Promethean poet, we are talking about committing our ideas to paper. Nonetheless, at any time, you are free to zoom out from a specific incline and appreciate the whole mountain again.
To illustrate, you may get stuck devising a search string that captures your specific topic. You plan and perform 12 different searches, but they all seem to result in either too many results to skim through, or none at all. At this point, it is a good idea to transform your mindset back to the wise warrior – when you do, your immediate goal becomes recuperation. The great thing about keeping a research diary is that you can take the rest of the day off, then return as the Promethean poet tomorrow – stronger than before, and with a record of exactly where you're up to in the unpacking process.

In summary, the trick to starting to write is to build a skeleton. Once you've done this, you can fill in the flesh and skin. The problem now is the opposite of the one we had before: you know how to write, but how do you stop? How do you know when to? In mountaineering terms, you have unpacked your entire kit, only to face a vast chasm. You can't take everything with you, so you need to be real about what to keep and what to discard into the void below. This is the primary function of the barbaric butcher, which is the mindset featured in Part III of this course.
COVID-19, SYSTEMIC RACISM, RACIALIZATION AND THE LIVES OF BLACK PEOPLE

George J. Sefa Dei and Kathy Lewis, Ontario Institute for Studies in Education, University of Toronto | November 12, 2020

As the Canadian government gears up for a second wave of COVID-19, without a vaccine, this could mean more fatalities and poor health outcomes for many. Though there is not much yet to be said about the origin and treatment of this disease, there are glaring disparities among marginalized groups. People of African descent are yet again on the frontline of the disease's impact. This disease has sent a national and global shockwave, triggering a plethora of financial, economic and health crises. COVID-19's glaring impact continues to unmask heightened risk factors for members of the African, Black, Indigenous and racialized communities. Historically, members of the African and Black communities are disproportionately affected. There are multiple underlying predispositions, such as overcrowded correctional facilities, underlying health conditions, and inadequate access to healthcare, food, housing, employment, and employment safety. These disproportionate outcomes are yet another marker of those who remain underserved and ignored. This paper presents not only a glimpse of the devastation, but also the urgent need to act now. We provide some recommendations to address the myriad threats and devastations to our communities caused by COVID-19.

COVID-19 has shown us that it knows more about ourselves as a society than we admit. The disproportionate effect of COVID-19 on Black, Indigenous and racialized populations is revealing in many ways. The great urgency for change is laid bare at the tentacles of anti-Black racism. The Johns Hopkins University Coronavirus Resource Center reports that, as of October 18, 2020, there have been more than 1.1 million deaths worldwide; the U.S. has the highest death toll at nearly 220,000, and Brazil the second highest at more than 153,000. One in 1,125 Black Americans has died (or 88.4 deaths per 100,000), 1 in 1,375 Indigenous Americans has died (or 73.2 deaths per 100,000), and 1 in 2,450 White Americans has died (or 40.4 deaths per 100,000), according to APM Research Lab. Meanwhile, in Canada, the federal government does not collect race-based data, and only recently have some provinces, such as Ontario, started to do so. Black community groups, such as the Alliance for Healthier Communities, have been rallying for race-based data collection in order to disaggregate the data, not only for accountability but for the health and safety of marginalized people. It is only by clusters of COVID-19 in highly concentrated marginalized communities that we can deduce its impact. Hence, the range of data that we examine is from the U.S. Canada's failure to collect race-based data is symptomatic of an ongoing denial of, and complacency about, anti-Black racism.

For far too long, people of African descent have been feeling the heat of systemic oppression. Undeniably, COVID-19 has taken Black lives at an alarmingly higher rate. For some, it is difficult to remain hopeful in a sea of despair. However, we have had great teachers who have cemented a greater spirit of life, a genetic marker of truth. In this, we breathe hope. A critical examination of COVID-19's impact on Black communities is an act of resistance and subversion despite anger and pain. We write as reclamation of and reparation to self. Hope and healing emanate from the truth of our experiences.
We echo the public's outcry of dismay and distrust. We stand in solidarity with the voices that ricochet from the past, scattered far and wide, whispering sweet sounds of freedom and justice. We stand in solidarity with the voices that continue to demand such. We stand with voices muzzled by dissent and popularity. We are in perennial mourning. In Black and African communities, death has always been an event where community and family come together for support. Though resistance and resilience form a collective spirit of solidarity that binds people of African ancestry in the African Diasporas and on the continent, there are different processes of racialization. Blackness is not monolithic, and neither is oppression. Therefore, it has been heart-wrenching to see the inhumane ways in which people are dying, and how their families are not allowed to be by their side to physically say their goodbyes. Thus, it is difficult for our community not to see these deaths both as something out of Darwin's "survival of the fittest" theory and as an expected outcome if we want economic survival. Yet, when asked, "How many lives must be sacrificed for the growth of the economy?" deflection is the answer, and we hear, "But we cannot shut down the economy for long." Hence, there is a very vocal minority that has deeply resented being "sheltered in place." They want us to get back to normal. However, normal itself has been the problem. We cannot go back to normal because of the deep divisions and inequalities that COVID-19 has revealed in our communities and nations.

COVID-19 shines a light on ongoing major health disparities. In the U.S., for example, the pattern seen for COVID-19, according to Dr. David Williams in an interview with CNN's Global Public Square host Fareed Zakaria (GPS, 2020), is the same pattern seen for every major cause of death of Blacks in the U.S. for more than 100 years. This means that African Americans disproportionately die from conditions such as heart disease, cancer, diabetes, infant mortality, and hypertension, irrespective of COVID-19. Fundamentally, there are economic, social, and epidemiological factors in health surveillance, such as the under-reporting of health conditions and inadequate access to health services, that form a structural and systemic breach in identifying and disseminating information for prevention and care. These social determinants are, evidently, among a wide range of pre-existing conditions and predispositions that increase the risk of COVID-19 morbidity and mortality. However, there are variations in health predisposition and impact based on geographical region. According to Public Health Ontario, "In Canada, Black populations have higher rates of obesity, hypertension and diabetes, as well as difficulty accessing health care, such as access to a regular doctor." Another risk factor is chronic exposure to racism (Public Health Ontario, 2020). How then can we deny the saliency of Blackness in COVID-19 when social determinants, such as race, gender, education and health services, weigh heavily on an individual's health? These mask a predisposed vulnerability of Black and Indigenous people. Dei (2020), referencing the work of Johal (2005), sees Blackness as at times serving as a "pigmentary passport of punishment." Moreover, we cannot overlook the double victimization of Black people in spaces such as prisons and work.
"[Black] [women] are less likely to stop working in high-risk jobs, like caretaking in assisted living facilities, in custodial and clerical work at hospitals, or as cashiers/clerks in grocery stores" (Lindsey, 2020). The criminalization of Black males, as Gilbert et al. (2016) illustrated, has rendered them invisible, particularly in the area of health. The blame for poor health outcomes has been shifted onto Black males rather than onto the historical, social, political, educational and institutional forces that undergird those outcomes, such as de facto segregation and the prison industrial complex. In the U.S., nearly one in three (32 per cent) Black males 20-29 years old is under some form of criminal justice supervision on any given day – either in prison or jail, or on probation or parole. As of 1995, one in 14 (7 per cent) adult Black males was incarcerated in prison or jail on any given day, representing a doubling of this rate from 1985; the 1995 figure for white males was 1 per cent. A Black male born in 1991 has a 29 per cent chance of spending time in prison at some point in his life; the figure for white males is 4 per cent, and for Hispanics, 16 per cent. Forty-nine per cent of prison inmates nationally are African American, compared to their 13 per cent share of the overall population (The Sentencing Project, a non-profit organization).

By design, social distancing in correctional facilities is a challenge. Inmates are an arm's length away, separated by bars. In some facilities, prison guards are not allowed to wear masks, and some prisons do not have enough supplies. Lawrence Bartley, director of News Inside, calls attention to a rapidly changing rate of infection: to date, 9,436 inmates across the U.S. have tested positive. Handwashing is next to impossible; according to Bartley, inmates in one Mississippi facility share a single sink among 60 people. At Rikers Island, the average rate of infection is 10 times that of the general population.

The disproportionate devastation is also seen in the mainstream. In the United Kingdom, the Office for National Statistics published a study, reported in The Guardian, covering deaths in hospitals and in the community between March 2 and May 15; it found that Black men had the highest mortality rate from COVID-19. Among Black men of all ages, the death rate was 256 per 100,000 people, compared with 87 deaths per 100,000 for white men.

Compounding COVID-19's impact on the Black community are police killings of unarmed Black people. George Floyd's death captivated the world stage, prioritizing truth over fear among protestors angered by Floyd's murder, and rightfully so. This surge of global protests marks a "new rhythm, specific to a new generation...with new language and new humanity" (Fanon, 1963). Elsewhere, Africans were targeted for deportation and eviction, with reports of severe beatings, in the streets of Guangzhou, China. Furthermore, "Black Brazilians live, on average, 73 years — three years less than white Brazilians — according to the 2017 National Household Survey. The U.S. has a nearly identical life expectancy gap between races" (Caldwell and de Araújo, 2020), and in Brazil, "people of colour are 62 per cent more likely to die from the virus than whites" (Genot, 2020). Frantz Fanon (1963) admonishes us to examine the compartments of this ordering into two specific parts, the "native world" and the "colonial world," thereby gaining insight into their key features in Black suffering.
Natives, by default, are always on the wrong side of the system's vulnerability. It is important to understand the potency of this vulnerability: it is a false assumption of chance and coincidence. Nothing in the "colonial world" is by chance. Its effects are orchestrated and calculated with an intended target: the marginalized natives. Case in point: elementary teachers in Ontario's Durham District School Board were mandated to provide a letter grade for students' 2019/2020 final report card that reflected assignments completed only between January 2020 and March 2020, disregarding work done before and during remote learning. Compounding the impacts of this policy was the previous work-to-rule campaign stemming from the teachers' strike in Ontario. During this period, only a semblance of a progress report was issued. Consequently, parents had no real sense of how their children had performed in school. It is even more disheartening for those children who, despite the anxiety of being locked indoors staring at a computer screen for hours, were trying to comprehend and synthesize instructions with limited face-to-face human interaction. Despite these extreme variables, they still managed to complete and submit assignments. They now have to deal with the trauma of being told that none of that mattered: what is officially recorded is the period between January 2020 and March 2020, and nothing else. This is a great example of "spirit murdering," as Bettina Love (2019) conceptualizes it. Yes, of course, any student, regardless of race, can fall prey to this seeming technicality. However, this experience is particularly problematic for Black and Indigenous students, who are disproportionately pushed out of the public school system. This process could mark the beginning of their pathologization and medicalization (Dei, 2010). It is the building of a case file that suggests they don't belong. The subliminal effect is that these racialized students are perceived to perform below grade level; therefore, the assumption is that there must be an underlying issue, which usually translates to problems at home, a single-parent household, a learning disability, behavioural issues, etc. What follows is heightened surveillance. Students are monitored for lunch, yet not offered one. They are monitored for aggression but given no critical engagement through culturally relevant instruction. They are closely monitored not for improvement but for defects. What has been described here is not hypothetical but based on students' actual experiences. Denying students' social reality simply means upholding racism in favour of power and privilege.

COVID-19 impacts the global community, revealing deep schisms (Afful-Broni et al., 2020). The fault lines in state and institutional responses, beginning with school systems, national/state governments, policing and law, media, etc., are clear reminders of the urgent need for a new global futurity. There are deep divisions in contemporary society that COVID-19 has exposed. Our institutions are deeply flawed in matters of social justice, equity and fairness. It should not be a daunting task to address these cleavages if we are fully committed to ideals of fairness. We cannot hide from them, as these inequities cannot simply be wished away. We must acknowledge that we are in serious trouble. We are not always the "global community" we claim to be. While much is still unknown about the epidemiology of the virus, some facts are becoming clear about society as a whole.
COVID-19 discriminates and feeds on the weak, disadvantaged, poor, and elderly. In Europe and North America, we see clearly that Black, African, Indigenous and racialized lives are in peril. We are disproportionately on the frontlines as health and social service workers, in the lines of sanitation, food delivery, health care and home care. The high death rate among Black and racialized communities is clear, as is what we need to do about it. Historically, the lack of health care and the racialization and feminization of poverty have put Black and African communities in the most vulnerable situations. We are among those in the most at-risk occupations at this time of COVID-19, and among the least able to afford lockdown and social distancing. Yet we are called upon to make the ultimate sacrifices for wider society. Social/physical distancing has taught some lessons: just watch how the Black body can be avoided on the street as we walk along the same path. We live that experience, and we know what we are talking about. These are hard truths, and no intellectualizing of the truth will make it palatable to the ears.

Recommendations

- Racism is a bigger issue than the COVID-19 pandemic. We recommend a multipronged approach to fighting anti-Black racism in Canada that focuses attention on every sector of society: health, education, law and justice, employment, transportation, housing, etc.
- The pursuit of a national health equity response to the pandemic is significant. This requires doubling educational efforts around anti-Black racism initiatives within all state institutions.
- The government should mandate the collection of race-based data on health and disease throughout all major health networks in Canada.
- The state should commit substantial funding to assist Black and Indigenous communities and populations disproportionately affected by COVID-19.
- Set up a health advisory group on Black and Indigenous health, with funding for research on diabetes, hypertension and heart disease, which disproportionately affect Black and Indigenous peoples.
- Research needs to be supported and directed to learn how Black and Indigenous communities understand COVID-19's impact, and how these communities are teaching their children about health and racism.
- Steps must be put in place for how a future COVID-19 vaccine will be administered, to ensure that Black and Indigenous populations are not further marginalized in its distribution.
- Direct state resources – for example, economic subsidies and mental health and healing supports – must be implemented to alleviate the economic, social and emotional impacts of COVID-19 on Black and Indigenous communities.
- Ensure that Black and Indigenous communities are well represented at the table in high-level planning and policy-making affecting Black and Indigenous community health.
- COVID-19 has also revealed differential access to technology and other educational resources (e.g., school laptops) put in place to mitigate the impact of sheltering on learning. Plans should be devised to ensure that such lessons are learned, and to address the differential impact of access to technology and communication on Black and Indigenous communities.
- Develop strategies to combat the anonymity of intensifying online hate that affects Black and Indigenous communities.

References

Afful-Broni, A., Dei, G. J. S., Anamuah-Mensah, J., & Kolawole, R. (Eds.) (2020). An introduction. In
Africanizing the school curriculum: Promoting an Inclusive, Decolonial Education in African Contexts. Myers Educational Press [in press]. APM Research Lab (August 18, 2020). The Color of Coronavirus: COVID-19 Deaths By Race and Ethnicity. Retrieved from https://www.apmresearchlab.org/covid/deaths-by-race Caldwell, K.L. & de Araújo, E.M. Protesters in São Paulo declare ‘Black Lives Matter’ at a June 7 protest spurred by both U.S. anti-racist protests and the coronavirus’s heavy toll on black Brazilians (June 10, 2020). The Conversation. Retrieved from: https://theconversation.com/covid-19-is-deadlier-for-black-brazilians-a-legacy-of-structural-racism-that-dates-back-to-slavery-139430 Dei, George, Sefa, (2010) “Re-reading Fanon for His Pedagogy and Implications for Schooling and Education”. In Dei, G. and M. Simmons (eds). Fanon & Education: Thinking Through Pedagogical Possibilities. Pp. 1-27. Published by: Peter Lang AG Stable. URL: https://www.jstor.org/stable/42980664 Fanon, F. (1963). The Wretched of the Earth. New York: Grove Press. Genot, L., (May 08, 2020). In Brazil, Coronavirus hits Blacks harder than Whites. Buenos Aires Times, Retrieved from https://www.batimes.com.ar/news/latin-america/in-brazil-coronavirus-hits-blacks-harder-than-whites.phtml Gilbert, K.L., Ray, R., Siddiqi, A., Shetty, S., Baker, E., Elder, K., & Griffith, D. (2016). “Visible and Invisible Trends in Black Men’s Health: Pitfalls and Promises for Addressing Racial, Ethnic, and Gender Inequities in Health”. This article’s doi: 10.1146/annurev-pub health-032315-021556 Johal, G. 2005. “Order in K.O.S.: “On Race, Rage and Method”. In Dei, G.J.S. & Johal, G. (eds). Critical Issues in Anti-racist Research Methodology. New York: Peter Lang. (pp. 269-290). Johns Hopkins University (2020). COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU). Retrieved from https://coronavirus.jhu.edu/map.html Love, B. (2019). We Want to do More Than Survive: Abolitionist Teaching and the Pursuit of Educational Freedom. Beacon Press Lindsey, T. (2020). Why COVID-19 is hitting Black women so hard. Women’s Media Centre. Retrieved from: https://womensmediacenter.com/news-features/why-covid-19-is-hitting-black-women-so-hard Macintyre, N. (June 19, 2020). Black Men in England Three Times more Likely to Die of COVID-19 than White Men. The Guardian. Retrieved from https://www.theguardian.com/society/2020/jun/19/black-men-england-wales-three-times-more-likely-die-covid-19-coronavirus Mauer, M. (1999). The Crisis of the Young African-American Male and the Criminal Justice System. Retrieved from https://www.sentencingproject.org/wp-content/uploads/2016/01/Crisis-of-the-Young-African-American-Male-and-the-Criminal-Justice-System.pdf. Ontario Public Health. (May 24, 2020). COVID-19 – What We Know So Far About… Social Determinants of Health Retrieved From https://www.publichealthontario.ca/-/media/documents/ncov/covid-wwksf/2020/05/what-we-know-social-determinants-health.pdf?la=en Statement from Black Health Leaders on COVID-19’s impact on Black communities in Ontario. (April 2, 2020). Alliance for Healthier Communities. Retrieved from https://www.allianceon.org/news/Statement-Black-Health-Leaders-COVID-19s-impact-Black-Communities-Ontario Sun, Christine (2020). Personal communication to George Dei, as response to ‘CNN: Africans in Guangzhou on edge as coronavirus fears spark anti-foreigner sentiment in China. 
George J. Sefa Dei is Professor of Social Justice Education at the Ontario Institute for Studies in Education of the University of Toronto (OISE/UT). Kathy Lewis is a TDSB secondary school teacher and ACL for Equity, Student Engagement, Achievement and Well-being (OCT, BEd; MEd in Social Justice Education in progress at OISE).
"The great use of life is to spend it for something that will outlast it." --William James

The work of a school leader can be tireless. And to what end? Especially in international contexts, where a school leader's tenure at any given school can be fleeting (Hayden, 2006, p. 103), the need for sustainability in leadership is of paramount importance. How can international school leaders ensure that their long hours of hard work towards school improvement won't turn to dust in the wind at the end of their contracted tenure?

Decentralizing leadership and building a high capacity to lead throughout the school can help to ensure sustainability (Lambert, 2007, p. 312). In a school with high leadership capacity, all stakeholders, including students, teachers, and parents, develop and share responsibility for leadership; information is used to guide inquiry; and institutional growth is guided by a shared vision (Lambert, 2007, p. 313). Removing ownership of leading the school from a single individual and distributing it across the institution and its systems and processes helps to ensure that initiatives can survive the changes in leadership that are so frequent in international schools.

Clearly defined roles and interdependencies are important to ensure that organizational growth initiatives are maintained over the long term (Adelman & Taylor, 2007, p. 61). Ideally, the distribution of roles and responsibilities should support independence and empower teachers as leaders in the school (Lambert, 2007, p. 315). Leaders may act as organization facilitators who train and empower teacher-led change teams to catalyze and actualize change towards the school's vision (Adelman & Taylor, 2007, p. 67). Purposeful near- and long-term action towards change goals can also be facilitated through collaborative action planning (Cawsey, Deszca, & Ingols, 2016, p. 390). Ensuring that plans and change visions are clearly linked to the school vision and outline progress through a series of milestones and intermediate steps can maintain a sense of need and urgency beyond the tenure of any one member of a change group (Cawsey et al., p. 408).

Transitioning an organization from a state of low leadership capacity to a sustainable structure of distributed leadership and structured planning is not a simple task: it involves moving through a number of different stages (Lambert, 2007, p. 315). A forward-thinking board or school head could use the aforementioned action planning tools and practices to create a sustainable leadership development plan to guide the organization through the process of learning and empowerment required to raise leadership capacity throughout the school. Planning ahead for periods of instructive capacity development and collaborative culture building, for transitional periods of dependency breaking and school-wide gap closing, and finally for monitoring and system building can allow a school leader to ensure that the school's leadership structures become increasingly sustainable past their time in the lead. The possibility of choosing middle and senior leaders based on the organization's stage of development provides a powerful lever through which to ensure organizational development.
A school leader could even get an early start in planning for their transition out of an organization by training future leadership from within, or by seeking future candidates from outside the organization based on the needs identified in the organizational long-term sustainability plan. Leaders truly seeking to create sustainable leadership in their school would resign when their skill set no longer served to move the school forward through such a plan, and invite another to bring it to fruition. In this sense, the mobility inherent in the field of international education could serve to support sustainable leadership and consistent progress rather than hinder it.

Adelman, H. S., & Taylor, L. (2007). Systemic change for school improvement. Journal of Educational and Psychological Consultation, 17(1), 55-77.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016). Organizational change: An action-oriented toolkit (3rd ed.) [Google Books edition] (pp. 428-471). Thousand Oaks, CA: Sage. Retrieved from https://play.google.com/books/reader?id=cU0dCAAAQBAJ&pg=GBS.PT459.w.8.0.4
Hayden, M. (2006). Administrators. In Introduction to international education: International schools and their communities (pp. 93-112). London: SAGE.
Lambert, L. (2007). Lasting leadership: Toward sustainable school improvement. Journal of Educational Change, 8(4), 311-322.

"Resistance is part of the job of leadership, it's not an interruption. If you don't have resistance, you're probably not leading." --Andy Hargreaves

An ongoing challenge that change leaders face is how to deal with the inevitable pockets of resistance that threaten to stymie change initiatives and slow progress towards goals. Rather than viewing resistance as a negative, change leaders can find value in an approach that embraces it as an opportunity to make better sense of change and to sort out what actions are required to achieve it (Cawsey, Deszca, & Ingols, 2016, p. 295). Cawsey, Deszca, and Ingols (2016, p. 346) note that truly expert change agents understand that individuals in an organization may have limited capacities and that commitment to change takes effort to build. They describe a number of approaches change leaders can take to build and reinforce commitment, all of which involve engaging in sense- and meaning-making around the sought-after change, either through emotional calls to action in pursuit of a vision or logical explanations of the underlying strategies and systems (Cawsey, Deszca, & Ingols, 2016, pp. 349-350). These actions serve to shape stakeholders' perceptions of the change and assure faculty that efforts are worthwhile and ultimately in their favor (Cawsey, Deszca, & Ingols, 2016, p. 260).

In addition to drumming up support, reflective action and questioning practice are characteristics of effective school leadership (Davidson, 2013, p. 9). Resistance can motivate reflection and a deeper understanding of the reasons for change and the ways to achieve it. In describing sustainable leadership practices, Hargreaves (2007, p. 226) asserts the value of learning from the past and retaining the parts of past practice that have proven effective. Change leaders should avoid 'throwing out the baby with the bathwater,' so to speak, and creatively combine the best of what an organization already is with what it is envisioned to become.
Systems that support successful change encourage sharing concerns, mutual accountability, and learning across all levels of the organization (Fullan, 2006, p. 119; Harris, 2011). Negative reactions to elements of proposed change initiatives can make change leaders aware of issues they didn't initially consider, and engaging in discussion early in the development process can help to address them (Cawsey et al., 2016, p. 295). Change can be a traumatic experience for some and provoke responses similar to grief (Cawsey et al., 2016, p. 302). Change leaders should exercise patience, focus on problem-solving, address resistance as a normal part of the change process, and avoid singling out individuals for blame (Cawsey et al., 2016, p. 298). Approaching change leadership through collaboration and reflection across all levels of the school can help ensure that what might have been seen as 'resistance' is instead viewed as useful critical feedback that strengthens planning and encourages deeper reflection on our goals.

Cawsey, T. F., Deszca, G., & Ingols, C. (2016). Organizational change: An action-oriented toolkit (3rd ed.) [Google Books edition]. Retrieved from https://play.google.com/books/reader?id=cU0dCAAAQBAJ&pg=GBS.PT282.w.6.0.51
Davidson, D. (2013). Preparing principals and developing school leadership associations for the 21st century: Lessons from around the world. Toronto: Ontario Principals Council.
Fullan, M. (2006). The future of educational change: system thinkers in action. Journal of Educational Change, 7(3), 113-122.
Hargreaves, A. (2007). Sustainable leadership and development in education: creating the future, conserving the past. European Journal of Education, 42(2), 223-233.
Harris, A. (2011). System improvement through collective capacity building. Journal of Educational Administration and History, 49(6), 624-636.
International School Leadership. (2014). Uplifting leadership [Online video]. Retrieved from https://www.youtube.com/watch?time_continue=2&v=9V0GaLRmq20

How to drive organizational change in schools is a complex challenge for which there is no single answer: each school has its own formal and informal systems that leaders must navigate and put to work towards moving their agendas forward (Cawsey, Deszca, & Ingols, 2016a, p. 88; Cawsey et al., 2016b, p. 103; Fullan, 2006, p. 9). Formal systems in schools include the hierarchies, departments, roles, tasks, planning and processes that structure and influence what happens in a school and how it happens (Cawsey et al., 2016c, pp. 197-198). Informal systems can be loosely defined as the 'culture' of the school: the shared beliefs, rituals, norms, expectations, and behaviors that provide a sense of identity in the school and are taught to new members (Cawsey et al., 2016d, p. 255; Leo & Wickenberg, 2013, pp. 405-406).

There is no perfect formal system or organizational structure; every school's organizational design presents hurdles to be overcome and challenges related to gaps or overlaps in duties among departments and administrators (Cawsey et al., 2016c, p. 214). Change agents must be aware of the systems and structures in place in their schools and how best to use them to get formal approval to support and legitimize change (Cawsey et al., 2016c, pp. 218-219). Formal structures provide individuals and departments with the capacity to influence others and with resources to support sustained change initiatives (Cawsey et al., 2016d, pp. 251-253).
Change agents should work closely with decision-makers and administrators to develop change plans that relate to the school's vision, balance costs and benefits to multiple stakeholders, and align with budget cycles and other processes to enhance their prospects for approval (Cawsey et al., 2016c, pp. 219-221). In addition to working with the formal systems and structures in their schools, change leaders must also leverage the informal systems and structures embedded in the school's culture to bring change initiatives to fruition (Cawsey et al., 2016a, p. 88). A school's culture is expressed in visible and invisible ways: in the physical appearance of faculty and facilities as well as in the values and norms that are publicly expressed and privately held (Cawsey et al., 2016d, p. 256; Leo & Wickenberg, 2013, p. 406). Differing views on the nature of culture represent it either as an external, objective feature of schools that can be managed or as an internal, subjective construct that varies between individuals (Connolly, James, & Beales, 2011, p. 425). Regardless of the perspective taken, school leaders, as agents of change, should feel empowered to leverage symbols, engage subcultures within and outside the organization, and examine and modify processes to ensure that the values that drive them manifest as artifacts and activities that feed back in positive ways to build cultures supportive of change (Connolly et al., 2011, pp. 431-434; Leo & Wickenberg, 2013, p. 413).

School leaders exert power to effect change in formal school structures, which can in turn affirm further positive changes in informal cultural structures within their organizations. Formal leadership structures can be modified to distribute leadership among faculty, reinforcing initiative and a sense of efficacy among teachers, and physical and time resources can be structured to ensure that teachers have time for collaborative professional development focused on advancing change visions (Leo & Wickenberg, 2013, p. 419). Though different schools have different needs, change leaders benefit from flatter, more decentralized formal structures that support innovation (Cawsey et al., 2016c, p. 211). Recognizing and leveraging the cause-effect feedback loop between systems, both formal and informal, and the faculty they act on and who act upon them is a powerful route to driving change in schools (Cawsey et al., 2016c, p. 198).

Cawsey, T. F., Deszca, G., & Ingols, C. (2016a). Framing for leading the process of organizational change: "How" to lead organizational change. In Organizational change: An action-oriented toolkit (3rd ed.) [ePub] (pp. 67-100). Thousand Oaks, CA: Sage.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016b). Frameworks for diagnosing organizations: "What" to change in an organization. In Organizational change: An action-oriented toolkit (3rd ed.) [eBook] (pp. 101-140). Thousand Oaks, CA: Sage.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016c). Navigating change through formal structures and systems. In Organizational change: An action-oriented toolkit (3rd ed.) (pp. 197-245). Thousand Oaks, CA: Sage.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016d). Navigating organizational politics and culture. In Organizational change: An action-oriented toolkit (3rd ed.) [eBook] (pp. 246-282). Thousand Oaks, CA: Sage.
Leo, U., & Wickenberg, P. (2013). Professional norms in school leadership: Change efforts in implementation of education for sustainable development. Journal of Educational Change, 14, 403-422.
Meaningful change is never easy. How to motivate and sustain positive organizational change is a challenge that all change leaders face, and a question whose answer depends very much upon the context in which the leader operates (Cawsey, Deszca, & Ingols, 2016, p. 103; Fullan, 2006, p. 9). I say 'positive' organizational change because the idea of a static organization is a mirage; organizations are collections of people whose habits and actions change with every interaction and adapt with every new iterative cycle of the processes they enact (Tsoukas & Chia, 2002, p. 567). The challenge change leaders face is reining in this constant change and directing it towards positive ends (Tsoukas & Chia, 2002, p. 567). They must also find means to sustain change, ensuring that the new knowledge and practice resulting from initial drives to change do not dissipate or degrade over time and that new behaviors and practices become embedded in the culture of the school (Hargreaves, 2007, pp. 228-229).

One way to incite motivation to change is through the emergence of a crisis, real or fabricated (Cawsey, Deszca, & Ingols, 2016, p. 161). Leaders should take care when creating narratives that over-amplify crises or create them from whole cloth, as doing so may erode trust in school leadership. A high degree of trust is a precondition of collaborative decision-making and 'bottom-up' visioning, both of which are powerful drivers of buy-in and long-term motivation for sustained change efforts (Cawsey et al., 2016; Tschannen-Moran, 2013, p. 43). Keeping in mind the need for transparency and honesty to support cultures of trust, framing change initiatives with compelling narratives that combine logics and discourses can help bring together stakeholders with different agendas to work towards shared visions (Ball, Maguire, Braun, & Hoskins, 2011, p. 628). While self-interest can produce complacency even in the face of crisis (Cawsey et al., 2016, p. 142), a change initiative driven by a leader who acts as a 'boundary spanner', engaging the ideas and talents of diverse stakeholders towards a shared change vision, can create fluid and connective opportunities that produce workable plans for action, cognitive shifts, reframing of challenges, and democratic, communitarian, and economic theories of shared leadership (Grogan & Shakeshaft, 2011, pp. 120, 125; Gordon & Louis, 2012, p. 349; Starratt, 2008, p. 89).

To maintain momentum for change and keep inertia from slowing or halting work towards change initiatives, leaders should ensure that change visions support and are connected to the broader mission and vision of the school (Cawsey et al., 2016, p. 173). As change processes are enacted, the context of the school will change along with them, necessitating modifications to action plans and requiring actors who are flexible and willing to redefine their positions and responsibilities within the school as novel interactions and new problems intersect the 'fuzzy boundaries' of our definitions of roles and departments (Tsoukas & Chia, 2002). The constant ebb and flow of change can defy simple analysis and categorization, and leaders who are able to 'perceive change' intuitively as well as 'conceive change' in a planning capacity will be well prepared to deal with the challenges of sustaining change over the long term (Tsoukas & Chia, 2002, p. 572).
Additionally, viewing shared visions as broad spaces that allow for varied interpretations by different stakeholders, rather than as one-way streets, can ensure that visions remain shared in spite of faculty turnover or changing conditions (Cawsey et al., 2016, p. 179). No single person can hope to sustain long-term organizational change in a school on their own. Engaging diverse stakeholders in crafting visions for change that are shared and meaningful to all allows leaders to access the strength of the entire school community to drive positive change and improve learning in their organizations.

Ball, S. J., Maguire, M., Braun, A., & Hoskins, K. (2011). Policy actors: doing policy work in schools. Discourse: Studies in the Cultural Politics of Education, 32(4), 625-639.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016). Frameworks for diagnosing organizations: "What" to change in an organization. In Organizational change: An action-oriented toolkit (3rd ed.) [eBook] (pp. 101-140). Thousand Oaks, CA: Sage.
Cawsey, T. F., Deszca, G., & Ingols, C. (2016). Building and energizing the need for change. In Organizational change: An action-oriented toolkit (3rd ed.) [eBook] (pp. 141-196). Thousand Oaks, CA: Sage.
Fullan, M. (2006). Change theory: A force for school improvement. Series Paper No. 157. Victoria, Australia: Center for Strategic Education.
Gordon, M. F., & Louis, K. S. (2012). How to harness family and community energy: The district's role. In M. Grogan (Ed.), The Jossey-Bass reader on educational leadership (3rd ed., pp. 348-371). San Francisco, CA: Jossey-Bass.
Grogan, M., & Shakeshaft, C. (2011). A new way: Diverse collective leadership. In M. Grogan (Ed.), The Jossey-Bass reader on educational leadership (3rd ed., pp. 111-130). San Francisco, CA: Jossey-Bass.
Hargreaves, A. (2007). Sustainable leadership and development in education: creating the future, conserving the past. European Journal of Education, 42(2), 223-233.
Starratt, R. J. (2008). Educational leadership policy standards: ISLLC 2008. In M. Grogan (Ed.), The Jossey-Bass reader on educational leadership (3rd ed., pp. 77-92). San Francisco, CA: Jossey-Bass.
Tschannen-Moran, M. (2013). Becoming a trustworthy leader. In M. Grogan (Ed.), The Jossey-Bass reader on educational leadership (pp. 40-54). San Francisco, CA: Jossey-Bass/Wiley.
Tsoukas, H., & Chia, R. (2002). On organizational becoming: Rethinking organizational change. Organization Science, 13(5), 567-582.

Education policy forms the basic structure of practice and governance and profoundly affects work and outcomes in education (Arafeh, 2014, p. 1), both through formal structures and laws and through spoken and unspoken social and cultural norms (Arafeh, 2014, p. 4). All policy, education policy included, involves making compromises to balance freedoms, resources, interests, values, and efficiencies, and often involves redefining values in order to justify and account for the outcomes of the decisions made (Rizvi & Lingard, 2010, pp. 71-72). Education is a right of citizenship shaped by national policy, while also largely determining how citizens understand the meaning of citizenship in their local context as it relates to their relationship with governing bodies and structures (Bell & Stevenson, 2006, pp. 61-62).
In the past, social welfare policy in the West supported notions of universalism: the belief that individuals' rights to social welfare should be independent of their ability to contribute to the economic well-being of the nation-state, and that all citizens should have access to every liberty regardless of their station, thus empowering the state to intervene in market processes and redistribute resources so as to balance inequalities. Rights of social citizenship were considered universal, much like civil and political citizenship rights (Bell & Stevenson, 2006, p. 60). In such a policy landscape, support for public education should flourish as a basic right of all individuals, in support of democracy, equality, personal fulfillment, and the intrinsic value of every individual and their education (Rizvi & Lingard, 2010, pp. 72, 78).

More recently, a global shift has occurred towards neoliberal education policies focused on human capital in a globalized economy, privatization, efficiency, and accountability (Rizvi & Lingard, 2010, p. 72). The importance of national human capital development to economic growth, viewed from the neoliberal perspective, firmly places education policy in the realm of national economic policy, resulting in the dominance of social efficiency as a policy value and in education policy that supports the economic productivity of nations and corporations (Bell & Stevenson, 2006, p. 58; Rizvi & Lingard, 2010, p. 78). Supporters of neoliberal policy cite the failure of the universal approach to provide balanced service provision to marginalized groups and its inability to address culture in diverse nations, and argue that the market offers the only fair way to appraise and allocate resources (Bell & Stevenson, 2006, p. 61).

The shift towards neoliberal education policy poses risks. Education policy is social policy insofar as it promotes welfare, ideology, and social cohesion (Bell & Stevenson, 2006, p. 58). The neoliberal practice of viewing individuals as only as valuable as their contribution to the free market necessarily results in rebalancing and renegotiating values like equity and democracy, sidelining some values and promoting others (Rizvi & Lingard, 2010, p. 76). Policies promoting test-based accountability that result from this neoliberal perspective may oversimplify complex local contextual issues and reduce social justice (Lingard, Martino, & Rezai-Rashti, 2013, p. 539). The focus on education policy as support for global economic competitiveness minimizes the value of discourse centered on altruism and assumes that public institutions and governments are threats to individual freedom (Rizvi & Lingard, 2010, p. 86). Increasing privatization of education services, based on assumptions of greater efficiency, risks corroding state commitment to public education, leaving it a low-quality residual service for the poor and those without the political power to ask for more (Bell & Stevenson, 2006, p. 62; Rizvi & Lingard, 2010, p. 87). Decentralization for accountability can decrease national cohesion; and as marginalized students and teachers in low-income schools struggle against the odds to raise scores on standardized tests, they are more likely to neglect educating students in the skills of participatory democracy, further limiting their ability to lobby for more social welfare support (Bell & Stevenson, 2006, p. 68).
At the international level, as governments and international aid providers apply results-based criteria to determine whether developing nations receive education funding, there is a risk that developing education systems will lose funding due to low initial capacity or factors outside their control (Holland & Lee, 2017, pp. 26-27). The proliferation of neoliberal policy in education, and the tendency to subsume other values under social efficiency as a meta-value, risks strengthening historical and economic inequalities by limiting the perceived value of the individual to their worth in the global market and furthering the reductive notion that success is proportional to effort regardless of starting conditions (Rizvi & Lingard, 2010, p. 78). As state service provision is eroded by increasing privatization, as civics education is neglected in favor of raising standardized test scores in core subjects to secure funding, and as the prevailing discourse paints governments as disconnected and inefficient, education policy devalues citizenship within the state and bolsters the neoliberal view that acting in self-advancing ways to increase one's status in the global market should supersede altruism and civic responsibility (Rizvi & Lingard, 2010, pp. 87-88).

Rizvi and Lingard (2010) state that the goals of education include "the development of knowledgeable individuals who are able to think rationally, the formation of sustainable community, and the realization of economic goals benefiting both individuals and their communities" (p. 71). The current neoliberal bent in education policy promises to address only the economic. Where, then, will individuals turn to develop the capability to act as rational, informed citizens of sustainable communities, if success in a global market economy does not support the development of such individuals? What is the cost to be borne by global social welfare and democracy in the pursuit of economic gain?

Arafeh, S. (2014). Orienting education leaders to education policy. In N. M. Haynes, S. Arafeh, & C. McDaniels (Eds.), Educational leadership: Perspectives on preparation and practice. Toronto: UPA. Retrieved from http://ebookcentral.proquest.com/lib/brocku/detail.action?docID=1911841
Bell, L., & Stevenson, H. (2006). Educational policy, citizenship, and social justice. In L. Bell & H. Stevenson (Eds.), Education policy: Process, themes and impact (pp. 58-73). London: Routledge.
Holland, P. A., & Lee, J. D. (2017). Results-based financing in education: Financing results to strengthen systems. Washington, DC: World Bank.
Lingard, B., Martino, W., & Rezai-Rashti, G. (2013). Testing regimes, accountabilities and education policy: commensurate global and national developments. Journal of Education Policy, 28(5), 539-556.
Rizvi, F., & Lingard, B. (2010). Education policy and the allocation of values. In Globalizing education policy (pp. 77-92). New York: Routledge.

Matthew Boomhower is a mid-career educator with 18 years of classroom teaching and educational leadership experience. He is Head of Innovation & Learning at an international school in Malaysia and is a proud husband and father.
Like an allegory of the butterfly effect, there is a link between plastic bags and the trash island floating in the Pacific. The biosphere isn't a theory; our daily actions affect it positively or negatively. One of the most widespread fallacies about climate change and its consequences goes: "The citizens of rich countries aren't to blame and, besides, what can be done? We have enough to worry about with our daily problems." No. This article isn't about the butterfly effect. Or maybe it is.

This report sets out, in 10 points, small changes in everyday life that can indeed help fight climate change. They can also raise awareness among governments, individuals and businesses that things can be done, among them: to vote consistently, to organize and be vocal and, why not, to penalize businesses that do not act respectfully. For those waiting for a Kyoto 2 to be signed, or who believe that climate change does not affect them, here are 10 pieces of advice, developed below: no to plastic bags; use low-consumption light bulbs; adjust the heating; no to standby; buy locally; public transportation; bicycles or walking; make compost for your own garden (even an urban one); what to do if a car is necessary; downshifting, or how to relax a bit.

It doesn't sound very revolutionary, right? Perhaps this reminds you of some pseudo-advertising pamphlet from your electric company or your city hall, a string of otherworldly babble or the apocalyptic psalm of some confession; something that does not invite you to continue reading. When you reach the end of this report, if you reach it, perhaps you will have changed your mind. The 10 pieces of advice:

1. Say no to plastic bags, PET bottles and polystyrene containers

Plastic bags contaminate, and keep contaminating, the environment. In rich countries, the majority of plastic bags end up in dumps without being treated; in emerging and poor countries, in the street and in nature. Every year they kill thousands of land animals, birds and, above all, marine animals. There is data to verify this for the past half century. According to the Sustainable Life Foundation, plastic bags take 150 years on average to break down. In theory, all materials are biodegradable, although a good part of the containers and materials we use daily remain as trash. Thanks to our current "throwaway" culture, few people bring their own cloth bags for shopping, preferring the "comfort" of plastic bags, in spite of the fact that they tend to cut off the circulation in your fingers when full.

According to the Sustainable Life Foundation (Spain), the durability of common materials used in our products is as follows:

- 1-14 months: paper, clothes and any kind of cotton and linen. Materials composed of cellulose are not a problem, because "nature easily integrates its components into the earth". The role of recycling is indispensable and avoids the use of more wood.
- 10 years: the minimum time needed to break down tin cans, until they become a mix of iron oxides. Plastic (polypropylene) also takes a decade to break down into synthetic molecules.
- 30 years: aerosol and foam containers, whose historical CFC content, now prohibited, is responsible for the hole in the ozone layer; Tetra Brik packaging (80% cellulose, 15% polyethylene and 5% aluminum).
- 100 years: steel and plastic (disposable lighters, for example). In a decade, the plastic doesn't even lose its color. Some plastics have very contaminating components that don't degrade (PVC), and the majority contain mercury. They can contain heavy metals (zinc, chromium, arsenic, lead and cadmium) that are very dangerous to life, including human life and the nervous system; these begin to degrade after 50 years but remain harmful for decades.
- More than 100 years: PET plastic bottles last more than a century. PET is a material that microorganisms can't attack. Do you like to buy those small, attractive bottles of water, with their healthy, pure image? Take a moment to read about the advantages of drinking tap water (report: Trendy tap water). The false belief exists, from time to time encouraged by studies, that running water is not safe for human consumption, deferring the necessary cultural shift away from bottled water.
- More than 200 years: aluminum plating takes more than 200 years to return to the starting point of its life cycle (aluminum oxide, present in the rocks of the earth's crust).
- More than 1,000 years: batteries. They also contain substances harmful to life and should be recycled at special locations, due to their toxicity.
- Indefinite time: glass bottles. Nevertheless, glass can be recycled 100%. The objective, in this case, is to guarantee the recycling of glass already created, instead of manufacturing new glass indefinitely.

Every year, between 500 billion and a trillion plastic bags are used in the world, according to Vincent Cobb, founder of the business reuseablebags.com. Almost all of these discarded bags accumulate as land-based, and marine, trash. They are usually offered free or for a small surcharge to consumers, and their generalized use has led urban pollution from plastic bags in China to be known as "white contamination". The solution, according to experts and businesses, does not consist of switching to paper bags, whose use would cause a similar impact. The solution is something we had but lost: reusable bags. Who remembers the wicker bags with comfortable leather handles for going to the market? They are still sold.

- 11 barrels of petroleum are required to make 1 ton of plastic bags.
- Only 1% of plastic bags worldwide are recycled.
- Time to decompose (depending on the type of plastic): from 5 to 1,000 years. Every bag generates half a kilogram of pollution (a quick calculation appears at the end of this section).
- More than 3% of the world's plastic bags are presently floating in the sea.
- Thousands of turtles and whales have died from stomach blockages caused by plastic bags. It is documented.

Discarded plastic bags damage marine life above all. According to the ecological organization Planet Ark, cited by The San Francisco Chronicle, around 100,000 whales, seals, turtles and other marine animals die every year worldwide due to floating plastic bags. The California newspaper reports that an extraordinary island of floating trash, composed mostly of plastic bags (80% of it is plastic), floats in the Pacific Ocean, weighing 3.5 million tons and drifting between San Francisco and Hawaii. It is called the Great Pacific Garbage Patch, a spot that is growing unstoppably (its size has grown tenfold every decade since 1950, according to Chris Parry of the California Coastal Commission in San Francisco, who has studied the phenomenon for a decade) and sits in a zone of circulating currents that keep the water nearly stationary. This gigantic swirl in the North Pacific, the North Pacific Gyre, was, in 2007, twice the size of Texas (or twice the size of France or the Iberian Peninsula).
This floating continent of trash is especially harmful to at least 267 marine species, which have died from consuming this waste, according to a field report from Greenpeace. After writing his report on this mass of continental plastic, Justin Berton, a journalist for The San Francisco Chronicle, included a section that shows the impact that knowledge of this problem has on any reporter: how to help. "You can help to limit the ever-growing patch of garbage floating in the Pacific Ocean." Berton then lists ways to help:

- Limit the use of plastic when possible. Plastic is not easily degraded and can kill marine life.
- Use reusable bags when shopping. Disposable bags can end up at sea.
- When visiting the beach, collect your trash and take it with you.
- Make sure you dispose of trash in a closed container.

Besides plastic bags and PET bottles, expanded polystyrene (EPS) is another plastic commonly found in products: disposable cups and plates, DVD discs, and the packaging and casings of electrical appliances and computers use this resin. 3.2 grams of fossil fuels are needed to make just one disposable cup (the majority of plastic is made from petroleum by-products). Multinational businesses use plastic cups and plates in their establishments. Coffee tastes great in a porcelain cup, and a porcelain cup doesn't help perpetuate this cycle. Among the measures that can help reduce the consumption of polystyrene products are recycling centers for this material. The packaging of a computer or electronic apparatus can also be left in the store. Other products with a high plastic content, like children's diapers, have reusable commercial alternatives. There are washable diapers (video: we have tested Fuzzy Bunz pocket diapers in Seattle, United States), and cloth diapers with a disposable interior that breaks down instantly, like gDiapers, which fit within the ideology of cradle to cradle (products that return to the environment after use).

A question raised by intellectuals and industry for decades is the lack of planning in a consumption model that evolved out of the industrial revolution and has not changed in its essence: the materials with which we produce the goods of our daily life are not easily biodegradable, and they take time to disappear from our environment, from several months to the thousand years it takes a battery to finally break down. Plastic is one of the most lasting and potentially environmentally harmful of our daily materials. One of the most interesting ideas is that expressed by the architect William McDonough and the chemist Michael Braungart in their book Cradle to Cradle. Their theory, supported by several businesses, argues for products that return to the land as nutrients after use, or that are used in their totality without generating waste in the process. Their book is itself an example of the "cradle to cradle" philosophy: the paper used "is not a tree", but a water-resistant and biodegradable plastic resin reused by the Durabooks company. It can literally be read under the shower.

Another trend attempting to confront the problem of plastic waste is so-called green chemistry, which allows plastic products to be made without petroleum, without generating waste, and without toxicity. The business opportunities in this field are, according to the experts, colossal. Investors such as Vinod Khosla, known for having invested wisely in businesses like Sun Microsystems and Google, are betting on ecological chemistry.
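The figures above lend themselves to quick back-of-the-envelope checks. Here is a minimal sketch, in Python, that turns the article's half-kilogram-of-pollution-per-bag figure into an annual household estimate; the weekly bag counts are hypothetical example inputs, not data from the article.

```python
# Rough annual footprint of single-use plastic bags for one household,
# using the per-bag figure quoted above (0.5 kg of pollution per bag).
# The weekly bag counts below are made-up example inputs.

POLLUTION_PER_BAG_KG = 0.5  # figure cited in this article

def annual_bag_pollution_kg(bags_per_week: float) -> float:
    """Estimate yearly pollution, in kilograms, from disposable bags."""
    return bags_per_week * 52 * POLLUTION_PER_BAG_KG

if __name__ == "__main__":
    for bags in (5, 10, 20):
        print(f"{bags:>2} bags/week -> {annual_bag_pollution_kg(bags):6.1f} kg/year")
```

Even the low end of those hypothetical inputs adds up to well over a hundred kilograms a year for a single household, which is why reusable bags pay off so quickly.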
2. Use low-consumption light bulbs

As with the internal combustion engine of cars, the incandescent lamp, the everyday light bulb, is still a dominant industrial model despite its obsolescence: it squanders 85% of the energy it consumes in the form of heat. A conventional light bulb has a useful life of 1,000 hours, or a year on average, while low-consumption bulbs last 15 times longer. If only a million homes changed an average of four conventional light bulbs for low-consumption models, it would prevent 900,000 tons of CO2 from being emitted annually. A simple change of habit with a big impact (a back-of-the-envelope calculation appears at the end of this section).

Using low-consumption light bulbs approved by standards like Energy Star from the EPA (the US Environmental Protection Agency) or the European Union's labeling regulations for electrical appliances, air conditioning and lighting is not a big mystery. It consists of spotting the incandescent bulbs (a 19th-century invention, inexplicably still with us, recognizable by its metallic filament) that we still use at home and replacing them with low-consumption or CFL (compact fluorescent) bulbs of 15 watts, equivalent to a traditional 75-watt bulb. The traditional fluorescent tube also belongs to this family and consumes less than traditional bulbs. The price is higher, although payback is fast: they consume 80% less and last 8 times longer. Each low-consumption bulb also avoids the emission of half a ton of carbon dioxide over its lifetime (between 8,000 and 10,000 hours).

A more difficult change to implement, because it implies a change of habits, consists of treating artificial lighting as a scarce luxury:

- Take advantage of daylight (and plan work, leisure and rest spaces accordingly).
- Turn off the lights upon leaving rooms.
- Use reading or desk lamps for reading and study, to eliminate the use of indirect lighting.

Low-consumption lighting continues to be more expensive and difficult to manufacture, and it uses mercury (up to 5 milligrams per bulb). Due to this substance's toxicity, collection at the end of a bulb's useful life should be planned better. Businesses like General Electric (GE) have promised to reduce mercury use dramatically soon. The presence of mercury in a product expected to grow spectacularly worldwide worries scientists. In the United States, 150 million such bulbs were sold in 2006, a figure that was surpassed in 2007. The Swedish furniture chain IKEA has CFL bulb recycling programs in its 234 stores worldwide, the only global initiative of this type, according to Reuters.

The case for replacing outdated incandescent bulbs, as much to save energy as to fight climate change, opens a new market to businesses that invest in low-consumption lighting without resorting to toxic products. Organic lighting (report: Fiberstars, or how to use fiber optic lights) uses organic light-emitting diodes (OLEDs) as a light source. OLED plastics are easily recycled, and their useful life is very long. Ewing, GE, Fiberstars and Osram Opto Semiconductors are working on this new technology, which could even be used in "transparent windows": the glass of a window would capture light during the day and, thanks to the solar energy collected, illuminate during the night. Windows would become lamps. California wants to prohibit the use of incandescent light bulbs beginning in 2012, and similar measures are being taken elsewhere.
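To make the wattage equivalence above concrete, here is a minimal sketch, in Python, of the yearly savings from swapping 75 W incandescents for 15 W CFLs. The hours of daily use and the grid emission factor are assumptions for illustration, not figures from this article.

```python
# Back-of-the-envelope savings from replacing 75 W incandescent bulbs
# with 15 W CFLs, per the wattage equivalence quoted above.
# Daily usage hours and the grid emission factor are assumed inputs.

GRID_KG_CO2_PER_KWH = 0.4  # assumed emission factor; varies by country

def annual_savings(bulbs: int, hours_per_day: float,
                   old_watts: float = 75, new_watts: float = 15):
    """Return (kWh saved, kg CO2 avoided) per year for the swap."""
    kwh_saved = bulbs * (old_watts - new_watts) / 1000 * hours_per_day * 365
    return kwh_saved, kwh_saved * GRID_KG_CO2_PER_KWH

if __name__ == "__main__":
    kwh, co2 = annual_savings(bulbs=4, hours_per_day=3)
    print(f"4 bulbs, 3 h/day: {kwh:.0f} kWh and ~{co2:.0f} kg CO2 saved per year")
```

Under these assumptions, the four-bulb household of the example saves on the order of 260 kWh a year, which is consistent with the article's claim that a simple habit change has a big aggregate impact.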
3. Adjust the heat

According to Spain's Institute for Energy Diversification and Saving (IDAE), 66% of domestic energy expense goes to heating and hot water. The remaining 34% goes to electrical appliances (16%), the kitchen (10%), lighting (7%) and air conditioning (1%). According to the IDAE, in winter "a temperature between 19 and 21 degrees [66-70°F] is sufficient. At night, in the bedrooms, a temperature from 15 to 17 degrees [59-63°F] is sufficient to be comfortable."

Adding clothing instead of turning up the heat can seem as obvious as swapping traditional bulbs for low-consumption models. In normal circumstances, "it is sufficient to turn on the heat in the morning". At night, except in very cold zones, the heating should be off, "since there is accumulated heat in the home". In winter, dressing warmly at home allows the thermostat to be lowered to 20 degrees Celsius. Each additional degree represents a 7% rise in energy consumption and the extra annual emission of more than 200 kilograms of CO2 (see the sketch at the end of this section).

The bioclimatic home is seen by specialists as a continuation of traditional architecture: correct orientation, the use of natural insulation (so common in the past), philosophies of natural efficiency like permaculture, more efficient air conditioning and heating systems, good insulation, and passive systems for collecting energy and rainwater. Each house has its own idiosyncrasies, as explained by Spain's La Vanguardia newspaper:

- A house that is not oriented to the south must be insulated to the maximum.
- Let the sun enter the home in winter and try to keep it out in summer. If the house is well insulated, windows can be opened during the night and closed completely first thing in the morning. The air will be fresher and less air conditioning will be required.
- Seal the exterior windows. According to the IDAE, "small improvements in insulation can involve economic and energy savings of up to 30% in heating and air conditioning". Between 25% and 30% of heating needs are due to heat losses through doors and windows.
- Install shades, blinds and curtains to keep the sun from reaching inside the house in summer.

Antonio Ramos, of the Association for Bioclimatic Architecture, believes that one doesn't need to build new to save energy. A home built with a sustainable culture:

- Can save 70% of the energy used by traditional buildings.
- In new construction, can cost between 5% and 15% more than a conventional home. The Californian Mark Feichtmeir, who built a bioclimatic house based on permaculture (report: Permaculture: beyond the garden), puts this premium at between 5% and 10%, in his experience.
- Uses traditional materials, like wood (such as bamboo) and stone. Ceramics and ecological concrete are materials recommended by the experts.

If possible, the IDAE recommends that multi-family buildings opt for collective central heating, with individualized measurement and regulation for each dwelling. Such systems are more affordable, more efficient and less polluting than individual units.
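The 7%-per-degree rule quoted above compounds quickly. Here is a minimal sketch, in Python, applying it to a hypothetical annual heating bill; the baseline cost, baseline temperature and setpoints are example values, not data from the IDAE, and the compounding interpretation of "7% per degree" is an assumption.

```python
# The "7% more energy per extra degree" rule of thumb quoted above,
# applied to a hypothetical annual heating bill. Baseline cost and
# setpoints are example values; compounding per degree is assumed.

def heating_cost(baseline_cost: float, baseline_temp: float,
                 setpoint: float, pct_per_degree: float = 0.07) -> float:
    """Scale a heating bill by ~7% per degree above or below the baseline."""
    return baseline_cost * (1 + pct_per_degree) ** (setpoint - baseline_temp)

if __name__ == "__main__":
    for t in (19, 20, 21, 23):
        cost = heating_cost(baseline_cost=600.0, baseline_temp=20, setpoint=t)
        print(f"{t} °C -> {cost:7.2f} per year (baseline 600 at 20 °C)")
```

Nudging a hypothetical 600-per-year bill from 20 to 23 degrees adds roughly 135 to it under this rule, which is why the IDAE's 19-21 degree recommendation matters.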
4. Avoid standby with electrical appliances

The rational use of electricity even inspires advertisements as ingenious as this one, created in South Africa. At home, given the proliferation of electrical appliances, electronics and computers, using power strips that allow several devices to be disconnected simultaneously and easily is the most effective way to reduce the electric bill and avoid wasting "phantom power".

- It is estimated that only 5% of the electricity used by cellular phone chargers actually goes into charging a phone. The rest is lost while the charger remains plugged in.
- In the United States alone, televisions and video players consume a billion dollars in wasted electricity every year.
- The "phantom power" wasted by electronic appliances that remain connected without being used generates 75 million tons of CO2 emissions annually in rich countries.

Many electrical appliances continue consuming energy while turned off. They remain in standby mode so that they can be reached by remote control. Other electronic devices run on direct current and use a transformer that always remains on (computers, stereo systems, videogame consoles, etc.).

For electrical appliances, the energy label reports energy consumption and other prominent data: noise, washing and drying efficiency, normal life cycle and other variables. There are 7 categories of energy efficiency, from the letter A to G (a small classifier sketch appears at the end of this section):

- A: < 55% of average consumption; low-consumption (A), very efficient (A+) and ultra-efficient (A++).
- B: 55-75%, low consumption.
- C: 75-90%, low consumption.
- D: 90-100%, average consumption.
- E: 100-110%, average consumption.
- F: 110-125%, high consumption.
- G: > 125%; why are they still sold?

Energy labels are obligatory in the European Union for electrical appliances such as refrigerators, freezers, washing machines, dryers, dishwashers, electric ovens, air conditioners and domestic lamps. For refrigerators and freezers, there are two new efficiency classes even more exacting than class A:

- A+: consumption equivalent to 42% of the category average.
- A++: only 30% of the average.

Despite the obligatory nature of energy labeling, for the majority of electronic and computing devices it is practically impossible to calculate clearly how much energy each device uses. Some businesses are trying to change this. The British firm DIY Kyoto, for example, has designed the Wattson, a small device that connects to the power source and reports the energy use of any apparatus.
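The class thresholds above map cleanly onto a tiny helper function. A sketch in Python follows; the bands reproduce the article's table (including the stricter refrigerator classes A+ and A++), and should be read as an illustration rather than the official regulation text.

```python
# Maps an appliance's energy-consumption index (its consumption as a
# percentage of the category average, 100 = average) onto the EU label
# classes listed above. The thresholds follow this article's table and
# fold the refrigerator-only A+/A++ classes into one scale for brevity.

def eu_label(index_pct: float) -> str:
    """Return the efficiency class for a consumption index (100 = average)."""
    bands = [(30, "A++"), (42, "A+"), (55, "A"), (75, "B"),
             (90, "C"), (100, "D"), (110, "E"), (125, "F")]
    for limit, label in bands:
        if index_pct <= limit:
            return label
    return "G"  # > 125% of average: why are they still sold?

if __name__ == "__main__":
    for idx in (28, 50, 80, 105, 130):
        print(f"index {idx:>3}% -> class {eu_label(idx)}")
```

Run on the example indices, an appliance consuming 28% of the category average lands in A++, while one at 130% falls into class G.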
5. Buy local

If, when hungry, we had to travel the thousands of miles that food travels on average before reaching the plate (report: Counting miles per bite), perhaps the current approach to worldwide food distribution would be reexamined. For every food item bought, think about this:

- The journey the food has made and the embodied energy consumed: even the basic ingredients of the shopping basket (meats, fish, spices, eggs, milk, fruit, cereals) consume ever more energy, contributing more CO2 emissions before arriving on the plate: from the country or place of production to the logistics center, from there to the seller, and from the seller home.
- The environmental price of production by-products, including fertilizers, antibiotics and pesticides.
- The industrial packaging used: aluminum, plastic, paper.

Knowing what one eats (report: Food II: footprint of smoothies) is not as simple as one would assume. Growing consumer consciousness has resulted in greater availability of local food in markets, specialized stores, cooperatives, and even conventional supermarkets (report: Buy local: it's your personal farmer). David de Rothschild touts the advantages of local food in his Global Warming Survival Handbook. Here are the main reasons to reduce food miles:

- Buying local products offers a competitive opportunity to rural economies, in rich countries too.
- Mass-production agriculture relies on fossil fuels, erodes the land, uses only a handful of crop varieties and contaminates rivers.
- With mass-produced food, the focus is on yield, uniformity and compatibility with mechanized harvesting.
- Food originating in remote places loses vitamins and gains contaminants.
- The expansion and interdependency of the food and agriculture market puts food at risk of pests, diseases and biological attacks.
- Local food is less prone to salmonella, E. coli and other bacteria.
- It tastes better.

Not all food costs the same to produce. Red meat production is one of the agribusiness activities that contributes most to greenhouse gas emissions. One of the most effective actions an individual can take to reduce their ecological footprint (besides avoiding air travel) is to stop eating large quantities of red meat. A question: what produces more greenhouse gases on a global scale, motorized transport or livestock? Answer: livestock, responsible for 18% of total human-induced greenhouse gas emissions.

6. Plant your own food and compost without leaving the apartment (or mini-apartment with roof)

A gratifying and affordable way not only to eat organic, local food but to produce it yourself: take advantage of a sunny corner of the balcony or terrace to plant seasonal vegetables, fruits or herbs. An urban garden (report on urban farming: Why we all will be gardeners) requires very little attention: hours of sun, water, organic fertilizer and regular care. In exchange, it offers relaxation, family time and, as a reward, local, affordable, organic and fresh vegetables. Kirsten Dirksen explains in a video how easy it is to dedicate a flowerpot on the balcony to planting, for example, spinach (and, in another video, accompanied by SuChin Pak, how in five minutes you can pick a good salad from a crop planted four weeks earlier, with no trip to the store).

Urban gardens reduce consumers' dependence on fresh food with a higher environmental footprint (pesticide use, transportation from the place of production, refrigeration, etc.) and reduce expense. They are also a new excuse to strengthen family ties, to relax, and to eat quality food. Several businesses market small planters for growing vegetables on the balcony. In cities like Eugene, Oregon (U.S.), some residents are convincing their neighbors to stop planting lawns around their houses. In a temperate and rainy climate like the Pacific Northwest, a bit of good soil is all that is necessary to plant vegetables and fruit trees. Heather Flores, of the movement Food Not Lawns, explains in a video how lawns are being replaced by food-bearing vegetation.

It is also possible to use leftovers from the garden and the kitchen, and paper from the office, to create high-quality compost. Converting organic trash into fertilizer for the urban garden doesn't require much effort.
All that is needed is a closed plastic container with holes and about a thousand composting worms, which can be bought online (video: Worm Composting 101). It is even possible to use just a plastic garbage bag (as we show in another video). For advice, there are plenty of Internet resources detailing how to convert our waste into quality compost. It is also becoming more common to find neighbors, in all types of communities, often with years of composting experience. Seattle, Washington's Jackie Mansfield has spent 15 years converting her kitchen scraps into garden fertilizer. Mansfield (video) showed us how comforting it is to give a second life to all the paper that had accumulated in her husband's office over the years: shredded, it makes a good bed for her worms.

7. Use public transportation (when possible)

Public transportation is more efficient and more affordable than the private car, pollutes less and, where it works properly and is extensive enough, saves its users time and stress. While riding public transportation, it is possible to read, study or work, options that are ruled out while driving in the middle of a traffic jam. If a million people used the train daily instead of a car, it would cut carbon dioxide emissions by 1.2 tons annually. According to Trainweb.org, 1,000 miles (1,600 kilometers) of travel produces, per passenger, the following pollution depending on the mode of transportation used (a quick comparison script follows at the end of this section):

- Bus: 260 pounds (118 kilograms) of carbon dioxide.
- Commuter train and subway: 450 pounds (204 kilograms).
- Fuel-efficient car: 590 pounds (267 kilograms).
- Airplane: 970 pounds (440 kilograms).
- Car with many cylinders or all-terrain vehicle: 1,570 pounds (712 kilograms).

On medium- and long-distance trips, the train is the mode of transportation with the smallest ecological impact. Besides being more energy-efficient than the car or the airplane, the train is the perfect antidote to urban sprawl, an unstoppable trend in the United States and other developed countries that obliges residents to depend on private transportation and the highway network for daily travel. According to the IDAE, "the car is the main source of contamination in Spanish cities, of noise pollution and of most of the CO2 emissions and unburned hydrocarbons." In the city, 50% of car trips cover less than 3 kilometers, and 10% less than 500 meters. For these trips, the alternatives (walking, the subway, the bus or, in some cities, a shared bike) are clearly the better option.

Quality public transportation results in less use of private vehicles. The train encourages urban development organized around efficient, more compact zones that promote walking or cycling, as opposed to the model of extensive suburbs of single-family homes connected to the urban economic center exclusively by freeways. It is a development model that is only now beginning to compete in the United States and Canada, with successes like compact Portland (Oregon) and Vancouver (British Columbia), although examples of urban sprawl messes persist (Las Vegas, Phoenix).
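Here is the promised comparison: a minimal Python sketch that scales the Trainweb.org figures above to an arbitrary trip length and converts pounds to kilograms. The 500-mile trip is an example input.

```python
# Per-passenger CO2 for a trip, scaled from the Trainweb.org figures
# quoted above (pounds of CO2 per 1,000 miles per passenger).
# 1 lb = 0.4536 kg; the 500-mile trip below is an example input.

LB_PER_1000_MILES = {
    "bus": 260,
    "commuter train / subway": 450,
    "fuel-efficient car": 590,
    "airplane": 970,
    "SUV / many-cylinder car": 1570,
}

def trip_co2_kg(mode: str, miles: float) -> float:
    """CO2 in kilograms for one passenger travelling `miles` by `mode`."""
    return LB_PER_1000_MILES[mode] * (miles / 1000) * 0.4536

if __name__ == "__main__":
    for mode in LB_PER_1000_MILES:
        print(f"{mode:<25} {trip_co2_kg(mode, 500):6.1f} kg CO2 for 500 miles")
```

On these figures, the same 500-mile journey emits roughly six times more CO2 per passenger in an SUV than in a bus.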
8. Bike and walk more

Above all, this advice makes sense in traditional European cities. Strolling through the extensive pedestrian centers of European cities, or walking to work, is one of the healthiest modes of transport. Not all large cities, however, have well-marked and safe bike paths, respected not only by cyclists but also by pedestrians. In many European cities, the success of public rental bicycles, or bike-sharing programs, has pushed town councils to expand the reach and the number of bicycles available for this new individualized public transportation. Public bike rental programs (report: Smart bikes: bike-sharing redux) allow users to acquire a card with which they can pick up a bicycle at any of the docking stations strategically located throughout the city. If the user returns the bicycle to any station within 30 minutes (the majority of trips), the trip is free. Beyond that, there is a small hourly surcharge (30 cents per half hour for the Bicing program in Barcelona, or 1 euro an hour for Vélib in Paris). The popularity of these bike-sharing programs (video: Bicing: bike-sharing in Barcelona) is encouraging cities in North America and the rest of the world to offer similar services.

9. When you can't give up the car: ode to the electric car

The desire to drive a comfortable and attractive car shouldn't compete with efforts to reduce pollution and fuel use. The environmental footprint of our driving depends on multiple factors. Driving an all-terrain vehicle over 10 years old, predominantly for short trips in urban traffic, is not the same as driving a new compact car that emits less than 120 grams of CO2 per kilometer and is shared by several people at once.

The internal combustion engine is, like the incandescent light bulb, a product of the 19th century that has undergone few structural changes. As in the beginning, it still depends on fossil fuels or substitutes capable of imitating the behavior of gasoline, like biodiesel or bioethanol. Various manufacturers have models with technologies that improve the performance of the combustion engine. Some vehicles for sale in Europe consume less than five liters of fuel per 100 kilometers on average and already comply with the emissions limit set by the EU for 2012, 120 grams of CO2 per kilometer: Smart CDi, Toyota Prius, Peugeot 107, Fiat Panda, Citroën C3, Ford Fiesta, Renault Mégane, BMW 118d, Volvo S40, Skoda Octavia, Ford Focus C-Max, Suzuki Jimny. But no matter how efficient, these still have the same motor and the same fuel. The philosophy perfected by Henry Ford (Fordism), which transformed the United States into a country of freeways and suburbs without quality public transportation, later gave way to the work philosophy of Toyotism and is now trying to survive through the coexistence of the old motor with serious proposals for the future: the hybrid car, half combustion engine, half electric car.

The Toyota Prius is the hybrid at its best, and it already represents a serious alternative to the improved gasoline and diesel models on which the European industry continues to rely exclusively. Several small companies, in the absence of clear leadership from the automobile giants, are working to surpass the hybrid model and go beyond the Toyota Prius. There are high hopes for the totally electric car, although no traditional manufacturer markets a serious model of this type. In faircompanies we pose the question Who revived the electric car? It is also worth remembering what happened to GM's electric bet of the nineties (see Who killed the electric car?). Now there is significant social concern over global warming, awareness that a barrel of petroleum will never be cheap again, and the prospect that the EU will lower its limit on automobile emissions even further.
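To put the 120 g/km limit in perspective before the efficiency tips that follow, here is a small Python sketch converting a car's rated emissions into kilograms of CO2 per year; the annual mileage and the SUV figure are assumptions for illustration, not data from the article.

```python
# Annual tailpipe CO2 for a car, given its rated grams of CO2 per km.
# The 120 g/km figure is the EU 2012 limit mentioned above; the annual
# mileage and the SUV rating are assumed example values.

def annual_car_co2_kg(g_per_km: float, km_per_year: float = 15000) -> float:
    """Convert a g/km emissions rating into kg of CO2 per year."""
    return g_per_km * km_per_year / 1000

if __name__ == "__main__":
    for label, g in (("car at the EU 2012 limit", 120),
                     ("typical SUV (assumed)", 250)):
        print(f"{label:<25} {annual_car_co2_kg(g):7.0f} kg CO2 / year")
```

At an assumed 15,000 km a year, a car at the limit emits about 1.8 tons of CO2; the assumed SUV more than doubles that.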
9. When you can't give up the car: ode to the electric car

The desire to drive a comfortable and attractive car needn't conflict with efforts to reduce pollution and fuel use. The environmental footprint of our driving depends on multiple factors: driving an all-terrain vehicle more than 10 years old, predominantly for short trips in urban traffic, is not the same as sharing, among several people, a new compact car that emits less than 120 grams of CO2 per kilometer.

The internal combustion engine is, like the incandescent light bulb, a product of the 19th century that has undergone few structural changes. As in the beginning, it still depends on fossil fuels or on substitutes capable of imitating the behavior of gasoline, like biodiesel or bioethanol. Various manufacturers offer models with technologies that improve the performance of the combustion engine. Some vehicles for sale in Europe consume less than five liters of fuel per 100 kilometers on average and already comply with the emissions limit set by the EU for 2012 of 120 grams of CO2 per kilometer: Smart CDi, Toyota Prius, Peugeot 107, Fiat Panda, Citroën C3, Ford Fiesta, Renault Mégane, BMW 118d, Volvo S40, Skoda Octavia, Ford Focus C-Max, Suzuki Jimny. No matter how efficient, though, these cars still have the same motor and the same fuel.

The philosophy perfected by Henry Ford (Fordism), which went on to transform the United States into a country of freeways and suburbs without quality public transportation, later gave way to the work philosophy of Toyotism and is now trying to survive through the coexistence of the old motor with serious proposals for the future: the hybrid car, half combustion engine, half electric car. The Toyota Prius is the hybrid at its best, and it already represents a serious alternative to the improved gasoline and diesel models on which the European industry continues to rely exclusively. Several small companies, lacking the clear leadership of the automobile giants, are working to surpass the hybrid model and go beyond the Toyota Prius. There are high hopes for the totally electric car, although no traditional manufacturer yet markets a serious model of this type. In faircompanies we pose the question Who revived the electric car? It is also worth remembering what happened to GM's electric bet of the nineties (see Who killed the electric car?). Now there is significant social concern over global warming, an awareness that a barrel of petroleum will never be cheap again, and the expectation that the EU will reduce the limits on polluting automobile emissions even further.

Consider how to waste less on car trips when the car is truly necessary (remote zones and regions without quality public transportation). On the road, cutting emissions doesn't depend only on the type of vehicle:

- The power and fuel consumption of the car are intimately related to the way it is driven: aggressive driving pollutes more.
- Using the car for short journeys inside the city is hard to justify.
- Maintaining the car and keeping the tires properly inflated can have a large effect on gas mileage.
- Increasing the number of occupants per car cuts urban traffic. In the United States, several cities have successfully established carpool systems: most freeways have an HOV (High Occupancy Vehicle) lane reserved for cars with two or more occupants (in certain areas, hybrids are included). Many businesses, like Google, Microsoft and Yahoo, operate carpool vans for their employees.

10. On the size of things and the traditional concept of success

We should analyze why big cuts of steak and off-road vehicles keep winning followers, even if perhaps we're not prepared to connect the succulent act of eating red meat, or driving an SUV through the city to pick up our children, with climate change. When all is said and done, the erroneous belief persists that bigger cars are safer, as the influential Malcolm Gladwell has explained. Yes, a culture surrounds global warming, and thinking about the similarities between an off-road vehicle and a steak is revealing.

- Downshifting: be more relaxed; live with less velocity. The concept of downshifting (which we explain in this report) and movements like Slow Food or Cittaslow are examples of a reinterpretation of the concepts of success, happiness, relationships with the members of a community, and our relationship with the world. Carl Honoré lays out the advantages of slowing down in his book In Praise of Slowness.
- Evaluate the junk that we have at home. Imagine that all the stuff you never thought you would use again does not end up in a dump but can be used by other people.
- Buying lasting, quality clothes made with better, more respectful materials is a better long-term strategy than buying a lot of low-quality clothing every season. Organic cotton can be a respectable start (report: The organic wardrobe).
- Convince yourself, little by little, that being a conscious consumer doesn't imply any trauma.
- Be constant and consistent with your ideas. Stay informed.
- Fight the stereotypes and prejudices of those who believe that fighting climate change while improving your personal life is a product of fashionable and unimportant pseudo-hippy ideals. Don't let casual comments carry more weight than the reports of the IPCC, the work of James Lovelock, the BBC documentary Planet Earth, the poems of Walt Whitman, Japanese haikus on nature, the mass extinction documented by authors like Edward O. Wilson, the stories about the lands of the Ampurdán by Josep Pla, or any other source of inspiration to undertake a change (report: Vindicating Lovelock, Brower, Wilson, Thoreau and Whitman).
- Vote consistently, as Robert F. Kennedy Jr., of the famous Democratic family, explained to faircompanies (video: "There are five guys deciding what we hear as news"). Above all, if you live in the United States. Unfortunately, this counsel is of little use in China, where they don't even pretend that the people's vote counts.

It's possible to achieve profound, structural change capable of enriching daily life rather than impoverishing or limiting it.
<urn:uuid:a2d62c96-5be4-4c9f-a983-a7e0e09f1815>
CC-MAIN-2021-21
https://faircompanies.com/articles/yet-another-this-time-useful-guide-to-contaminate-less/
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00017.warc.gz
en
0.933253
7,388
2.65625
3
Because 70% of the U.S. population does not meet the Estimated Average Requirement (EAR) for vitamin D, it will certainly be helpful to add more of this nutrient-dense food to your diet. The antioxidants in fish fight free radicals, the by-products of metabolism at the cellular level, which can cause multiple serious diseases, like cancer. Fish is packed with many nutrients that most people are lacking: it is high in high-quality protein, iodine, and various vitamins and minerals, rich in calcium and phosphorus, and a great source of minerals such as iron, zinc, magnesium, and potassium. Many species of fish are consumed as food in virtually all regions around the world, and fish has been an important source of protein and other nutrients for humans throughout history. Here are some of the major benefits (and a few drawbacks) associated with eating fish, and 20 reasons to load up on this superfood from the sea.

Heart health. Heart attacks and strokes are the two most common causes of premature death in the world (2), and fish is considered one of the most heart-healthy foods you can eat. In one study of more than 40,000 men in the United States, those who regularly ate one or more servings of fish per week had a 15% lower risk of heart disease (7). According to a review published in the American Journal of Cardiology, fish consumption is associated with a lower risk of fatal and total coronary heart disease, and in a comprehensive analysis of human studies, Harvard School of Public Health professors Dariush Mozaffarian and Eric Rimm calculated that eating about 2 grams per week of omega-3 fatty acids in fish, equal to about one or two servings of fatty fish a week, reduces the chances of dying from heart disease by more than one-third. The AHA recommends eating two servings of fish per week, preferably fatty fish, which have a higher omega-3 fatty acid content; researchers believe that fatty types of fish are even more beneficial for heart health precisely because of those omega-3s. A study by the Division of Aging at Brigham and Women's Hospital's Department of Medicine showed that moderate fish consumption helps lower the risk of heart failure, and regular consumption is also associated with a reduced risk of death for people who already have cardiovascular disease, as well as a reduced risk of sudden cardiac death caused by an abnormal heart rhythm. Consuming these long-chain omega-3s reduces blood clots, triglycerides, platelet aggregation, and arrhythmias, and the Baylor University Medical Center Proceedings noted that the omega-3 fatty acids found in fish oil assist in lowering LDL ("bad") cholesterol levels. High levels of HDL cholesterol, by contrast, are good, as HDL cholesterol helps transport LDL cholesterol out of your arteries. Omega-3 fatty acids are also known to thin the blood and reduce inflammation, both of which help boost cardiovascular health, and if you have high blood pressure, incorporating more fish into your diet may help lower it, thanks to fish oil's high concentration of omega-3 fatty acids. SUMMARY: Eating at least one serving of fish per week has been linked to a reduced risk of heart attacks and strokes.

Brain health. Another way that fish helps your brain health is by decreasing the risk of strokes. While mild mental decline is normal, serious neurodegenerative ailments like Alzheimer's disease also exist, and some evidence suggests that fish and omega-3 fatty acids may protect against this disease: according to a 2016 study published in the Journal of the American Medical Association, moderate seafood consumption was linked with a lower risk of Alzheimer's disease, and although seafood consumption was also correlated with higher levels of mercury in the brain, it was not correlated with brain neuropathy. Many observational studies show that people who eat more fish have slower rates of mental decline (10), and people who eat fish every week have more gray matter, your brain's major functional tissue, in the parts of the brain that regulate emotion and memory (11), which reduces the brain shrinkage and deterioration that can lead to complications in brain function. Fish has also been shown to help with concentration and attention in adolescents: a study published in Nutrition Journal found that students between the ages of 14 and 15 who ate fatty fish rather than other meats had higher rates of concentration and were able to pay attention longer than those who ate less of it.

Depression. Although it isn't discussed nearly as much as heart disease or obesity, depression is currently one of the world's biggest health problems. It is characterized by low mood, sadness, decreased energy, and loss of interest in life and activities. Studies have found that people who eat fish regularly are much less likely to become depressed (12), and numerous controlled trials reveal that omega-3 fatty acids may fight depression and significantly increase the effectiveness of antidepressant medications (13, 14, 15). The Journal of Psychiatry & Neuroscience found that fish oil can help improve symptoms of depression when taken with a selective serotonin reuptake inhibitor (SSRI), a type of antidepressant; there are reports of fish oil decreasing symptoms of depression on its own, but more research is needed to prove that claim. Fish has likewise been linked to a whole host of anti-inflammatory benefits, protecting cells from DNA damage and possibly reducing the risk of depression and anxiety.

Vitamin D. According to the National Institutes of Health, fish are high in vitamin D and are considered one of the best dietary sources of this essential nutrient. Vitamin D functions like a steroid hormone in your body, and a whopping 41.6% of the U.S. population is deficient or low in it (17). Fish and fish products are among the best dietary sources of vitamin D; fatty fish like salmon and herring contain the highest amounts (18). A single 4-ounce (113-gram) serving of cooked salmon packs around 100% of the recommended intake of vitamin D, and some fish oils, such as cod liver oil, are also very high in vitamin D, providing more than 200% of the Daily Value (DV) in a single tablespoon (15 ml). Fish's high vitamin D content also assists your body's immunity and glucose metabolism. If you don't get much sun and don't eat fatty fish regularly, you may want to consider taking a vitamin D supplement. SUMMARY: Fatty fish is an excellent source of vitamin D, an important nutrient in which over 40% of people in the United States may be deficient.

Bones. Vitamin D is necessary for building and maintaining healthy bones and is beneficial for calcium absorption: calcium, the primary component of bone, can only be absorbed by your body when vitamin D is present. According to a study in the British Journal of Nutrition, the omega-3 fatty acids found in fish oil also have positive effects on bone health, especially when taken with calcium; the fatty acids appeared to increase the amount of calcium the body absorbs.

Eyes. The brain and eyes are heavily concentrated in omega-3 fatty acids and need them to maintain their health and function; DHA (docosahexaenoic acid) is an omega-3 fatty acid that is especially important for brain and eye development (8). Age-related macular degeneration (AMD) is a leading cause of vision impairment and blindness that mostly affects older adults (26). In one study, regular fish intake was linked to a 42% lower risk of AMD in women (27), and another study found that eating fatty fish once per week was linked to a 53% decreased risk of neovascular ("wet") AMD (28). The Agency for Healthcare Research and Quality likewise found that omega-3 fatty acids are beneficial to improving vision and eye health. SUMMARY: People who eat more fish have a much lower risk of AMD, a leading cause of vision impairment and blindness.

Sleep. Sleep disorders have become incredibly common worldwide; increased exposure to blue light may play a role, but some researchers believe that vitamin D deficiency may also be involved (29). If you have trouble falling or staying asleep, eating more fish may do the trick. According to a study published in The Journal of Clinical Sleep Medicine, increased consumption of fish improved quality of sleep for most subjects, and in a 6-month study in 95 middle-aged men, a meal with salmon 3 times per week led to improvements in both sleep and daily functioning (30). Researchers suspect this is due to fish's high concentration of vitamin D, which aids sleep.

Pregnancy. Omega-3 fatty acids are essential for growth and development, and DHA is especially important for the developing brain and eyes. For this reason, it's often recommended that pregnant and breastfeeding women eat enough omega-3 fatty acids (9), while avoiding high-mercury fish. They should also avoid raw and uncooked fish, including sushi, because it may contain microorganisms that can harm the fetus.

Autoimmune and inflammatory conditions. Some studies link fish or omega-3 consumption to a reduced risk of type 1 diabetes, an autoimmune disease that occurs when the immune system mistakenly attacks the body's insulin-producing cells. Some experts believe that fish intake may also lower your risk of rheumatoid arthritis and multiple sclerosis, but the current evidence is weak at best (22, 23). If you already suffer from rheumatoid arthritis, which is chronic inflammation of your joints, eating more fish can help alleviate the swelling and pain; the American College of Rheumatology found that higher consumption of fish actually lowers disease activity in rheumatoid arthritis. Studies show that regular fish consumption is linked to a 24% lower risk of asthma in children, but no significant effect has been found in adults (25); rates of this condition have increased dramatically over the past few decades (24). SUMMARY: Some studies show that children who eat more fish have a lower risk of asthma.

More benefits. Fish can even lower the risk of certain cancers, according to a study in The American Journal of Clinical Nutrition. Tuna is an excellent source of vitamin B12, an essential vitamin needed to make DNA; vitamin B12 also helps you form new red blood cells and prevents the development of anemia. Fish is also known to reduce the risk of dementia and memory loss in old age. Fish can assist with premenstrual symptoms in women, according to a study published in the Journal of Psychosomatic Obstetrics & Gynecology, which found that premenstrual symptoms interfered far less in the daily lives of women who increased their intake of omega-3 fatty acids. Whether you have hormonal or adult acne, fish can help your skin: a study published by BioMed Central noted that fish oil is beneficial in clearing skin for people with moderate to severe acne. A study by Columbia University showed that omega-3 helps break down triglycerides and fatty acids in the liver, lowering the risk of fatty liver disease. Research from the Department of Human Health and Nutritional Sciences at the University of Guelph found that omega-3 fatty acids, which are abundant in fish, have a positive effect on your metabolism, boosting resting and exercise metabolic rates as well as fat oxidation in older adults. Fish also contains nutrients that are extremely beneficial in helping athletes recover from fatigue and regenerate muscle: a study published in Sports Medicine showed that vitamin D and omega-3 fatty acids, both heavily prominent in most fatty fish, play a big role in post-exercise muscle regeneration and fatigue recovery. Other benefits include promoting strong muscles, regulating body fluids, treating iron deficiency, and supporting strong bones.

Choosing and preparing fish. Eating fish one or two times per week is considered sufficient to reap its benefits. Fish is high in protein and low in fat, which makes it an excellent protein source; the American Heart Association noted that fish is a great source of protein without the high saturated fat content that many other types of meat have, and Gans says that seafood is also a source of immune-boosting vitamin A, vitamin D, selenium, zinc, and glutamine. If you're able, select wild-caught varieties over farmed ones: wild fish tends to have more omega-3s and is less likely to be contaminated with harmful pollutants, for fish are undeniably a healthy food but can carry high levels of contaminants, too. Fatty fish, including salmon, trout, sardines, tuna, and mackerel, are higher in fat-based nutrients. Salmon, with its orange-to-pink flesh and high omega-3 content, is incredibly nutritious; it can be prepared baked, fried, seared, or boiled, and it pairs well with a multitude of vegetables and grains. The reliable tuna sandwich can be a healthy choice. White (non-fatty) ocean fish are also good sources of omega-3 fatty acids, though at lower levels than oily fish; some are very lean, holding just 74 calories per 100 g, lower than hake and cod, and cod itself is a healthful type of fish with many dietary benefits. Tilapia is a popular but controversial fish. Pescatarians follow a vegetarian diet that also includes fish and seafood, and if you are a vegan, you can opt for omega-3 supplements made from microalgae instead. Consumed in any form, such as grilled, steamed, or shallow-fried, fish retains many nutrients, but remember that any health benefits are cancelled out if you deep-fry it in a vat of vegetable oil. SUMMARY: You can prepare fish in a number of ways; it is high in many important nutrients, and eating it regularly is an easy way to reap these 20 health benefits. Fish is an important part of many diets, but it can come with risks, so choose low-contaminant varieties and sensible cooking methods.

A reader's recipe (Satin Balls): 5 pounds ground meat, 5 cups Total whole-grain cereal, 5 cups oats (slow-cooking type), 2½ cups raw wheat germ, ¾ cup oil, ¾ cup molasses, 6 egg yolks, 5 packets gelatin, 2½ tablespoons Solid Gold Seameal supplement.
<urn:uuid:dd289992-5e3b-46c8-b09c-5228fcd9033e>
CC-MAIN-2021-21
http://liduinstumpel.nl/awn9e/4e87c2-satin-fish-benefits
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00297.warc.gz
en
0.946205
6,141
2.71875
3
Determine the voltage gain vo/vi of the op amp circuit in Fig. 5.67.

ALS 2304 SENSE PHYSIOLOGY (CONT.)

Cones (cone shaped)
- Sharp, color vision; 6 million of them
- Highest concentration of cones is in the back middle of the retina
- 3 proteins for color vision, allowing absorption of 3 different wavelengths of light
o Horizontal cells, bipolar cells, and amacrine cells make the image crisper.
o Axons of ganglion cells form the optic nerve.
o 3 layers of neurons (an outgrowth of the brain):
- Photoreceptor layer
- Bipolar neuron layer
- Ganglion neuron layer

Major Processes of Image Formation
o Refraction of light (when light bends)
- By the cornea and lens
- Light rays must fall upon the retina
o Accommodation of the lens
- The lens changes shape so light is focused on the retina
- The lens gets fatter when you are looking at something close up
o Constriction of the pupil
- Regulation of light intensity

Refraction by Cornea and Lens
o When the image falls on the retina at the focal point, the image is upside down.
o The brain flips the image.
o It is thought that when a baby is first born, the brain doesn't do this yet; part of the baby's flailing around is teaching the brain to flip the image right side up.

Central Pathways of Vision (the stimulus proceeds in this order)
o Retina
o Optic nerve
o Optic chiasm- Where the two optic nerves cross. Fibers from the medial retina cross to the other side of the brain and follow the pathway back, so the image is processed in the visual area on the opposite side of the body; fibers from the lateral retina stay on the same side of the brain.
- Something directly in front is processed on both sides of the brain.
- Something in the left field of vision is processed on the right, and vice versa.
- In the diagram, the pig sees the farmer and processes the image on the opposite side of the brain; if the farmer is directly in front of the pig, the image is processed on both sides.
o Optic tract
o Lateral geniculate nucleus of the thalamus
o Optic radiation
o Area 17- The visual region of the brain.

HEARING
Auricle- The outside of the ear; funnels sound waves into the external auditory canal.
Earwax- An insecticide; keeps bugs from nesting in the ears.
Eardrum/tympanic membrane- Attached to the malleus, incus, and stapes, the three smallest bones in the body. It vibrates at exactly the same frequency as the sounds you are hearing. The three bones are essentially lever systems that increase force; they amplify the vibrations.
Cochlea- Where hearing starts. It is filled with fluid, and fluids are incompressible; the stapes plunges against it, causing a wave.
o The cochlea has a top tube and a bottom tube, and the stapes is attached to the top tube. The fluid in the cochlea moves back and forth at the frequency of the stapes.
o The channel between the upper and lower channels is also full of fluid and vibrates back and forth. In this inner channel are little hair cells that stick up. The hair cells are all different lengths, and the frequency each responds to is a function of its length: at different frequencies, different hairs swing back and forth violently.
- If one hair cell starts moving back and forth violently with wide swings, its "door," a sodium channel, swings open and closed. When it opens, it causes an EPSP at the top of the hair cell. The EPSP is not continuous; it alternates.
- The hair cells beside it are stimulated randomly; their gates open partway and cause low-level EPSPs all the time. The brain knows you are hearing whichever circuit is undulating widely, from full action-potential firing to none.
o If an animal is subjected to an intense sound for a long time, the hair cells are overstimulated and can break off, and the animal can go deaf. Hair cells do not come back.
o Dogs have much shorter and much longer hair cells than we do, which is why they can hear things we can't.

The perception of sound is dependent upon sound wave frequency.
High-frequency sound- Sound waves hit the ear and make the eardrum vibrate back and forth very frequently. They do not wrap around an obstruction very well, so the animal can tell the origin of the sound by which eardrum is vibrating with more force.
Low-frequency sound- Wraps around an obstruction very well, so there is instead a delay between the two eardrums: the brain can tell which eardrum was stimulated first and rationalizes that the sound is coming from that direction. You actually can't tell whether a sound is coming from directly in front of or behind you; you use other cues for that.
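The time-delay cue for low-frequency localization can be made concrete with a standard textbook approximation. This sketch is not from the lecture: it uses Woodworth's spherical-head model with commonly cited values for the head radius and the speed of sound. Mirror-image front and back positions produce identical delays, which is why, as the notes say, other cues are needed to tell front from back.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C
HEAD_RADIUS = 0.0875    # m, a commonly used average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head estimate of the difference in arrival
    time (seconds) between the two ears for a distant sound source.
    Azimuth: 0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    itd_us = interaural_time_difference(az) * 1e6
    print(f"azimuth {az:2d} deg -> ITD = {itd_us:5.0f} microseconds")
```

At 90 degrees the model gives roughly 650 microseconds, the familiar upper bound for human interaural delays; the brain resolves timing differences far smaller than that.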
EQUILIBRIUM (BALANCE)
Vestibular apparatus- Thickened regions called maculae within the saccule and utricle.
o Utricle senses horizontal acceleration (while standing): forward and backward motion.
o Saccule senses vertical acceleration (while standing): up and down motion.
o Otoliths- Calcium carbonate crystals that move when you tip your head.
o Semicircular canals- Large ring-like structures full of fluid, oriented in three perpendicular planes. They report rotational changes in direction. The fluid keeps spinning if you twirl in circles; when you stop and it keeps spinning, you get dizzy. This is called vertigo.
o Cupula- A little flap that sticks up into the semicircular canal. It bends with movement in one direction, and action potential frequency either rises sharply or stops.
o The animal only senses what is happening at its head, not anywhere else.

DIGESTIVE PHYSIOLOGY

DIGESTIVE SYSTEM
Alimentary canal/GI tract- Essentially a tube from your mouth to your anus.
o Intrinsic controls
- Nerve plexuses near the GI tract
- Short reflexes mediated by local enteric plexuses (the "gut brain")
o Extrinsic controls- The central nervous system regulates what the gut brain is doing; it either accelerates the process of digestion or turns it off.
- Long reflexes arising within or outside of the GI tract
- Involve CNS centers and extrinsic autonomic nerves

Ingestion- Swallowing food.
Propulsion- Moving food through the canal.
Mechanical digestion- Grinding of food by the teeth, kneading in the stomach.
Chemical digestion- Breakdown of food.
Absorption- Nutrients are pulled from the food and dumped into the bloodstream. Takes place along the walls.
Defecation- Eliminating unused nutrients and waste products.

ENTERIC NERVOUS SYSTEM
Endocrine cells release hormones.
Distension/stretching of the canal activates mechanoreceptors. Chemoreceptors respond to the amount of acid in the canal; they bind to food and gauge the degree of digestion.
Secretory cells found throughout the alimentary canal secrete digestive enzymes, acids, and mucus into the lumen.
Sensory cells branch out from the gut brain.
The digestive process is automatic: if you put something in the oral cavity, it will be digested even if the autonomic nervous system is cut away. The parasympathetic nervous system enhances the digestive process; the sympathetic nervous system inhibits it. An automatic mechanism starts the digestive process.
PERISTALSIS AND SEGMENTATION
Peristaltic waves are beltlike contractions of the canal that squeeze food along it. Once triggered, they travel all the way down the canal.
Segmentation- Bands of the alimentary canal spontaneously constrict at many points at once.
o Provides maximum contact with the walls of the canal to aid the process of absorption.

HUNGER AND SATIETY
Stretch receptors respond to distension. The pathway runs from the small intestine to the nucleus of the solitary tract (NTS), and from the NTS to the arcuate nucleus at the base of the hypothalamus. This nucleus is a collection of somata that acts as a relay center and doesn't affect appetite on its own. In the arcuate nucleus are two important neuron types:
o Neurons that secrete neuropeptide Y
- Activated by the NTS when there is a lack of distension
- Cause hunger
- Activate the PVN and lateral hypothalamus
o Neurons that release POMC
- When the duodenum stretches, it sends a signal to the NTS, which communicates with the arcuate nucleus and turns on POMC release.
- Cause satiety
- Activate the DMN, VMN, and PVN
If you eat a lot of food very fast, it takes time for it to reach the duodenum, which fills slowly and only eventually turns off your appetite. This is how eating too fast can make you gain weight, and why, afterward, you feel far too full.
Targets of arcuate nucleus neurons: ventromedial nucleus (VMN), dorsomedial nucleus (DMN), paraventricular nucleus (PVN), lateral hypothalamic area.
o Lateral hypothalamus- When activated, makes you feel hungry.
o DMN, VMN- Make you feel full.
o PVN- Sometimes when it fires the animal is hungry; other times the animal is satiated.

ORAL CAVITY
Where ingestion occurs. Digestion begins when food mixes with saliva, which contains the very first digestive enzyme, salivary amylase. It breaks down carbohydrates, including glycogen.
The salivary reflex causes saliva to be secreted from the salivary gland: the signal flows from the taste bud to the NTS, to the superior salivatory nucleus, through the nerve, and to the salivary gland, which releases saliva. If you smell food, you will salivate.

Swallowing
Buccal phase- When food is in the oral cavity and the tongue is moving it around. Tongue motion in this phase is under conscious control.
Pharyngeal-esophageal phase- Under unconscious control. The tongue pushes food down into the esophagus, where a peristaltic wave takes over. Everything from here to the rectum is unconscious.
o If you choke, that means food went into your trachea; the epiglottis normally covers the trachea to prevent this.
o The wave reaches the stomach. The cardiac/gastroesophageal sphincter is closed when food arrives, but a circuit running ahead of the wave causes it to dilate so the food can move down into the stomach.
Distal pharyngeal-esophageal phase
o Acid reflux- The cardiac sphincter doesn't close up, and acid rises into the esophagus. Stand up, and gravity pulls the acid back down.

STOMACH
Has three layers of muscle that help with contraction. Once food is being digested and mixed with HCl, it is called chyme.
Stomach lining
o The gastric pit gives way to the gastric gland. Inside the gastric gland is the G cell. Below that are mucous neck cells that secrete mucus. Below those, in the wall's mucosa, are parietal cells that produce HCl. Below those are chief cells that release pepsinogen.
- Pepsinogen- A zymogen, i.e., an inactive enzyme. It is the inactive form of pepsin, which breaks the bonds between amino acids and so breaks down proteins. Pepsinogen is activated to pepsin when it encounters HCl; the most important role of HCl is the activation of digestive enzymes. The enzyme is released as pepsinogen so that pepsin doesn't digest the chief cell that secreted it.
Stages of the digestive process:
o Cephalic phase
- Cephalic means head; the brain is the major regulator during this phase. The animal sees food, tastes food, smells food, hears food, thinks about food: those are the initiators of the cephalic phase. They activate the dorsal motor nucleus of the vagus nerve.
- Two pathways: the first releases ACh onto the parietal cell; the second releases GRP onto the G cell. The G cell then secretes gastrin, which feeds back and further activates the parietal cell. The cell excretes protons, which help to make HCl. The two pathways do not always occur at the same time.
- This phase is inhibited by increased sympathetic outflow, including stressful events and depression.
o Gastric phase
- The stomach is the major regulator of the digestive process. Something in the stomach causes distension of the stomach wall, while everything in the cephalic phase continues to happen.
- Stretch receptors send information back up to the central nervous system, backwards through the vagus nerve to the vagal nucleus. The vagal nucleus then increases vagal outflow, increasing the frequency of ACh and GRP release. This is known as a vagovagal reflex (information goes up and then back down).
- If we cut the vagus nerve, information can't go up or down, but the stomach can still release HCl and gastrin, because stretch receptors (local reflexes) can also trigger ACh release. Much more HCl is secreted in this phase.
o Intestinal phase
- The small intestine is the regulator of the digestive process. This is the most complicated phase. The satiety mechanism is part of it: the duodenum is distended, and you feel full.
- The pyloric sphincter opens and injects chyme into the duodenal lumen, causing distension. Many hormones are released into the local circulation; they feed back onto the stomach, onto the G cell and the parietal cell. Some hormones stimulate and some inhibit, either enhancing or stopping the contractility of the stomach.
- HCl's major function is to activate the zymogens. If the arriving nutrients aren't digested enough, hormones are released that augment digestion; if what comes out is too digested, hormones cause inhibition and stop the production of HCl. Too much HCl can burn through the mucus layers. You need the right amount: enough to turn on the digestive enzymes but not so much that you digest yourself.
o If you cut the vagus nerve, you delay the digestive process slightly; the first phase wouldn't occur.
o All of these phases can occur simultaneously.

Parietal cell
o Produces hydrochloric acid.
o Chloride exits through a chloride channel while the proton exits through a proton pump; they combine to form HCl in the lumen of the gland.
o The cell cannot produce HCl inside itself and release it by exocytosis. It does not secrete HCl as such; it secretes Cl- and H+ separately, and they combine in the mucus.
o As a hydrogen ion goes out, K+ comes in. The proton pump burns ATP, and potassium cycles. If the pump is turned on, everything else in the parietal cell follows (a downhill biochemical process).
o Chloride comes from the capillary bed and enters the cell through an antiporter: as chloride goes in, bicarbonate (HCO3-) goes out and is picked up by the circulatory system. This buffers the blood against pH changes. The bicarbonate is formed when water diffuses into the cell and splits into a hydroxyl group, liberating a proton; the proton goes out through the pump. The hydroxyl group is not stable, so carbonic anhydrase mashes it together with carbon dioxide, forming the bicarbonate ion. Without this ion, the pump would not function.
Sources of CO2 = Krebs cycle and air
o Regulating HCl
- Go: ACh binds to the muscarinic receptor, which also opens a calcium channel, and gastrin binds to the CCK receptor. When these are activated, they liberate Gαq and activate phospholipase C in the membrane. Phospholipase C takes PIP2 and liberates DAG and IP3. IP3 acts on the ER and causes the release of calcium, which turns on the hydrogen pump; DAG activates PKC, which also turns on the pump.
- Histamine binds its receptor and liberates Gαs, which turns on adenylyl cyclase (AC), raising cAMP, which activates PKA and turns on the proton pump. PKA is the major mechanism that turns on the pump, and it turns it on to a much higher magnitude.
- Stop: Somatostatin binds to its receptor and liberates Gαi, which turns down AC. At the same time, prostaglandin binds to its own receptor and does the same. Inhibiting AC turns down PKA, which turns down the proton pump.
o Relation of parietal and enterochromaffin-like (ECL) cells
- The vagus nerve arrives through the gut brain and releases ACh onto the parietal cell. Gastrin binds to CCK receptors on both the ECL cell and the parietal cell, encouraging the release of histamine from the ECL cell.
- The ECL cell sits near the parietal cell. A branch of the vagus nerve dumps ACh onto the ECL cell, which responds by producing and secreting histamine. The histamine is dumped into the extracellular space, diffuses over, and binds to the histamine receptor on the parietal cell. This is the origin of the histamine mentioned above.
- Just south of the G cell is a D cell; there is also a D cell to the right of the ECL cell, in a different part of the stomach. Both have access to the contents of the stomach lumen and pick up the chemistry of the stomach contents: the D cell monitors pH, while the G cell examines the chemistry of the food.
- When the GRP receptor on the G cell is bound, the G cell secretes gastrin. The gastrin enters the local circulation, feeds back, and binds the CCK receptor on the parietal cell, turning on the proton pump. Some of the gastrin also binds the CCK receptor on the ECL cell and causes the release of more histamine.
- If there are undigested proteins in the lumen of the stomach, the G cell detects them, and even in the absence of GRP this triggers the G cell to secrete gastrin.
- The D cell secretes somatostatin, which turns off the parietal cell. Low pH causes the release of somatostatin, because if the pH drops too far you can burn through your stomach; the D cell is essentially the safety mechanism. Somatostatin also inhibits the G cell, which slows the release of gastrin, which slows the proton pump.
o Mucus cells
- Bicarbonate in the mucus acts as a buffer against the low-pH environment of the stomach.
- Mucous neck cells secrete bicarbonate in addition to the mucus.
o Gastric contractile activity
- Any time something in your stomach causes distension, peristaltic waves begin in the stomach.
- Think of the stomach as a large pastry bag full of vomit with which you are going to decorate your enemy's birthday cake. The opening is tiny, so not all of the contents go out the hole; most bubble backwards. This is what happens with each peristaltic wave: most of the chyme is pushed back up as the wave moves down.
- With each wave that approaches the pyloric sphincter, the pylorus dilates, and ½ to 2 milliliters of chyme is injected into the duodenum. If the chyme is not digested to the correct extent, the duodenum triggers increased release of HCl and sends a circuit back that causes the pyloric sphincter to ignore the signal to open on the next peristaltic wave.
- The intestinal phase is thus critical in regulating the amount of HCl and the emptying rate of the stomach.
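The "go" and "stop" signals above can be summarized with a toy scoring function. This is an invented illustration, not part of the lecture; the weights only encode the qualitative points that PKA (histamine via Gαs) is the dominant activator and that Gαi signaling can shut the pump down.

```python
def proton_pump_activity(ach: float, gastrin: float, histamine: float,
                         somatostatin: float, prostaglandin: float) -> float:
    """Crude 0-to-1 score for H+ pump drive in a parietal cell.
    ACh and gastrin act through Gq (PLC -> IP3/Ca2+ and DAG/PKC);
    histamine acts through Gs (AC -> cAMP -> PKA, the major 'go' path);
    somatostatin and prostaglandin act through Gi, turning AC back down.
    All inputs are relative signal strengths between 0 and 1."""
    go = 0.2 * ach + 0.2 * gastrin + 0.6 * histamine  # PKA dominates
    stop = min(1.0, somatostatin + prostaglandin)     # Gi inhibition of AC
    return max(0.0, min(1.0, go * (1.0 - stop)))

# Gastric phase: vagal ACh plus gastrin-driven histamine from ECL cells.
print(proton_pump_activity(ach=0.8, gastrin=0.6, histamine=0.9,
                           somatostatin=0.1, prostaglandin=0.1))  # ~0.66
# D-cell safety brake at low pH: somatostatin shuts secretion off.
print(proton_pump_activity(ach=0.8, gastrin=0.6, histamine=0.9,
                           somatostatin=0.9, prostaglandin=0.3))  # 0.0
```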
Summary: The duodenum is in charge of the emptying rate of the stomach.
o A stomach full of fat empties faster than a stomach full of protein, because pepsinogen is converted to pepsin, which is released onto protein to break down the amino acid chains.
o The diet the animal is on influences the rate at which food is released from the stomach; it affects the gastric emptying rate by means of the duodenum.
o What is going on when your stomach growls: the thought of food turns on the cephalic phase of digestion, which turns on gastric contraction, and the gases bubbling backward cause the rumbling.

Peptic ulcer disease
o Too much hydrochloric acid burns holes in the lining of the stomach.
o Caffeine binds the muscarinic receptor in the absence of ACh. This tricks the system into releasing HCl, which starts working on the stomach lining itself.
o You start to bleed into the stomach and digest your own blood. This triggers the G cell to release gastrin, which increases the production of HCl.
o This is a positive feedback loop, and if it isn't broken the animal will destroy itself.
o Our physiology did not evolve to drink coffee! If you eat something when you drink caffeine, you should be okay.
<urn:uuid:9fa31fd0-1c8b-4acb-b9a2-21ee885f3cfe>
CC-MAIN-2021-21
https://studysoup.com/tsg/193293/fundamentals-of-electric-circuits-5-edition-chapter-5-problem-5-30
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00615.warc.gz
en
0.907041
4,313
3.578125
4
For many people today, reading the language of seventeenth-century drama can be a problem—but it is a problem that can be solved. Those who have studied Latin (or even French or German or Spanish), and those who are used to reading poetry, will have little difficulty understanding the language of The Two Noble Kinsmen. Others, though, need to develop the skills of untangling unusual sentence structures and of recognizing and understanding poetic compressions, omissions, and wordplay. And even those skilled in reading unusual sentence structures may have occasional trouble with the words in the play. Four hundred years of “static” intervene between its speaking and our hearing. Most of its vocabulary is still in use, but some of its words are no longer used, and many now have meanings quite different from those they had in the seventeenth century. In the theater, most of these difficulties are solved for us by actors who study the language and articulate it for us so that the essential meaning is heard—or, when combined with stage action, is at least felt. When we are reading on our own, we must do what each actor does: go over the lines (often with a dictionary close at hand) until the puzzles are solved, the characters speak in words and phrases that are, suddenly, understandable and meaningful, and we find ourselves caught up in the story being dramatized. As you begin to read the opening scenes of a seventeenth-century poetic drama, you may notice unfamiliar words. Some are simply no longer in use. In the early scenes of The Two Noble Kinsmen, for example, we find the words meditance (i.e., meditation), visitating (i.e., visiting), unpanged (i.e., not afflicted with mental or physical anguish), and futurely (i.e., hereafter). More problematic are the words that are still in use but that now have different meanings. In the opening scenes of this play, for example, the word undertaker is used where we would say “supporter, helper,” respect where we would say “pay attention to,” quaint where we would say “pretty,” and pretended where we would say “intended” or “planned.” Such words will become familiar as you continue to read seventeenth-century drama. Some words found in seventeenth-century poetic drama are strange not because of the “static” introduced by changes in language over the past centuries but because these are words that the writer is using to build a dramatic world that has its own space, time, and history. 
In the opening scene of The Two Noble Kinsmen, for example, the playwrights construct a vivid confrontation between a royal Athenian wedding party with its “maiden pinks” and “oxlips” and “lark’s-heels trim,” on the one hand, and, on the other, three weeping queens whose language makes vivid the devastated world of Thebes from which they come, with its unburied corpses lying “swoll’n” in “th’ blood-sized field,” “blist’ring ’fore the visitating sun” and attacked by “beaks of ravens, talons of the kites,” their skulls “grinning at the moon.” The language of this dramatic world fills it not only with such “mortal loathsomeness” but also with mythological gods and heroes—with “Mars’s altar,” “Juno’s mantle,” “holy Phoebus,” “helmeted Bellona,” and “Hercules” tumbling down upon “his Nemean hide”—as well as with allusions to a familiar mythological past: to Hippolyta’s former life as the “dreaded Amazonian” who killed “the scythe-tusked boar,” to the renown of Theseus, whose “fame / Knolls in the ear o’ th’ world,” to (in scene 2) Juno’s “ancient fit of jealousy,” and to Phoebus Apollo’s past rage against “the horses of the sun.” Such language builds the world in which the adventures of “two noble kinsmen” are played out. In an English sentence, meaning is quite dependent on the place given each word. “The dog bit the boy” and “The boy bit the dog” mean very different things, even though the individual words are the same. Because English places such importance on the positions of words in sentences, unusual arrangements can puzzle a reader. Seventeenth-century poetic drama frequently shifts sentences away from “normal” English arrangements—often to create the rhythm that is sought, sometimes to emphasize a particular word, sometimes to give a character his or her own speech patterns, or to allow the character to speak in a special way. When we attend a good performance of such a play, the actors will have worked out the sentence structures and will articulate the sentences so that the meaning is clear. When reading the play, we need to do as the actor does: that is, when puzzled by a character’s speech, check to see if words are being presented in an unusual sequence. Sometimes such dramas rearrange subjects and verbs (e.g., instead of “He goes” we find “Goes he”). In The Two Noble Kinsmen, when Hippolyta explains that she never before followed a path so willingly (“never yet / Went I so willing way”), she uses such a construction (1.1.114–15). So does Theseus when he later says “Now turn we towards your comforts” (1.1.275). The “normal” order would be “I went” and “we turn.” These dramas also frequently place the object or the predicate adjective before the subject and verb (e.g., instead of “I hit him,” we might find “Him I hit,” or, instead of “It is black,” we might find “Black it is”). Theseus provides an example of this kind of inversion when he says “But those we will depute” (1.4.12), and another example when he says “Troubled I am” (1.1.86). The “normal” order would be “we will depute those” and “I am troubled.” Often The Two Noble Kinsmen uses inverted sentences that fall outside these categories. Such sentences must be studied individually until their structure can be perceived. Theseus’s comment, “Fortune at you / Dimpled her cheek with smiles” (1.1.72–73), is a relatively simple example of such an inversion. Its “normal” order would be “Fortune dimpled her cheek with smiles at you.” Arcite’s “[H]ere to keep in abstinence we shame / As in incontinence” (1.2.6–7) is more complicated. 
Its “normal” order would be, approximately, “We shame to keep in abstinence here as [much] as in incontinence.”

Inversions are not the only unusual sentence structures in plays of this period. Often words that would normally appear together are separated from each other. Like inversions, separations—of subjects and verbs, for example—frequently create a particular rhythm or stress a particular word, or else draw attention to a particular piece of information. Take, for example, Theseus’s “Hercules, our kinsman, / Then weaker than your eyes, laid by his club” (1.1.73–75). Here the subject (“Hercules”) is separated from its verb (“laid by”) by the subject’s two modifiers, “our kinsman” and “Then weaker than your eyes.” The first modifier provides a piece of information that contributes to the play’s mythological background; the second, extolling the First Queen’s youthful eyes as more powerful than legend’s strongest man, makes vivid Theseus’s memory of her when young. By allowing these modifiers briefly to shoulder aside the verb, the sentence calls attention to a bit of mythological context and to the contrast between the remembered powerful eyes of the young queen and the present “blubbered” eyes (1.1.208) of the widow.

Or take the Second Queen’s

this thy lord,
Born to uphold creation in that honor
First nature styled it in, shrunk thee into
The bound thou wast o’erflowing[.]

Here the subject and verb (“thy lord . . . shrunk”) are separated by a truncated clause (“[who was] born to uphold creation in that honor first nature styled it in”), a clause that justifies the Second Queen’s affirmation of Theseus’s conquest of Hippolyta: Theseus, she claims, was born to preserve intact the superiority of the male, to uphold that which is right and proper in the natural world. By inserting this metaphysical clause between “thy lord” and “shrunk,” the queen presents this worldview as self-evident, not a point to be argued.

On a first reading of sentences such as these, it is helpful to locate the basic sentence elements and mentally rearrange the words into a more familiar order; on later readings, or when attending a good performance of the play, we can fully enjoy the sentences’ complexity and subtlety.

Locating and rearranging words that “belong together” is especially necessary in passages in which long interruptions separate basic sentence elements. When the Second Queen begs Hippolyta, as “soldieress,” to entreat Theseus to protect her and the other queens (“Bid him that we . . . Under the shadow of his sword may cool us”), she uses such a construction:

That equally canst poise sternness with pity,
Whom now I know hast much more power on him
Than ever he had on thee, who ow’st his strength
And his love too, who is a servant for
The tenor of thy speech, dear glass of ladies,
Bid him that we, whom flaming war doth scorch,
Under the shadow of his sword may cool us[.]

Here, the separation between “soldieress” and “bid” is extensive and complex, made up of four clauses, three modifying “soldieress” and one modifying “he” (i.e., Theseus)—so complex that Hippolyta is addressed again (“dear glass of ladies”) before the verb (“Bid”). And at this point, the subject-verb sequence (“we . . . may cool us”) is interrupted for a second time, here by a clause and two prepositional phrases.

In The Two Noble Kinsmen, sentences often combine unusual structures in complicated configurations.
Consider the Third Queen’s protest against the unfairness of the edict forbidding the burial of her dead husband, who died valiantly in battle. Even suicides, she argues, are allowed burial:

Those that with cords, knives, drams, precipitance,
Weary of this world’s light, have to themselves
Been death’s most horrid agents, human grace
Affords them dust and shadow.

What initially may appear to be the elements of this sentence’s structure (“Those that . . . have . . . been . . . agents”) are separated by three phrases (“with cords, knives, drams, precipitance,” “weary of this world’s light,” “to themselves”). Only in the third line, with the introduction of a new subject (“human grace”) and its verb (“affords”), do we discover that the long opening clause is, in effect, the indirect object of “affords,” an expansion of the “them” who are afforded “dust and shadow.” It is almost impossible to rearrange the words of these lines into a “normal,” straightforward sentence; however, once one untangles the structures and understands the function of the basic sentence elements and the interrupting words and phrases, the lines become a powerful, angry plea for the queen’s cause.

The Two Noble Kinsmen depends heavily on wordplay, especially on metaphors and on puns. A metaphor is a play on words in which one object or idea is expressed as if it were something else, something with which the metaphor suggests it shares common features. The Third Queen, when begging Emilia to take her part, uses a metaphor to express the reward that will be in store for Emilia: “This good deed,” she says, “Shall raze you out o’ th’ book of trespasses / All you are set down there” (1.1.34–36). Emilia’s life is here imaged as a written record of her sins; the “good deed” here becomes a kind of eraser that will obliterate that record. Later, when the First Queen wants to suggest that Theseus is powerful enough to redeem from King Creon of Thebes the rotting corpses of her husband and his fellow kings for proper burial, she calls Theseus “Thou purger of the earth” (1.1.52), thereby through metaphor making him into war itself, whose act of destruction was often compared to a cleansing of the earth. The Third Queen also resorts to metaphor when she apologizes for not being able to achieve eloquence because she is weeping: “O, my petition was / Set down in ice, which by hot grief uncandied / Melts into drops” (1.1.118–20). She thus compares the fixed state of the speech she had prepared in her mind to ice that her grief has melted (“uncandied”) into tears.

In this play, metaphors tend to follow each other in rapid succession. Note, for example, Emilia’s description of the love between Theseus and Pirithous as contrasted with her youthful love for “the maid Flavina” (1.3.96):

Theirs has more ground, is more maturely seasoned,
More buckled with strong judgment, and their needs
The one of th’ other may be said to water
Their intertangled roots of love.
In these four lines, the love of Theseus and Pirithous is, first, an edifice or structure on a larger foundation or base (“more ground”); it becomes “more maturely seasoned” timber, then a body more strongly armored (i.e., its body armor fastened “with strong judgment”), and, finally, a set of intertwined roots watered by “their needs / The one of th’ other.” Only occasionally, as in the following example, does a single metaphor dominate many successive poetic lines:

not to swim
I’ th’ aid o’ th’ current were almost to sink,
At least to frustrate striving; and to follow
The common stream, ’twould bring us to an eddy
Where we should turn or drown; if labor through,
Our gain but life and weakness.

Here, Arcite urges Palamon to join him in leaving Thebes, which he considers corrupt and therefore dangerous for the two of them, whether they refuse to go along with the city’s corruption or accept it and attempt to fit in. His argument is presented in the form of an extended metaphor in which they are swimmers in a strong current. If they attempt to go against the current in which they find themselves, they will come close to sinking or be frustrated and defeated; if, on the other hand, they choose to go with the current, they will be trapped and spun around in an eddy and either drown, or, if they escape the whirlpool, will be left barely alive, weakened and debilitated.

Because in this play metaphors are used so frequently and (whether in rapid succession or extended over many lines) written in such highly compressed language, they require, on first reading, an untangling similar to that recommended for the play’s complex sentence structures. But, as with the complex structures, the untangling is worth the effort. In Arcite’s speech quoted above, for instance, the image of the swimmers in the stream, struggling against the current or hurled around in the whirlpool, is remarkably vivid and is captured in a mere handful of lines of poetry.

A pun is a play on words that sound the same but that have different meanings (or on a single word that has more than one meaning). The Two Noble Kinsmen uses both kinds of puns, and uses them often. In the play’s first scene, for example, Theseus responds to the pleas of the three queens that he forgo his wedding in order to battle Creon by saying

Why, good ladies,
This is a service whereto I am going
Greater than any was; it more imports me
Than all the actions that I have foregone,
Or futurely can cope.

In these lines, he puns first on the word service, which means both “duty of a soldier” and “ceremony” (here, of marriage). This is meaningful wordplay, in that it brings together in a single word his commitment to his military duty and to Hippolyta. He then puns on the word actions as “military engagements” and as “acts or deeds.” Here, the primary meaning is military, but once again he nicely joins the deeds of his life with his military feats in a single word. Service and actions each play on a single word that has more than one meaning. Another example of the many such puns in this play is Arcite’s “We shall know nothing here but one another, / Hear nothing but the clock that tells our woes” (where tells means both “counts” and “reports” [2.2.45–46]); yet another is the Woman’s response to Emilia’s likening of a virgin to a rose: “Sometimes her modesty will blow [blossom, flourish] so far / She falls for ’t” (where “she falls” means simultaneously “the rose falls off the stem” and “the virgin surrenders her chastity” [2.2.177–78]).
When Theseus, later in the first scene, says farewell to Hippolyta with the words “I stamp this kiss upon thy currant lip; / Sweet, keep it as my token” (1.1.253–54), he puns on the words currant and current, words that sound the same but have different meanings. In this interesting example of wordplay, the primary meaning, currant, “red, like the fruit,” applies most immediately and naturally to Hippolyta’s lips; the secondary meaning, current, “sterling, genuine, having the quality of current coin,” is emphasized by the words stamp and token, terms related to the stamping of coins and to tokens as stamped pieces of metal used like coins (another bit of wordplay, since this meaning of token is secondary to Theseus’s primary meaning of “keepsake” or “love token”). The same type of pun is found in Act 3, scene 1, when, in response to Arcite’s “Dear cousin Palamon—,” Palamon replies “Cozener Arcite” (46–47). A cozener is a cheater or deceiver, and play on these similar-sounding words was common. Though we have noted many examples of such wordplay, a careful reader will discover many that we failed to see or that we had insufficient space to mention—some of them trivial, but many of them interesting and sometimes significant.

Implied Stage Action

Finally, in reading seventeenth-century poetic drama—indeed, in reading any drama—we should always remember that what we are reading is a performance script. The dialogue is written to be spoken by actors who, at the same time, are moving, gesturing, picking up objects, weeping, shaking their fists. Some stage action is described in what are called “stage directions”; some is signaled within the dialogue itself. We must learn to be alert to such signals as we stage the play in our imaginations.

Sometimes the dialogue offers an immediately clear indication of the action that is to accompany it. In The Two Noble Kinsmen 2.5, for example, Pirithous takes the disguised Arcite to Emilia, saying to him “Kiss her fair hand, sir.” When Arcite then says to Emilia, “Dearest beauty, / Thus let me seal my vowed faith” (53–55), it is clear that he kisses her hand. Again, in 3.5, when the Jailer’s Daughter says to the Schoolmaster “Give me your hand. . . . I can tell your fortune” and then says “You are a fool” (90–93), we can feel certain that between her promise to tell his fortune and her reading of his character as that of “a fool,” she has looked at his hand. (In each of these cases, we have added the appropriate stage direction, marked in brackets to indicate that it is our addition.)

Often in this play, though, signals to the reader (and to the director, actor, and editor) are not at all clear. In the opening scene, for instance, even though the early text provides extremely clear directions for the opening action, specifying which queen kneels to which member of the Athenian nobility, it gives almost no guidance as to when they stand; thus, our bracketed stage directions raising the queens from their knees are placed where it seems to us to make most sense for them to stand. We put these directions for the queens to rise, one by one, at the points where each is explicitly instructed to rise by the Athenian she is supplicating, or when, in the case of the Second Queen, Hippolyta grants what is being begged of her.
Conversely, later in the scene, it is made clear that at some point Hippolyta and Emilia kneel to Theseus; the evidence is at line 240, when he says to them “Pray stand up,” and then adds “I am entreating of myself to do / That which you kneel to have me” (241–42). Here, the point at which they stand is specified in the dialogue, but the play leaves much less clear the moment when each of them should kneel. In this passage, we locate our directions for them to kneel at the points at which each begins explicitly to petition Theseus, again putting the directions in brackets. However, we would not argue that our edited version is the only possible staging.

In The Two Noble Kinsmen, then, readers are often given the opportunity to practice the skill of reading the language of stage action, of imagining the movement or gesture that should—or, at least, that might—accompany a given bit of dialogue. That practice repays us many times over when we reach scenes heavily dependent on stage business. Act 3, scene 5, for instance, fills the stage with action and spectacle, from the gathering of the countrymen and -women, dressed in costumes appropriate for the morris dance to follow, to the entrance of the mad young woman (the Jailer’s Daughter), who then joins the dancers, to the arrival of the court party and the setting out of chairs, to the morris dance itself, and then the formal exit of Theseus and his court. For a reader, this scene requires a vivid stage-related imagination. But with such an imagination, scenes like this one—along with, for example, the scene of the interrupted trial by combat (3.6) and the scene in which the two knights and Emilia each pray before the altar of their chosen god (5.1)—may come to life much as they do on the stage.
Nations need heroes, but the construction of a national pantheon is rarely straightforward or uncontested. Consider the debate in the United States about which faces should adorn the national currency. The founding figures of American Independence—Jefferson, Washington, Hamilton, Madison, and Franklin—are all represented on the nation’s currency, albeit on different denominations. So are the 19th-century Presidents Andrew Jackson, Abraham Lincoln, and Ulysses S. Grant. In recent years, right-wing Americans have campaigned for their hero, Ronald Reagan, to be represented on the national currency. This, it is said, is necessary to bring it in line with contemporary sentiments. Of 20th-century Presidents, Franklin Delano Roosevelt is represented on the dime, and John F. Kennedy on the half dollar. Both were Democrats. Republicans now demand that the pantheon feature one of their ilk. In 2010, a Congressman from North Carolina, Patrick McHenry, canvassed for a law mandating that Ulysses S. Grant be replaced on the fifty-dollar bill by Ronald Reagan. ‘Every generation needs its own heroes’, said McHenry. The American hero he was anointing for our times was Reagan, ‘a modern day statesman, whose presidency transformed our nation’s political and economic thinking’.

Turn now to that other large, complex, cacophonous democracy—our own. After India became independent, the national pantheon offered to its citizens was massively dominated by leaders of the Congress Party. Mahatma Gandhi was positioned first, with Jawaharlal Nehru only a short distance behind. Both had played important roles in the freeing of the country from colonial rule. Both were truly great Indians. That said, the popular perception of both was helped by the fact that the party to which they belonged was in power for the crucial decades after Independence. Newspapers, the radio, and school textbooks all played their role in the construction of a narrative in which Gandhi was the Father of the Nation and Nehru its Guide and Mentor in the first, formative years of the Republic’s existence.

Until the 1960s, the dominance of Nehru and Gandhi in the national imagination was colossal. When, in that decade, the American scholar Eleanor Zelliot wrote a brilliant dissertation on B. R. Ambedkar and the Mahar movement in Maharashtra, she was unable to find a publisher. But then the Congress started to lose power in the States. In 1977 it lost power at the Centre for the first time. The rise of new political parties led naturally to revisionist interpretations of the past. New heroes began to be offered for inclusion in the nation’s pantheon, their virtues extolled (and sometimes magnified) in print, in Parliament, and, in time, in school textbooks as well.

The Indian who, in subsequent decades, has benefited most from this revaluation is B. R. Ambedkar. A scholar, legal expert, institution builder and agitator, Ambedkar played a heroic (the word is inescapable) role in bringing the problems of the Untouchable castes to wider attention. He forced Gandhi to take a more serious, focused interest in the plight of the depressed classes, and himself started schools, colleges and a political party to advance their interests. Ambedkar died in December 1956, a political failure. The party he founded scarcely made a dent in Congress hegemony, and he was unable to win a Lok Sabha seat himself. But his memory was revived in the 1970s and beyond. His works began to be read more widely.
He was the central, sometimes sole, inspiration for a new generation of Dalit activists and scholars. Obscure at the time of his death in 1956, condescended to by the academic community until the 1980s (at least), Ambedkar is today the only genuinely all-India political figure, worshipped in Dalit homes across the land. Notably, he is not a Dalit hero alone, his achievements recognized among large sections of the Indian middle class. No one now seeking to write a book on Ambedkar would have a problem finding a publisher.

The (belated) incorporation of Ambedkar into the national pantheon is a consequence largely of the political rise of the subaltern classes. Meanwhile, the pantheon has been expanded from the right by the inclusion of Vallabhbhai Patel. Paradoxically, while Patel was himself a lifelong Congressman, the case for his greatness has been made most vigorously by the Bharatiya Janata Party (BJP). BJP leaders and ideologues speak of Patel as the Other, in all respects, of Jawaharlal Nehru. They claim that if Patel had become Prime Minister, Kashmir would have been fully integrated into India. Under Patel, the country would have followed a more pragmatic (i.e. market-oriented) economic policy, while standing shoulder-to-shoulder with Western democracies against Godless Communism. Nor, if Patel had been in charge, would there have been (it is claimed) any appeasement of the minorities.

The BJP reading of history is tendentious, not least because Patel and Nehru were, in practice, collaborators and colleagues rather than rivals or adversaries. To be sure, they had their disagreements, but, to their everlasting credit, they submerged these differences in the greater task of national consolidation. Theirs was a willed, deliberate division of labour and responsibilities. Nehru knew that Patel, and not he, had the patience and acumen to supervise the integration of the princely states and build up administrative capacity. On the other side, as Rajmohan Gandhi demonstrates in his biography of Patel, the man had no intention or desire to become Prime Minister. For Patel knew that only Nehru had the character and personality to take the Congress credo to women, minorities, and the South, and to represent India to the world.

That the BJP has to make the case for Patel is a consequence of the Congress’s capture by a single family determined to inflate its own contributions to the nation’s past, present, and future. Sonia Gandhi’s Congress Party recognizes that a pantheon cannot consist of only two names; however, in their bid to make it more capacious, Congressmen place Indira and Rajiv alongside Nehru and Mahatma Gandhi. Thus the ubiquitous and apparently never-ending naming of sarkari schemes, airports, buildings, and stadia after the one or the other.

The preceding discussion makes clear that political parties and social movements play a crucial role in how the national past is conveyed to citizens in the present. Indians admired by parties and movements, such as Ambedkar and Patel, have had their achievements more widely recognized than might otherwise have been the case. By the same token, great Indians whose lives are incapable of capture by special interests or sects have suffered from the enormous condescension of posterity.

Consider, in this regard, the current invisibility in the national discourse of Kamaladevi Chattopadhyay. Married to a man chosen by her family, she was widowed early, and then married a left-wing actor from another part of India.
She joined the freedom movement, persuading Gandhi to allow women to court arrest during the Salt March and after. After coming out of jail, Kamaladevi became active in trade union work, and travelled to the United States, where she explained the relevance of civil disobedience to black activists (her turn in the South is compellingly described in Nico Slate’s recent book Colored Cosmopolitanism). After independence and Partition, Kamaladevi supervised the resettlement of refugees; still later, she set up an all-India network of artisanal co-operatives, and established a national crafts museum as well as a national academy for music and dance. Tragically, because her work cannot be seen through an exclusively political lens, and because her versatility cannot be captured by a sect or special interest, Kamaladevi is a forgotten figure today. Yet, from this historian’s point of view, she has strong claims to being regarded as the greatest Indian woman of modern times.

Earlier this year, I was invited to be part of a jury to select the ‘Greatest Indian Since Gandhi’. The organizers did me the favour of showing me a list of a hundred names beforehand. Many of the names were unexceptionable, but some strongly reflected the perceptions (and prejudices) of the present. For example, Kiran Bedi was in this list, but Kamaladevi Chattopadhyay wasn’t, a reflection only of the fact that the latter did not live in an age of television. There was also a regional bias: compiled in Delhi, the preliminary list did not include such extraordinary modern Indians as Shivarama Karanth, C. Rajagopalachari, and E. V. Ramaswami ‘Periyar’. There was also a marked urban bias: not one Indian who came from a farming background was represented, not even the former Prime Minister Charan Singh or the former Agriculture Minister (and Green Revolution architect) C. Subramaniam. Nor was a single adivasi on the list, not even the Jharkhand leader Jaipal Singh.

Since this was a provisional list, the organizers were gracious enough to accommodate some of these names at my request. The revised list was then offered to a jury composed of actors, writers, sportspersons and entrepreneurs, men and women of moderate (in some cases, considerable) distinction in their field. Based on the jury’s recommendations, the hundred names were brought down to fifty. The names of these fifty ‘great’ Indians were then further reduced to ten, in a three-way process in which the votes of the jury were given equal weightage with views canvassed via an online poll and a market survey respectively.

The results revealed two striking (and interconnected) features: the strong imprint of the present in how we view the past, and the wide variation between how the ‘greatness’ of an individual is assessed by the aam admi (the common man) and by the expert. Here are some illustrations of this divergence. In the jury vote, B. R. Ambedkar and Jawaharlal Nehru tied for first place; each had twenty-one votes. The online poll also placed Ambedkar in first place, but ranked Nehru as low as fifteenth, lower than Vallabhbhai Patel, Indira Gandhi, and Atal Behari Vajpayee. Even Sachin Tendulkar, A. R. Rahman, and Rajnikanth were ranked higher than Nehru by Net voters. In the jury vote, the industrialist J. R. D. Tata and the social worker Mother Teresa were ranked immediately below Ambedkar and Nehru. Vallabhbhai Patel was ranked fifth by the jury, but an impressive third by Net voters.
This suggests that, like Ambedkar, Patel has a strong appeal among the young, albeit among a different section, those driven by the desire to see a strong state rather than the wish to achieve social justice. Nehru, on the other hand, is a figure of indifference and derision in India today, his reputation damaged in good part by the misdeeds of his genealogical successors.

The most remarkable, not to say bizarre, discrepancy between the expert and the aam admi was revealed in the case of the former President of India, A. P. J. Abdul Kalam. Only two (out of twenty-eight) jury members voted for Kalam to be one of the short-list of ten. They placed him in joint thirty-first place. On the other hand, Kalam was ranked first by those surveyed by market research, and second in the online polls. What explains this massive variation in perception?

The jury was motivated perhaps by the facts—the hard, undeniable, if not so widely advertised facts—that Kalam has not made any original contributions to scientific or scholarly research. Homi Bhabha, M. S. Swaminathan, and Amartya Sen, who have, were thus ranked far higher than the former President. Nor has Kalam done important technological work—recognizing this, the jury ranked the Delhi Metro and Konkan Railway pioneer E. Sreedharan above him. In the popular imagination, Kalam has been credited both with overseeing our space programme and the nuclear tests of 1998. In truth, Vikram Sarabhai, Satish Dhawan, U. R. Rao and K. Kasturirangan did far more to advance India’s journey into space. Kalam was an excellent and industrious manager, a devoted organization man who was rewarded by being made the scientific adviser to the Government of India. It was in this capacity that he was captured in military uniform at Pokharan, despite not being a nuclear specialist of any kind.

A key reason for Abdul Kalam’s rise in public esteem is that he is perceived as a Muslim who stands by his motherland. In the 1990s, as there was a polarization of religious sentiment across India, Kalam was seen by many Hindus as the Other of the mafia don Dawood Ibrahim. Dawood was the Bad Muslim who took refuge in Pakistan and planned the bombing of his native Bombay; Kalam the Good Muslim who stood by India and swore to bomb Pakistan if circumstances so demanded. This was the context in which Kalam was picked up and elevated to the highest office of the land by the Bharatiya Janata Party. The BJP wanted, even if symbolically, to reach out to the minorities they had long mistrusted (and sometimes persecuted). In this rebranding exercise, the fisherman’s son from Rameshwaram proved willing and able.

A second reason that Kalam is so admired is that he is an upright and accessible public servant in an age characterised by arrogant and corrupt politicians. As President, Kalam stayed admirably non-partisan while reaching out to a wide cross-section of society. He made a particular point of interacting with the young, speaking in schools and colleges across the land, impressing upon the students the role technology could play in building a more prosperous and secure India.

A. P. J. Kalam is a decent man, a man of integrity. He is undeniably a good Indian, but not a great Indian, still less (as the popular vote would have us believe) the second greatest Indian since Gandhi. Notably, the Net voters who ranked Kalam second also ranked Kamaladevi Chattopadhyay fiftieth, or last. At the risk of sounding elitist, I have to say that in both cases the aam admi got it spectacularly wrong.
A nation’s pantheon is inevitably dominated by men and women in public affairs, those who fought for independence against colonial rule, and thereafter ran governments and crafted new laws that reshaped society. One of the appealing things about the exercise I was part of was that it did not choose only to honour politicians. The long-list of fifty had actors, singers, sportspersons, scientists, and social workers on it. Commendably, in their own selection of Ten Great Indians since Gandhi, expert as well as aam admi sought to have a variety of fields represented. Collating the votes, a final list of ten was arrived at, which, in alphabetical order, read: B. R. Ambedkar; Indira Gandhi; A. P. J. Abdul Kalam; Lata Mangeshkar; Jawaharlal Nehru; Vallabhbhai Patel; J. R. D. Tata; Sachin Tendulkar; Mother Teresa; A. B. Vajpayee.

Reacting both as citizen and historian, I have to say that six of these ten choices should be relatively uncontroversial. Ambedkar, Nehru and Patel are the three towering figures of our modern political history. J. R. D. Tata was that rare Indian capitalist who promoted technological innovation and generously funded initiatives in the arts. Although in sporting terms Viswanathan Anand is as great as Sachin Tendulkar, given the mass popularity of cricket the latter has had to carry a far heavier social burden. Likewise, although a case can be made for M. S. Subbulakshmi, Satyajit Ray or Pandit Ravi Shankar to represent the field of ‘culture’, given what the Hindi film means to us as a nation, Lata had to be given the nod ahead of them.

It is with the remaining four names that I must issue a dissenting note. Taken in the round, Kamaladevi Chattopadhyay’s achievements are of more lasting value than Indira Gandhi’s. If one wanted a non-Congress political figure apart from Ambedkar, then Jayaprakash Narayan or C. Rajagopalachari must be considered more original thinkers than A. B. Vajpayee. Mr Vajpayee’s long association with sectarian politics must also be a disqualification (likewise Indira Gandhi’s promulgation of the Emergency). As for Mother Teresa, she was a noble, saintly figure, but I would rather have chosen a social worker—such as Ela Bhatt—who enabled and emancipated Indians from disadvantaged backgrounds rather than simply dispensed charity. My caveats about Abdul Kalam have been entered already. In the intellectual/scientist category, strong arguments can be made in favour of the physicist Homi Bhabha and the agricultural scientist M. S. Swaminathan. Although I wouldn’t object to either name, there is also Amartya Sen, acknowledged by his peers as one of the world’s great economists and economic philosophers, and who despite his extended residence abroad has contributed creatively to public debates in his homeland.

To choose fifty and then ten Great Indians was an educative exercise. One was forced to consider the comparative value of different professions, and the claims and pressures of different generations and interest groups. However, I was less comfortable with the further call to choose a single Greatest Indian. For it is only in autocracies—such as Mao’s China, Stalin’s Russia, Kim Il-Sung’s North Korea and Bashar Assad’s Syria—that One Supreme Leader is said to embody the collective will of the nation and its people. This anointing of the Singular and Unique goes against the plural ethos of a democratic Republic. To be sure, one may accept that politics is more important than sports.
Sachin Tendulkar may be the Greatest Indian Cricketer, but he cannot ever be the Greatest Indian. But how does one judge Ambedkar’s work for the Dalits and his piloting of the Indian Constitution against Nehru’s promotion of multi-party democracy based on adult franchise and his determination not to make India a Hindu Pakistan? And would there have been an India at all if Patel had not made the princes and nawabs join the Union?

In his famous last speech to the Constituent Assembly, Ambedkar warned of the dangers of hero-worship in politics. In a lesser-known passage from that same speech he allowed that a nation must have its heroes. That is to say, one can appreciate and admire those who nurtured Indian democracy and nationhood without venerating them like Gods. In that spirit, one might choose a hundred great Indians, or fifty, or ten, or even, as I have ended by doing here, three. But not just One.

INDIANS GREAT, GREATER, GREATEST? (published in The Hindu, 21st July 2012)
In September 1918, the sports reporter for the Bloomington Evening World wondered how the expanded Selective Service age range (revised to include 18-21 year olds) would affect the local high school basketball team’s prospects. Only two of Bloomington High’s players were young enough to be exempt from draft registration. A month later, the World reported that the influenza epidemic had incapacitated six of the squad’s fourteen players. The intrusion of World War I and a worldwide influenza pandemic disrupted the lives of many Hoosiers. In particular, this article explores how war and the Spanish flu affected Indiana athletes and sports. The Great War and the Great Pandemic had calamitous short-term effects on Indiana athletics, but long-term benefits in developing the state’s athletes and sporting culture.

A month after Congress declared war in April 1917, it passed the Selective Service Act, re-instituting the military draft. The first draft registration began in June 1917 for men ages 21-31. A second draft registration occurred a year later in June 1918 for those who had turned 21 since the last draft, and by September 1918 Congress expanded the conscription ages to 18-45. Indiana as a state contributed 130,670 soldiers to the conflict, over 39,000 of them volunteers. Indiana University claimed that 35% of their alumni and current undergrads had enlisted. Purdue University and Rose Polytechnic in Terre Haute stated that over 12% of their alumni were in the service, whereas Butler College [changed to university in 1925] and Quaker-affiliated Earlham College counted around 2% of their graduates at war.

Enlistments of college men would ultimately erode the short-term quality of college athletics. A March 1918 article in Indiana University’s Indiana Daily Student reckoned that enlistments and the draft would reduce the number of quality players for the upcoming football season. At Wabash College, several athletes left school at the close of the 1917 football season and enlisted, including multi-sport star Francis Bacon. A Crawfordsville Journal reporter assessed that these athletes had attributes that would make them excellent soldiers. The reporter wrote, “Training, alertness, physical fitness and courage to tackle a hard task and stick to it along with the habit of ‘team work’ have all contributed to their advancement [in the military].”

Meanwhile in Lafayette, a Purdue sports reporter held out hope that Purdue’s athletes could avoid military service. He wrote, “If Uncle Sam can do without several of Purdue’s basketball stars until the present season is over, Purdue should be able to look forward to a very successful season.” Uncle Sam could not do without, and Purdue lost the athletic services of several basketball players, as well as basketball coach Ward Lambert, a future Naismith hall-of-famer, to the military.

College athletics experienced great uncertainty during the war, especially regarding the loss of student athletes to the military. South Bend News-Times reporter Charles W. Call calculated that 13 of the 15 Notre Dame basketball players from recent years were in the armed forces, a higher service percentage than any of Notre Dame’s four major sports. Among Call’s statistics was multi-sport athlete and basketball captain-elect Thomas King, who, in October 1917, awaited a summons to Camp Zachary Taylor, the mobilizing center for Indiana recruits near Louisville.
Similar to Notre Dame, IU lost three-sport letterman and 1917 team basketball captain Charles Severin Buschman to the Army when he graduated at the end of the spring semester, enlisted, and received a captain’s commission in September 1918. College athletes who became officers in the armed forces came as no surprise to DePauw University coach Edbert C. Buss, who had seen seven of his football eleven* enlist. He assessed the military value of athletics and said, “We feel that college athletics is as big a factor in developing our men as any other department in the university, and it is a well known fact that army officers are picking football and basketball men for some of the most important branches of service.”

Arguably the most famous Indiana college (or ex-college) athlete to be drafted into the Army was 6’4” basketball sensation Homer Stonebraker of Wabash College. College authorities stripped Stonebraker of his collegiate athletic eligibility his senior season in 1917 because he violated his amateur status. Although Stonebraker was no longer an active college athlete, his drafting by the Army carried such importance that the New York Tribune and the Boston Herald both carried news items on the matter.

An Indiana Daily Student reporter surveyed the college athletic landscape at IU in 1918, and wrote the following:

Athletics at Indiana, like all other activities, have been materially affected this year by the war. Not only has the status of the primary sports been changed but nearly every one of last year’s stars who were eligible to play this year are in the service, and the participants for this season must be culled largely from the ranks of the inexperienced.

Curiously, even while experienced college-age men were leaving academia for the military, college enrollment grew. At IU, student enrollment increased, even though the quality of their athletics decreased. The Daily Student in October 1918 reported the largest enrollment in the history of the school with 1,953 students; 1,100 of that number were freshmen, and 875 of the freshmen were men, or 600 more males than in the first-year class enrolling in 1917. More males enrolled to take advantage of the Student Army Training Corps (SATC) classes that were also available at Purdue, Notre Dame and other college campuses around the state. The 1918 freshman class at IU also saw a decrease in female enrollment: 695, down from 780 in 1917. The university authorities speculated that the decreased number of female enrollees was due to young women entering the workforce to take the place of men going to war.

The SATC proved a mixed blessing for the campuses that housed the corps. The War Department initially advised that intercollegiate football at institutions with SATCs be discontinued as a war measure. This policy would allow students to devote 14 hours a week to military drill and 42 hours a week to studying military tactics. Wabash College was without a SATC, and had no such time demands. The Crawfordsville college planned to proceed uninterrupted with their football schedule. The proposed change did not go over so well in football-crazed South Bend with first-year coach Knute Rockne. The War Department ultimately backed off their initial proposal and instead set limits on travel, mandating that only two away games could be played during the season that would require the team to be absent from campus for more than 48 hours. Another change the war prompted concerned freshman eligibility rules.
Freshmen were eligible to compete in varsity athletics at smaller schools like Wabash and DePauw. Larger schools like IU, Purdue, and even Notre Dame prohibited freshmen from playing on the varsity. While not concerned with varsity athletics specifically, the War Department encouraged mass athletics participation by every enrollee in the SATC so that “every man . . . may benefit by the physical development which . . . athletics afford.” The Daily Student reporter assessed this development:

Sports on a war basis will probably lose some of the excitement and glamour, but the benefits derived from them will be much greater than it has been in the past. Not a favored few, but the mass of the student body will profit by the advantages thus afforded.

Notre Dame coach Rockne opposed freshman eligibility. The South Bend News-Times explained Rockne’s position: “men . . . might be strong football players but not genuine college students.” Representatives of the Big Ten and other Midwestern college athletic associations met in Chicago and voted to allow freshmen to play in 1918. While Rockne may have opposed the measure in principle, in practice it was a good decision, since he had only two returning lettermen, including the famous George Gipp. Among the freshmen Rockne coached in 1918 was Earl “Curly” Lambeau from Green Bay, Wisconsin.

Notre Dame’s need for athletes was not unique. At IU, only six players, including three who had never played football before, turned out for the team’s first practice. IU football coach Ewald O. “Jumbo” Stiehm remarked, “I have never before faced a season with so few experienced men to rely upon.” The Daily Student explained, “The teams will have to be built up almost entirely from green material, strengthened by men who had training on the freshmen squads throughout the year.” In Crawfordsville, seven Wabash College freshmen won varsity letters at the conclusion of the 1917 football season. The Crawfordsville Journal commented on the benefit: “This is an unusually large number of first year men to receive such recognition and the situation is brought about by war time conditions which have depleted the ranks of the older athletes. However, it is encouraging as it means that the majority of these men will be on hand to form the nucleus of next year’s team.”

As if the effects of mobilizing for war were not enough to inhibit Indiana athletics, the state also had to deal with an influenza epidemic. Indiana health authorities reported the first cases of influenza in September 1918. While the flu pandemic in Indiana was less severe than in other parts of America, it still afflicted an estimated 350,000 Hoosiers and claimed 10,000 lives between September 1918 and February 1919. In October 1918, the South Bend News-Times reported on how the flu impacted college football:

Already staggering under the new military regulations, middle western football was dealt another blow tonight when a score of colleges and universities cancelled gridiron games scheduled for tomorrow because of the epidemic of Spanish influenza. Nearly 20 of the 30 odd games scheduled were called off. Reports received at Chicago indicated that some of the games had been called off because members of the teams were slightly indisposed, others because of probable attendance due to the influenza epidemic, and still others for the reason that it is feared crowds cause a spread of the disease.

Authorities cancelled the first three games on Notre Dame’s 1918 schedule on account of flu quarantines.
Health officials even forced Rockne to cancel a practice. IU football coaches cancelled the team’s season finale, scheduled for Thanksgiving Day in Indianapolis, on account of the influenza situation in the capital city.

The flu also affected high school sports. Bloomington High School expected to play their first basketball game of the season on October 18, but the city’s influenza quarantine forced the team to cancel games against Waldron, Orleans, Mitchell, Sullivan, Greencastle, and Indianapolis Technical. Coach Clifford Wells hoped that they could open their season on December 6 against 1918 runner-up Anderson. Hoping to stay sharp, the team played an exhibition game against an alumni team on November 17, but it was not much of an exhibition, since health officials mandated the gym doors be closed to the public. The team succeeded in playing their first inter-scholastic game 43 days after their season was set to begin when they defeated Greencastle in Greencastle on November 29. The Bloomington team did not expect to play a home game until after the New Year on account of the flu.

At South Bend, the high school cancelled the first game of the season against Elkhart on account of the flu. They scheduled a replacement game against Michigan City, who had not practiced much indoors on account of the flu. The next game on the schedule, against LaPorte, was cancelled for the same reason. A replacement game against Valparaiso saw South Bend at half strength, as one player was recovering from the flu and two others had fallen ill.

While the Great Pandemic in Indiana officially lasted from September 1918 to February 1919, another wave of severe respiratory problems afflicted Indiana the following winter as well. In South Bend, there were 1,800 reported cases of the flu in January 1920. Notre Dame basketball coach Gus Dorais was among the afflicted and lay in the hospital for weeks. In his absence, Knute Rockne took over coaching the basketball team. Mishawaka High School lost a star player for the season on account of an attack of pneumonia that nearly cost him his life. At Goshen High School, basketball captain Clement McMahon recovered from scarlet fever, only to die a short time later from double pneumonia.

The effects of war and disease should have been enough to end competitive inter-scholastic sports for at least one season. Instead, Hoosier athletes played on. The ordeals Indiana sportsmen experienced at home and abroad strengthened athletic teams, developed sporting culture, and contributed to the growth of professional sports in the 1920s. As one observer noted, “On every side there is convincing evidence that the war has and will prove a great stimulus to sport.”

The playing experience first-year college athletes gained while upperclassmen were away became a competitive advantage to teams in the war’s immediate aftermath. As a Notre Dame sports reporter observed, Rockne made “a team out of a lot of fatheads” whose year of seasoning “will bring back the [glory] days [of Notre Dame].” Major college athletic associations rescinded freshman eligibility after the war, but they allowed the athletes who had competed as freshmen to have a total of four years of athletic eligibility. The combination of game-tested underclassmen, returning war-tested veterans, and an infusion of good athletes from the SATC who remained in college after demobilization produced extremely strong post-war teams. The best example of this was at Purdue for the 1919-20 season.
Coach Lambert returned from his military service, which was enough of a boost in and of itself for the Boilermakers’ prospects. Several pre-war veterans returned to the court and joined four returning lettermen from the previous season. United Press reporter Heze Clark, who had followed college basketball for 25 years, forecast a strong season for Purdue that should “net them not only the Big Ten Championship, but also western collegiate high honors.” Purdue ended the season as runner-up in the Big Ten, but they tied for the lead the following season, won the Big Ten outright in 1922, and continued to field strong teams throughout the 1920s and 30s.

The war’s aftermath not only created stronger teams, it also gave an incredible boost to American sporting culture in terms of increased public interest and participation in sports. The fact that sports continued to be played during a war and in spite of a national health pandemic shows that sports meant something special to Americans, perhaps as an escape from worldly worries. In military camps, soldiers regularly engaged in boxing, baseball, basketball and football. In some cases, soldiers gained exposure to sports they had never played, which developed not only new athletes but also new sports enthusiasts. This was not unlike the growth baseball experienced after the Civil War, when soldiers learned the game in camps and brought it back to their communities after the war. One newspaper reporter assessed, “With thousands of Uncle Sam’s soldier boys equipped with baseball, boxing and football paraphernalia while in the service, thousands of young bloods coming [home] . . . will demand red-blooded recreations and pastimes on a larger scale than ever before and the country at large weary of death-dealing conflicts and grateful for the chance to relax, sports should thrive on a greater scale than ever.”

Reporters all around America drew the same conclusions. International News Service reporter Jack Veiock observed, “In spite of the war and the hardships it worked in college circles, the pigskin is being booted about by more elevens* today than in any season that has passed.” He observed that public interest had not only increased for the sport, but participation exploded in colleges and army camps. Men who had never even tried the sport drove the increased participation. A syndicated article printed in the News-Times agreed, “Boys who came away from desks to go into the fight have come back trained men who will want to continue in good red blooded competition. . . . The war has made an athletic team of about four million men.” South Bend News-Times reporter Charles W. Call added,

This world conflict has proved a number of things but none more emphatically than that intercollegiate athletics, often as they have been questioned in time of peace, have made sinewy and adroit the army of a nation hastening to the ordeal of battle.

Another positive effect of World War I on sports was the growth and emergence of professional athletics in Indiana, including football, but especially basketball. Professional football had a weak hold in Indiana in the early twentieth century. Pine Village was a notable professional team before the war. After the war, Hammond was an inaugural member of the American Professional Football Association/National Football League from 1920-26. On the other hand, professional basketball in Indiana boomed in the 1920s.
Todd Gould, in his book Pioneers of the Hardwood: Indiana and the Birth of Professional Basketball, gives only passing reference to the war and does not examine the impact that war mobilization, male social fraternization, athletic competition in military camps, and demobilization had on the birth of professional basketball. During the war, an all-star amateur squad of members of the 137th Field Artillery, which was made up of men from northern Indiana, fielded a basketball team in France to compete against other military units. Many such groups of athletic veterans would continue to play as league-independent teams, often with local business sponsorship, after the war.

Indiana’s basketball star Homer Stonebraker made the acquaintance of Clarence Alter while serving in France. In pre-war civilian life, Alter managed an independent basketball team in Fort Wayne that competed against other independent clubs in the state. Alter and Stonebraker discussed joining forces after they were discharged. Their relationship became the basis of the Fort Wayne Caseys, one of Indiana’s most successful early professional basketball teams. Alter recruited other veterans for the team, including Stonebraker’s old Wabash teammate Francis Bacon. Semi-professional teams cropped up all around the state in the 1920s, in cities such as Bluffton, Hartford City, Huntington, Indianapolis, and Richmond. The athletes on these teams were often former local high school stars, but more often than not they were also veterans.

The Great War and the Great Pandemic changed sports in Indiana. In the face of severe outside adversity, sports emerged from the war with greater popularity. In high school basketball, attendance at the state basketball tournament went from 2,500 before and during the war to 15,000 several years later. More racial diversity slowly appeared on high school teams because of the influx of African-American migrants from the South during the war (although segregated black high schools were barred from IHSAA competition until 1942, individual black athletes could be on teams at non-segregated schools). Some military veterans returned to college and gave a boost to college sports fandom, if not actually contributing on the field of play. The veterans who returned home probably had a greater appreciation, if not love, of sports from being exposed to them in camp life. This rise in post-war interest in sports strongly contributed to the “Golden Age of Sports” in the 1920s, and the adulation of sports heroes like Babe Ruth, Jack Dempsey, Red Grange, and Rockne.

*“Elevens” is a term commonly used at this time to refer to the eleven players on a football team. Similarly, baseball teams were often called “nines” and basketball teams “fives” or “quintets.”
This is a research article published as information for health care professionals and public officials, and for an open peer review. It is not medical advice.

I reviewed the scientific literature on hydroxychloroquine (HCQ), azithromycin (AZ), and their use for COVID-19. My conclusions:

- HCQ-based treatments are effective in treating COVID-19, unless started too late.
- Studies cited in opposition have been misinterpreted, invalid, or worse.
- HCQ and AZ are some of the most tested and safest prescription drugs.
- Severe COVID-19 frequently causes cardiac effects, including heart arrhythmia. QTc-prolonging drugs might amplify this tendency. Millions of people regularly take drugs having a strong QTc prolongation effect, and neither the FDA nor the CDC bothers to warn them. The HCQ+AZ combination probably has a mild QTc prolongation effect. Concerns over its negative effects, however minor, can be addressed by respecting contra-indications.
- Effectiveness of HCQ-based treatment for COVID-19 is hampered by conditions that are presented as precautions, delaying the onset of treatment. For example, some states require that COVID-19 patients be treated with HCQ exclusively in hospital settings.
- The COVID-19 Treatment Panel of NIH evaded disclosure of the massive financial links of its members to Gilead Sciences, the manufacturer of the competing drug remdesivir. Among those who failed to disclose such links are 2 out of 3 of its co-chairs.
- Despite all the attempts by certain authorities to prevent COVID-19 treatment with HCQ and HCQ+AZ, both components are approved by the FDA, and doctors can prescribe them for COVID-19.

Hydroxychloroquine (HCQ) was accepted as a COVID-19 treatment by the medical community in the US and worldwide by early April. 67% of US physicians said they would prescribe HCQ or chloroquine (CQ) for COVID-19 to a family member (Town Hall, 2020-04-08). An international poll of doctors rated HCQ the most effective coronavirus treatment (NY Post, 2020-04-02). On April 6, Peter Navarro told CNN that “Virtually Every COVID-19 Patient In New York Is Given Hydroxychloroquine.” This might explain the decrease in COVID-19 deaths in New York state after April 15. The time lag is because COVID-19 deaths happen on average 14 days after showing symptoms.

But on April 21, several perfectly coordinated events took place, attacking HCQ’s use for COVID-19 patients.

- The COVID-19 Treatment Guidelines Panel of the National Institutes of Health issued recommendations with a negative-ambivalent stance regarding the use of HCQ as a COVID-19 treatment. This surprising stance was taken contrary to the ample evidence of the efficacy and safety of HCQ and despite the absence of evidence of its harm. The panel also strongly recommended against the use of hydroxychloroquine with azithromycin (AZ), the combination of choice among practitioners.
- On the same day, a paper (Magagnoli, 2020) was posted on the pre-print server medRxiv, insinuating that HCQ is not only ineffective, but even harmful. This not-yet-peer-reviewed paper, by unqualified authors with conflicts of interest, received wall-to-wall media coverage, as if it were a cancer cure. It used data from Veterans Administration hospitals, spicing its effects. The paper has been shown to be somewhere between junk science and fraud.
- Rick Bright, a government official who was probably more responsible for the low level of preparedness for the epidemic than most others, and who had been re-assigned to a lower position earlier, emerged as a "whistleblower." He claimed he had been demoted for opposing hydroxychloroquine, a claim soon debunked by documents bearing his signature. The media also gave him wall-to-wall coverage.

On April 24, the FDA struck its own blow, issuing a stern warning against the use of HCQ for COVID-19 treatment. While these warnings are not binding on doctors, they do produce a chilling effect. Consequently, either patients do not receive necessary treatment, or they receive it with a delay, sharply decreasing its effect. This allows detractors to question HCQ efficacy even more aggressively. Below, I review problems in the NIH COVID-19 Treatment Guidelines and other sources used to wage anti-HCQ propaganda.

NIH Panel Guidelines

The relevant section of (COVID-19 Treatment Guidelines Panel, 2020) is Potential Antiviral Drugs. The antiviral treatment recommendations (more accurately, failure to provide recommendations) include:

"- There are insufficient clinical data to recommend either for or against the use of the investigational antiviral agent remdesivir for the treatment of COVID-19 (AIII). Clinical Data to Date: Only anecdotal data are available."

"AIII" means a strong position based on expert opinion rather than on evidence.

"Chloroquine or Hydroxychloroquine
- There are insufficient clinical data to recommend either for or against using chloroquine or hydroxychloroquine for the treatment of COVID-19 (AIII).
- When chloroquine or hydroxychloroquine is used, clinicians should monitor the patient for adverse effects (AEs), especially prolonged QTc interval (AIII).
Clinical Data in COVID-19
The clinical data available to date on the use of chloroquine and hydroxychloroquine to treat COVID-19 have been mostly from use in patients with mild, and in some cases, moderate disease; data on use of the drugs in patients with severe and critical COVID-19 are very limited. [There follows a description of some studies]"

Notice that CQ and HCQ are addressed together, although these are two different drugs, and HCQ is clearly superior to CQ in both efficacy and safety. Also notice that the basic recommendation of "insufficient clinical data to recommend either for or against" is given to both HCQ and remdesivir. However, the recommendation for HCQ goes further, stating that when using HCQ, "clinicians should monitor the patient for adverse effects (AEs), especially prolonged QTc interval." Practically, this means that HCQ should be used only in hospital settings. No such restrictions are set for remdesivir, for which there is no clinical data available. It goes against all logic.

The demand to use HCQ only in hospital settings means:
- HCQ treatment will be delayed until a patient decides to be admitted to a hospital, thus lowering HCQ's efficacy.
- Hospitals will quickly become overwhelmed with COVID-19 patients.

Then the Panel nixes HCQ+AZ:

"Hydroxychloroquine plus Azithromycin
- The COVID-19 Treatment Guidelines Panel recommends against the use of hydroxychloroquine plus azithromycin for the treatment of COVID-19, except in the context of a clinical trial (AIII)."

This drug combination is the most effective and widely used treatment for COVID-19, and the Panel recommends against it! The Panel criticizes some studies of patients' treatment with HCQ+AZ for the absence of a control group.
Stephen McIntyre tweeted about this argument long before the Panel used it: "there's a very large control group of COVID19 patients not receiving this drug combination: hospitals and morgues are full of them." There are only two studies quoted by the Panel against HCQ+AZ, (Molina, 2020) and (Chorin, 2020). Both are misinterpreted by the Panel.

Molina et al.

Despite (Molina, 2020)'s angry tone and aggressiveness, it reports no results contradicting the efficacy of HCQ or HCQ+AZ. The paper describes the treatment of 11 hospitalized COVID-19 patients, five of whom had cancer, one had AIDS, and almost all were in bad shape: "at the time of treatment initiation, 10 of the 11 patients had a fever and received nasal oxygen therapy." Using HCQ+AZ, 10 of the patients' lives were saved. The article's point of contention is that when they tested these patients, 5-6 days after treatment initiation, they still found CoV2 RNA in 8 out of 10. Virus RNA is a molecule. Some viral RNA remains in patients for weeks after full recovery, but it is neither harmful nor infectious. Detecting viral RNA depends on the sensitivity of the testing equipment. That the study's title is No evidence of rapid antiviral clearance or clinical benefit with the combination of hydroxychloroquine and azithromycin in patients with severe COVID-19 infection seems to be lost on the Panel.

Chorin et al.

The Panel also quotes (Chorin, 2020) as evidence that HCQ+AZ therapy causes QTc prolongation. QTc prolongation is not a health condition itself, but a warning sign that a person is at higher risk of torsades de pointes (TdP), heart arrhythmia, or tachycardia, which might lead to cardiac arrest and death (Simpson, 2020). Nevertheless, none of the patients treated with HCQ+AZ suffered TdP or arrhythmia. Four patients died, but none of them had an arrhythmia. Other studies, in which COVID-19 patients were treated with HCQ+AZ, reported taking patients off this medicine after QTc exceeded 500 ms. But the treatment may have already had its effect by that time or later, while HCQ remained in the bloodstream. This study has no control group. It provides no information on whether QTc prolongation was caused by the disease or the therapy.
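As background for these numbers: QTc is the QT interval corrected for heart rate. Not all of the studies discussed here state their correction method, but the most common one is Bazett's formula, sketched below; treat the choice of Bazett as an assumption rather than a statement about any particular study.

% Bazett's heart-rate correction for the QT interval (QT and QTc in ms, RR in s).
\[
  QT_c = \frac{QT}{\sqrt{RR}}, \qquad RR = \frac{60}{\text{heart rate in bpm}}
\]
% Worked example: QT = 400 ms at 90 bpm gives RR = 60/90 = 0.67 s,
% so QTc = 400 / sqrt(0.67), which is about 490 ms, just under the
% 500 ms withdrawal threshold mentioned above.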
(FDA WARNING, 2020), issued on April 24, piggybacks on the COVID-19 Panel Guidelines. It says:

"Hydroxychloroquine and chloroquine can cause abnormal heart rhythms such as QT interval prolongation and a dangerously rapid heart rate called ventricular tachycardia."

This statement is confused, and probably not true about hydroxychloroquine. See below.

"Be aware that there are no proven treatments for COVID-19 …"

I think that HCQ+AZ is a proven treatment for COVID-19. There is a difference between a proven treatment and an approved treatment. HCQ+AZ is not approved but proven, because many patients have been treated with this combination and have recovered.

"We have reviewed case reports … concerning serious heart-related adverse events and death in patients with COVID-19 receiving hydroxychloroquine and chloroquine, either alone or combined with azithromycin or other QT prolonging medicines. These adverse events were reported from the hospital and outpatient settings for treating or preventing COVID-19, and included QT interval prolongation, ventricular tachycardia and ventricular fibrillation, and in some cases death."

These are manifestations of COVID-19! See (Bansal, 2020) and (Wang, et al., 2020). The media hysteria played its role, too. The articles about the supposed dangers of HCQ, with detailed descriptions of the symptoms, triggered complaints even before the April 24 warning. And there are people who tried to self-medicate – in a situation when authorities make it difficult to obtain a prescription for HCQ – and took the wrong drug or overdosed. Also, QT interval prolongation is not an adverse event in itself, but an early warning.

"To help FDA track safety issues with medicines, we urge patients and health care professionals to report side effects involving hydroxychloroquine and chloroquine or other medicines to the FDA MedWatch program, using the information in the 'Contact FDA' box at the bottom of the page."

Such urging and advertisement guarantee that the FDA will receive mountains of complaints.

HCQ and AZ Safety

HCQ, CQ, and AZ

HCQ and CQ are two different drugs. HCQ is clearly superior to CQ. HCQ has already been selected over CQ. Discussing these two drugs as if they were co-equal in COVID-19 treatment is misleading and a sign of bad faith.

HCQ and AZ are among the most widely prescribed drugs and have been prescribed for decades. HCQ is as safe as a prescription drug can be. AZ is an antibiotic, and it is as safe as an antibiotic can be. Because these drugs have been prescribed so widely, their adverse effects have been studied. A few adverse events associated with them have been reported. Combining these few anecdotal cases, some medical researchers have raised some concern, as a precaution. Doctors understand this. Statisticians understand this. But unscrupulous media use this information to mislead the naïve public and even public figures.

Remdesivir is the opposite. It has been developed very recently and has been scarcely used. There is little information about its adverse effects. The corrupt news networks present this lack of evidence of adverse effects as evidence of the absence of adverse effects.

The leading objection against HCQ / HCQ+AZ is possible QTc prolongation. Most professionals refer to (CredibleMeds.org, 2020), which puts both HCQ and AZ in the category of Known Risk of TdP (KR). I think that HCQ was listed in that category by mistake. A review of the literature reveals only a few anecdotal cases. Some of them are poisonings by large overdoses of HCQ. Then there are patients who were on HCQ for years, suddenly got sick, and recovered when HCQ was withdrawn. While there are millions of people continuously taking HCQ, only a few cases of cardiac events have been reported. Even if HCQ was the cause of these rare cases, which is usually unknown, it is still statistically insignificant. It is much safer than driving. Other antivirals are known to cause QTc prolongation too but are not being pulled from practice. In the case of HCQ, it seems that a precautionary principle has prevailed over statistical reasoning and common sense.

AZ is in the KR category, just like many other antibiotics, including erythromycin. I have never heard of patients requiring QTc monitoring when taking erythromycin. Attention of the Trump Derangement Syndrome crowd: many widely used psychoactive drugs are also listed in the KR category. That includes the anti-psychotic haloperidol and the anti-depressants escitalopram (Cipralex, Lexapro) and citalopram (Celexa).
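In practice, the precaution described above amounts to cross-checking a medication list against the CredibleMeds KR category. Here is a trivial sketch of that check; it includes only the KR drugs named in this article, and the real list at crediblemeds.org is much longer.

# Flag drugs in the CredibleMeds "Known Risk of TdP" (KR) category.
# Only the KR-listed drugs mentioned in this article are included here;
# the authoritative list at crediblemeds.org is far longer.
KR_DRUGS = {"hydroxychloroquine", "chloroquine", "azithromycin",
            "erythromycin", "haloperidol", "escitalopram", "citalopram"}

def kr_conflicts(medications: list[str]) -> set[str]:
    """Return which of a patient's medications appear on the KR list."""
    return {m.lower() for m in medications if m.lower() in KR_DRUGS}

print(kr_conflicts(["Hydroxychloroquine", "Azithromycin", "Metformin"]))
# -> {'hydroxychloroquine', 'azithromycin'} (set order may vary):
#    two QTc prolongers taken at once.

American College of Cardiology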
The most reliable source of information about arrhythmia risks is the American College of Cardiology. (Simpson, 2020) in the Cardiology Magazine:

"Chloroquine, and its more contemporary derivative hydroxychloroquine, have remained in clinical use for more than a half-century as an effective therapy for treatment of some malarias, lupus, and rheumatoid arthritis. … Despite these suggestive findings, several hundred million courses of chloroquine have been used worldwide making it one of the most widely used drugs in history, without reports of arrhythmic death under World Health Organization surveillance."

HCQ is even milder than CQ.

"Azithromycin, a frequently used macrolide antibiotic, lacks strong pharmacodynamic evidence of iKr inhibition [associated with QT prolongation]. Epidemiologic studies have estimated an excess of 47 cardiovascular deaths which are presumed arrhythmic per 1 million completed courses, although recent studies suggest this may be overestimated."

In other words, after over 50 years of effective use, HCQ and AZ have proven their safety and efficacy. There is no reason for fear, except fear itself. But some people might be vulnerable, so the article explains how to calculate an individual risk score for QTc prolongers. Individuals with a higher risk score might need QTc monitoring. Also, the authors suggest avoiding other QTc-prolonging medications during HCQ+AZ treatment. The cardiologists who wrote this article did not dismiss the concern. They explained the science pertaining to it and suggested proper mitigation measures.

Other literature also suggests low risk from HCQ and AZ. (Prutkin, 2020):

"Limited data on hydroxychloroquine suggest it has a low risk of causing TdP, based on its use for rheumatoid arthritis, systemic lupus erythematosus, and antimalarial therapy. … For these medications [HCQ and AZ], their time window of use is short duration, which is another reason the risk of TdP may be lower."

HCQ and AZ have other known contra-indications, but they are out of scope here.
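As far as I can tell, the individual risk score referenced in the ACC article is the Tisdale score. The sketch below implements it under that assumption; the point values are quoted from my reading of Tisdale et al. (2013) and should be verified against the ACC article before any reliance on them.

# A minimal sketch of the Tisdale QTc-prolongation risk score; the
# point values are assumptions to verify against the original source.

RISK_POINTS = {
    "age_68_or_older": 1,
    "female_sex": 1,
    "loop_diuretic": 1,
    "serum_potassium_3_5_or_less": 2,   # mEq/L
    "admission_qtc_450ms_or_more": 2,
    "acute_mi": 2,
    "sepsis": 3,
    "heart_failure": 3,
}

def tisdale_score(findings: set[str], n_qtc_prolonging_drugs: int) -> tuple[int, str]:
    """Sum the risk points present and map the total to a risk tier."""
    score = sum(RISK_POINTS[f] for f in findings)
    if n_qtc_prolonging_drugs == 1:
        score += 3          # one QTc-prolonging drug
    elif n_qtc_prolonging_drugs >= 2:
        score += 6          # 3 for the first drug, 3 more for two or more
    tier = "low" if score <= 6 else "moderate" if score <= 10 else "high"
    return score, tier

# Example: a 70-year-old woman given HCQ+AZ (two QTc-prolonging drugs)
# and no other risk factors: 1 + 1 + 6 = 8 points.
print(tisdale_score({"age_68_or_older", "female_sex"}, 2))  # (8, 'moderate')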
COVID-19 caused Arrhythmia

Many studies show that COVID-19 causes heart arrhythmia. Cardiac arrest, not directly caused by respiratory damage, is one of the leading direct causes of COVID-19 deaths. (Bansal, 2020) is a review. It finds that:

"COVID-19 is primarily a respiratory illness but cardiovascular involvement can occur through several mechanisms. Acute cardiac injury is the most reported cardiovascular abnormality in COVID-19, with average incidence 8-12%. Both tachy- and brady-arrhythmias are known to occur in COVID-19. A study describing clinical profile and outcomes in 138 Chinese patients with COVID-19 reported 16.7% incidence of arrhythmia. The incidence was much higher (44.4%) in those requiring ICU admission …"

It also notes that the CoV2 virus might cause cardiac injury directly or indirectly. The possibility of a treatment impact is mentioned as a less likely one. (Wang, et al., 2020) finds that 44% of the patients transferred to the ICU developed arrhythmia. None of them received HCQ or CQ. Most of the patients received an unrelated anti-viral and an antibiotic. Only in 18% of the patients was the antibiotic AZ. At least some of the patients developed an arrhythmia before the treatment. Doctors have found that the infection can mimic a heart attack. They have taken patients to the cardiac catheterization lab to clear a suspected blockage, only to find the patient wasn't really experiencing a heart attack but had COVID-19.

Thus, the hypothesis that COVID-19 patients experience QTc prolongation and arrhythmia because of the disease, rather than due to HCQ+AZ treatment, is well founded. AZ may increase the odds of QTc prolongation in COVID-19 patients who would otherwise die from cardiac arrest or multiple organ failure.

The media and professional publications report a sharp increase in mortality from cardiac arrest at home in the last few weeks. Some of these cases are known to be COVID-19, but most of them are not tested. Could many of them be happening due to the cardiac damage caused by COVID-19? Can the cardiac impact of COVID-19 be aggravated by strong QTc prolongers that many people take regularly? There are countless variables confounding this statistic. There is an especially sharp increase in home cardiac arrests in New York, which is usually explained by people's reluctance to call an ambulance or go to the ER. (Kochi, 2020) provides an in-depth explanation of the cardiac effects of respiratory infections and their interaction with QTc-prolonging medications.

Positive Cardiac Effects of HCQ

Gone unmentioned are HCQ's positive cardiac effects. They were widely reported before HCQ had the misfortune of being mentioned by President Trump. For example:

"Taking Hydroxychloroquine for RA or Lupus Can Reduce Heart Risk by 17%. If you take the anti-malarial drug hydroxychloroquine (Plaquenil) as part of your treatment for lupus or rheumatoid arthritis (RA), you may be getting cardiovascular protection as an added bonus."

The article is based on (Jorge, 2019). These findings might be applicable only to long-term use of HCQ, not a 5-day course for COVID-19, but the same can be said about the alleged negative cardiac effects.

Articles/Studies criticizing HCQ

Listed here are several other papers, influential in the media, but not in the science. These papers span the range from erroneous to … non-existent.

Magagnoli et al.

(Magagnoli, 2020) is a non-peer-reviewed pre-print. It makes a retrospective statistical comparison of the outcomes of COVID-19 patients who received HCQ or HCQ+AZ treatment prior to April 11 in Veterans Affairs hospitals. In the abstract, it claims that a larger percentage of HCQ-treated patients died compared with untreated patients. This ignores the fact that HCQ or HCQ+AZ treatment was given only in the most desperate cases, frequently as compassionate care. Deep inside the manuscript, it does acknowledge that the initial conditions of the HCQ and HCQ+AZ groups were much worse than those of the untreated group, but then ignores this.

The original version (archived) of the "study" was published on April 21. It received crushing criticism in the comments and was replaced with another version on April 23, hiding those comments. Casting even further doubt on the credibility of this study, one of the authors disclosed Gilead funding for other research. This work was funded by an NIH grant. Despite its multiple flaws, lack of peer review, and the obscurity of the authors, this pre-print immediately received wall-to-wall media coverage. Given these circumstances, this work looks like a criminal fraud, rather than a scientific one.
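The core statistical objection here is confounding by indication: when a drug is given preferentially to the sickest patients, a naive comparison of death rates makes even a completely neutral drug look harmful. The toy simulation below is my own illustration of that point, not a re-analysis of the Magagnoli data; every probability in it is invented.

# Toy simulation of confounding by indication: a drug with zero effect
# looks harmful when it is given mostly to the sickest patients.
# All probabilities here are invented for illustration.
import random

random.seed(1)
N = 100_000

deaths = {True: 0, False: 0}   # treated -> death count
counts = {True: 0, False: 0}   # treated -> group size

for _ in range(N):
    severe = random.random() < 0.2                        # 20% of patients are severe
    treated = random.random() < (0.8 if severe else 0.1)  # the sickest get the drug
    p_death = 0.30 if severe else 0.02                    # mortality depends only on severity
    died = random.random() < p_death                      # the drug itself does nothing
    counts[treated] += 1
    deaths[treated] += died

for treated in (True, False):
    print(f"treated={treated}: mortality {deaths[treated] / counts[treated]:.1%}")
# Typical output: treated about 21%, untreated about 3.5%. The large
# apparent "harm" is created purely by who was selected for treatment.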
Tang et al.

(Tang, 2020) is a non-peer-reviewed pre-print. It reports results of a clinical trial in China in which HCQ was given to patients 16-17 days after the onset of the disease. This is too late for an anti-viral to work. Thus, this study describes the incorrect use of HCQ, rather than the efficacy or safety of the drug. From the comments:

"With an average delay of 16 days from symptom onset to enrollment and treatment in this trial, those patients are pretty much past the viral phase of the disease, where an antiviral treatment would have the most value, and are well on their way to pneumonia and a cytokine storm problem, which is ultimately what kills."

Once again, despite its obvious errors, the study was widely covered, including by the New York Times and the LA Times. Neither headline nor article addresses the obvious lateness of the drug's application.

Mahevas et al.

(Mahevas, 2020) is another non-peer-reviewed pre-print. Didier Raoult and his colleagues replied to it with a bluntness rare in scientific journals: Scientific fraud to demonstrate the lack of efficacy of hydroxychloroquine compared to placebo in a non-randomized retrospective cohort of patients with Covid: Response to MAHEVAS et al., MedRxiv, 2020 (Brouqui, et al., 2020). (Mahevas, 2020) also gathered many negative comments on medRxiv.

Oral Statements of Holtgrave & Cuomo

A study of 600 patients at 22 hospitals in New York is being conducted by the University at Albany School of Public Health under the management of dean David Holtgrave. Although the study was not finished, Mr. Holtgrave already announced that the results are negative: "We don't see a statistically significant difference between patients who took the drugs [HCQ, HCQ+AZ] and those who did not," according to CNN. New York Governor Andrew Cuomo referred to the results as neither positive nor negative, per CNN and ABC. No paper, or even pre-print, reporting these results had been published as of April 29 (searches on Google Scholar, PubMed, and medRxiv were conducted for "Holtgrave hydroxychloroquine" and "Holtgrave COVID-19").

New York and other "resistance" states make patients jump through hoops to obtain HCQ. As an anti-viral, it should be taken as soon as possible. Dr. Vladimir Zelenko explained that in his letter, which is worth reading in its entirety:

"It is essential to start treatment against Covid-19 immediately upon clinical suspicion of infection and not to wait for confirmatory testing. There is a very narrow window of opportunity to eliminate the virus before pulmonary complications begin. The waiting to treat is the essence of the problem."

He refers to patients in the high-risk category – older than 60, having certain health conditions, or shortness of breath. The resistance states have established onerous requirements that delay HCQ treatment for days. This sharply lowers the efficacy of the treatment, and possibly increases TdP risks. The mixed results promised by Mr. Holtgrave might be caused by this delay.

On March 28, Russia announced a COVID-19 treatment based on mefloquine. Mefloquine, invented in the US in the 1970s, is another anti-malaria drug, similar to HCQ. In the West, mefloquine was withdrawn from use after a controversy about its long-term effects. Russia might also use HCQ. From a Russian brochure (Nikiforov, 2020): "These drugs have a comprehensive negative effect on the coronavirus. It may take years of scientific experimentation to understand how and what exactly they affect. Now the fact of a positive effect has been established, and the drugs should and will be used." The mechanisms of HCQ and HCQ+AZ action are explained in (Hache & Raoult, 2020).

On March 27, WHO erected another roadblock to treating COVID-19 patients with HCQ.
WHO stated that HCQ was not only insufficiently tested (which was true at that time), but that it was being considered for COVID-19 at much higher doses than for malaria:

"In the context of the COVID-19 response, the dosage and treatment schedules for chloroquine and hydroxychloroquine that are currently under consideration do not reflect those used for treating patients with malaria. The ingestion of high doses of these medicines may be associated with adverse or seriously adverse health outcomes."

This is dangerous misinformation. HCQ dosage for COVID-19 is the same as or lower than for malaria (Drugs.com, 2019). WHO was aware of this, because it was already conducting clinical trials including HCQ and a number of other Big Pharma drugs. Yet, as of April 29, this paragraph still appears there. This act alone justifies not only defunding but ignoring WHO. Google and Facebook adhered to WHO on everything related to COVID-19. Together with Twitter, they purged information favorable to HCQ. This is outrageous behavior for telecommunications and computational services providers.

- It seems that the main contra-indication for HCQ treatment of COVID-19 is that no treatment is needed for healthy individuals below age 50.
- Persons in the President's circle were claiming that HCQ / HCQ+AZ are unproven treatments. That might have been true a month ago, but not now. These drugs are proven by practice and by the failure of their opponents to disprove their efficacy and relative safety.
- The Guidelines are accompanied by a financial disclosure of the panel members. Weirdly, this disclosure covers a period of 11 months: May 1, 2019 to March 31, 2020. The latest three weeks were excluded for some reason. Nevertheless, 9 out of 50 members of the panel disclosed financial ties to Gilead. Gilead's remdesivir is an inferior competitor to HCQ – more expensive, almost untested, and less efficient (as far as the little testing with it has shown). HCQ is a generic drug with a low profit margin. Gilead Sciences directly participates in WHO trials of remdesivir as a COVID-19 treatment.
- HCQ / HCQ+AZ are prescribed by a doctor. They are not OTC and should not be used for self-medication.
- HCQ+AZ is the most common treatment. HCQ acts on its own but is much more effective with zinc; AZ is an antibiotic. Dr. Zelenko's regimen is HCQ+AZ+Zinc.
- There is a live document by Michael J. A. Robb, M.D., tracking the effectiveness of HCQ-based treatments: https://drive.google.com/file/d/1w6p_HqRXCrW0_wYNK7m_zpQLbBVYcvVU/view

References

Bansal, M., 2020. Cardiovascular disease and COVID-19. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 25 March.
Brouqui, P., Million, M. & Raoult, D., 2020. Scientific fraud to demonstrate the lack of efficacy of hydroxychloroquine compared to placebo in a non-randomized retrospective cohort of patients with Covid: Response to MAHEVAS et al., MedRxiv, 2020. Mediterranee Infection, 24 April.
Chorin, E. et al., 2020. The QT Interval in Patients with SARS-CoV-2 Infection Treated with Hydroxychloroquine/Azithromycin. medRxiv, 3 April.
COVID-19 Treatment Guidelines Panel, 2020. COVID-19 Treatment Guidelines, s.l.: s.n.
CredibleMeds.org, 2020. Combined List of Drugs that Prolong QT and/or Cause Torsades de Pointes (TdP). [Online] Available at: https://crediblemeds.org/pdftemp/pdf/CombinedList.pdf
Drugs.com, 2019. Hydroxychloroquine Dosage. [Online] Available at: https://www.drugs.com/dosage/hydroxychloroquine.html
FDA WARNING, 2020. FDA cautions against use of hydroxychloroquine or chloroquine for COVID-19 outside of the hospital …. [Online] Available at: https://www.fda.gov/drugs/drug-safety-and-availability/fda-cautions-against-use-hydroxychloroquine-or-chloroquine-covid-19-outside-hospital-setting-or
Gautret, P., Raoult, D. et al., 2020. Clinical and microbiological effect of a combination of hydroxychloroquine and azithromycin in 80 COVID-19 patients with at least a six-day follow up: A pilot observational study. Travel Medicine and Infectious Disease, 4 April.
Hache, G., Raoult, D. et al., 2020. Combination of hydroxychloroquine plus azithromycin as potential treatment for COVID-19 patients: pharmacology, safety profile, drug interactions and management of toxicity. Mediterranee Infection, 22 April.
Hawryluk, M., 2020. Mysterious Heart Damage Hitting COVID-19 Patients. WebMD, 6 April.
Jorge, A. et al., 2019. Hydroxychloroquine Use and Cardiovascular Events Among Patients with Systemic Lupus Erythematosus and Rheumatoid Arthritis. American College of Rheumatology.
Kochi, A. et al., 2020. Cardiac and arrhythmic complications in patients with COVID-19. Journal of Cardiovascular Electrophysiology, 8 April.
Magagnoli, J. et al., 2020. Outcomes of hydroxychloroquine usage in United States veterans hospitalized with Covid-19. medRxiv, 23 April.
Mahevas, M. et al., 2020. No evidence of clinical efficacy of hydroxychloroquine in patients hospitalized for COVID-19 infection with oxygen requirement: results of a study using routinely collected data to emulate a target trial. medRxiv, 14 April.
Molina, J. M. et al., 2020. No evidence of rapid antiviral clearance or clinical benefit with the combination of hydroxychloroquine and azithromycin in patients with severe COVID-19 infection. Médecine et Maladies Infectieuses, 28 March.
Nikiforov, V. V., 2020. Modern Approaches to COVID-19 Therapy. [Online] Available at: http://fmbaros.ru/upload/medialibrary/53f/Nikiforov-_-Sovremennye-podkhody-etiotr.-i-patogeneticheskoy-terapii-_2_.pptx
Prutkin, J. M., 2020. Coronavirus disease 2019 (COVID-19): Arrhythmias and conduction system disease. UpToDate, 24 April.
Simpson, T. et al., 2020. Ventricular Arrhythmia Risk Due to Hydroxychloroquine-Azithromycin Treatment For COVID-19. [Online] Available at: https://www.acc.org/latest-in-cardiology/articles/2020/03/27/14/00/ventricular-arrhythmia-risk-due-to-hydroxychloroquine-azithromycin-treatment-for-covid-19
Tang, W. et al., 2020. Hydroxychloroquine in patients with COVID-19: an open-label, randomized, controlled trial. medRxiv, 14 April.
Wang, D., Hu, B. & Hu, C., 2020. Clinical Characteristics of 138 Hospitalized Patients With 2019 Novel Coronavirus–Infected Pneumonia in Wuhan, China. JAMA Network, 7 February.
Is there an instrument that comes with more cultural baggage than the banjo? For many, it evokes a stereotyped image of the rural white Southerner, as in the scary hillbillies of Deliverance and many a comedy sketch. In the 19th century, by contrast, the banjo served as a caricature of enslaved Africans, gaining wide popularity through blackface minstrel shows. The instrument's deeper story moves around and between the stereotypes. This is a timbre that cuts to some of the deepest seams of America's past. To a number of contemporary banjo players and composers, the well of history and associations surrounding the banjo becomes a musical parameter to be bent, subverted, or used to evoke a particular landscape or time.

The Birth of the Banjo

The banjo has its roots in West African instruments such as the ngoni, and possibly some Near Eastern stringed instruments which also feature a stretched membrane over a gourd resonator. African slaves on plantations in southern Maryland were documented playing gourd banjos as far back as the 17th century. Later on, white musicians learned the banjo from freed blacks and slaves and incorporated it into minstrel shows in the 19th century, resulting in the first uniquely American popular music. The popularity of the minstrel show, coinciding with the start of the Industrial Revolution, led to the mass production of banjos using wooden hoops and metal brackets—materials more easily sourced than the traditional gourds.

Minstrel Joel Walker Sweeney, the first white person known to play a banjo on stage, has been credited with adding a fifth string to the instrument. While many believe that Sweeney introduced the characteristic drone string, tuned above the other strings with its tuning peg jutting up from the neck, historical evidence appears to contradict this claim. Sweeney's more likely contribution is the addition of a lower string, as well as the shift from gourds to drum-like resonating chambers.

Beginning in 1848, 5-string banjos made by William Boucher in Baltimore were sold through mail order catalogs. Other companies soon followed, as the banjo was "refined" through ornate decorations and promoted as a parlor instrument for the upper class (accompanied by a de-Africanized repertoire and technique, referred to as "classical" style). Eventually these instruments made their way into the mountains and were quickly embraced by the predominantly English, Scottish, and Irish settlers. Minstrel songs, incorporating rhythms and melodic tropes from transplanted African music, took their place alongside the old English fiddle tunes, old ballads, and new ballads composed by Appalachian settlers to express the social and economic realities of their environment. This hybrid music came to be known as old-time. More directly transmitted influences from African-American music, particularly spirituals and the blues, continued to enter this repertoire into the 20th century.

The Folk Revival

The popularity of old-time music in its native environment had faded somewhat by the 1940s due to a population shift to factory jobs in cities, along with the widespread distribution of commercial music by radio. Yet even while old-time music was becoming an endangered tradition in its birthplace, it began to be rediscovered by folklorists outside of Appalachia.
These scholars, including the Seeger family (composers Charles and Ruth Crawford Seeger, their son Mike Seeger, and his half-brother Pete Seeger) along with John and Alan Lomax, sought out and recorded folk musicians, learning and transcribing their songs. Seeing the Appalachian ballad tradition as expressing the voices of the downtrodden, Alan Lomax and Pete Seeger adopted this music as a rallying cry for social justice. Lomax organized concerts that brought together many of the folk musicians he discovered through his travels while field recording, and sang the old ballads himself in union halls as well as at ethnomusicological conferences. New songs in the older styles were written by Seeger, Woody Guthrie, and others, and thus old-time music began to reach a wider audience. Pete Seeger's banjo became a symbol of the 1950s and '60s folk music revival, a new political awakening of the union movement, the civil rights struggle, and later of protest against the war in Vietnam.

A Path Through the Bluegrass

In the midst of this folk revival centered in New York City, an independent revival of the banjo occurred around Nashville, Tennessee. In the 1920s and '30s, the Grand Ole Opry established itself as a weekly live stage and radio show devoted to country music, an urban transplant of old-time traditions to serve the many people who had moved to Nashville from the hills. The radio broadcasts also reached those still living in the country, and served to inspire many younger people to play this music. In the mid-1940s, the musical acts featured on this show began to increase the tempo of old songs to match the energy of the urban environment, most notably mandolinist and singer Bill Monroe and his Blue Grass Boys. In late 1945 a young banjo player named Earl Scruggs stepped into Monroe's band and proceeded to redefine everyone's conception of what the banjo could do. Scruggs developed a three-finger technique of picking, which allowed for a more agile rhythm in the execution of melody than the older downstroke style known as clawhammer. The instrument grew in prominence on the stage from anachronistic musical prop to a lead voice in the new style that emerged as bluegrass.

In the early 1960s, the Scruggs technique of bluegrass playing reached a national audience through his recording of the theme for the TV show The Beverly Hillbillies. The fast, energetic finger picking established by Scruggs has become the banjo's dominant sound image for most people. Depending on the geography and cultural environment in which this sound is received, the bluegrass banjo is often associated with a particular vision of America—either associated positively with the rural landscape, pride, and connection to cultural roots, or negatively with social conservatism or ethnic exclusivity. It is a strong sonic flavor, whichever mix of associations it has for the listener.

Bluegrass technique, defined by crisp rolls (arpeggiation and melodic embellishment across multiple strings) using metal finger picks, became the foundation for many innovative banjo players. In the 1970s, Tony Trischka developed the "melodic style" of bluegrass banjo playing. This style shifts focus away from arpeggiation to full attention on the lead melody, with chromatic embellishments. As a teacher, Trischka has been widely influential, releasing many instruction books and videos, as well as having some prominent players study under him. One of Trischka's students was a young Béla Fleck.
Toward the end of the '70s, Fleck adapted the bluegrass technique to harmonic and contrapuntal models from jazz and classical music, leading to a style that has become known as progressive bluegrass or newgrass. Fleck is highly regarded as a master of banjo technique on the level of a classical musician, which he has applied to transcriptions of Bach partitas as well as his own compositions, exhibiting a wide stylistic palette. His collaborative exploration of the African origins of the banjo, traveling to West Africa to perform and record with master musicians there, may be experienced in the 2008 documentary Throw Down Your Heart.

Connections to the musical traditions of Africa may be traced more easily from the pre-bluegrass clawhammer style, which is the dominant tradition of old-time banjo playing. Maintaining a strong rhythmic groove through downstrokes with the back of a fingernail, interspersed with syncopated drone notes on the shorter fifth string (sounded by the thumb in between downstrokes), creates a strong rhythmic foundation for dance tunes traditionally played by the fiddle. Similar playing techniques with plucked string instruments may be found among griots of the Wasulu people. This connection may be plausibly traced through the little-known history of black string bands in the late 19th and early 20th century. Few if any recordings exist, but we have photographs, letters, and sheet music collections from black banjo players and fiddlers. One example is the Snowden Family Band of Knox County, Ohio—the group that may have taught the song "Dixie" to their white neighbor Dan Emmett, a minstrel singer. The meaning of the song's lyrics changes dramatically when viewed through the lens of this possible history, connected to Ellen Snowden's childhood experience as a slave in Nanjemoy, Maryland. At a young age she was transplanted with one of the slave master's relatives to Ohio, while her father remained behind. The black string band legacy has been reclaimed in the past decade through events such as the Black Banjo Gathering in Boone, North Carolina. This conference gave rise to the most famous group of black musicians playing old-time music, the Carolina Chocolate Drops.

Modern Perspectives on Old-Time Music

After the initial folk revival of the 1950s and '60s, old-time banjo went underground. Mike Seeger played an important role in keeping the fire burning by finding and promoting master musicians from the hills, revitalizing forgotten performance traditions such as gourd banjo and minstrel banjo through his own recordings, and passing on the craft to younger musicians. The record label Folkways, founded by Moses Asch in the late 1940s and acquired by the Smithsonian Institution in 1987, has released many recordings of outstanding artists in this musical lineage who had been discovered and recorded by the folklorists. Meanwhile, the mantle of old-time music has been taken on by a small but strong community that resembles in many ways the dedication and DIY ethos of the new music community.

As a composer and a self-taught banjo player, I have been drawn to the old-time music tradition for a number of reasons. I appreciate the wide expressive palette and range of tempo between dance tunes and murder ballads. I enjoy the ways that a tune can take on a very different sound and feel in the hands of different players, and appreciate that the tradition encourages this kind of personalization.
I am also attracted to the variety of tunings used in old-time banjo playing beyond the standard G tuning (gDGBD, the small letter indicating the higher-pitched fifth string) that bluegrass players tend to stick to. Particular songs have given rise to tunings named after them, such as "Cumberland Gap" (gEADE), "Willie Moore" (gDGAD), and "Last Chance" (fDFCD). My own playing and composing for banjo has gravitated toward the relatively more common "Sawmill" or "Mountain Minor" tuning (gDGCD) and the "Double C" tuning (gCGCD, often transposed up a whole step to "Double D" for playing along with a fiddle tune). These tunings in old-time banjo serve to reinforce open-string drones and maximize the sympathetic vibrations within the instrument. Sometimes these drones result in interesting dissonances that are exploited for expressive effect and do not conform to traditional tonal harmony. I enjoy lowering the fifth string to an F# to produce a tritone relationship with the fourth string (bass), following the practice of the old master Dock Boggs. Old-time banjo players sometimes refer to these different tunings as "atmospheres."
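For the curious, the relationships among these tunings are easy to tabulate. The sketch below is my own illustration, not something from the tradition: it lists each string's pitch-class interval above the fourth (bass) string, which makes the reinforced drones, and the Dock Boggs tritone, visible at a glance.

# Pitch-class intervals of banjo strings above the bass (4th) string.
# Strings are listed as in tuning names like gDGCD: the short fifth
# (drone) string first, then strings 4 through 1.
NOTE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
        "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def intervals_above_bass(strings: list[str]) -> list[int]:
    """Semitone distance (mod 12) of each string above the 4th string."""
    root = NOTE[strings[1].upper()]
    return [(NOTE[s.upper()] - root) % 12 for s in strings]

print(intervals_above_bass(["g", "D", "G", "C", "D"]))   # Sawmill:  [5, 0, 5, 10, 0]
print(intervals_above_bass(["g", "C", "G", "C", "D"]))   # Double C: [7, 0, 7, 0, 2]
# Dropping the drone to F# over a C bass yields 6 semitones, i.e. the
# tritone mentioned above.
print(intervals_above_bass(["F#", "C", "G", "C", "D"]))  # [6, 0, 7, 0, 2]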
On a more fundamental level, I am drawn to the banjo as a means of grounding creative experimentation within a deep history that is relevant to the connections I am trying to make in my music. The legacy of slavery in the United States is one that is pushed fairly far back in our collective consciousness. The trauma of that institution still reverberates today in our economic structure, systems of social control, and self-segregation within our population. The banjo came into its own as an American instrument in the midst of that experience of slavery. It was brought into the white mainstream consciousness through the blackface minstrel show, a format which also continues to reverberate in mainstream American entertainment. In the process of this African instrument being adopted by popular society in America, it also took on the musical heritage of the English, Scottish, and Irish immigrants. It was embraced as an instrument of the Everyman, especially in the hollers and mining towns of Appalachia, where the banjo became a main outlet for expressing life's troubles as well as a way of laying them aside through homespun entertainment. For the banjo to carry so many stories within it, charged with painful legacies and conflicting identities, makes it a potentially powerful medium for new music that creatively bends the associations with it.

This understanding of the banjo as an encapsulation of social history is one that makes sense to me when I think about my neighborhood of Hampden, Baltimore. The great bluegrass/country singer Hazel Dickens lived on one of these streets when she first moved to Baltimore from West Virginia, in search of factory work in the 1950s. While living here she met Mike Seeger at a rowhouse basement jam session and was encouraged to become a songwriter. She remained in Baltimore and Washington, DC, for most of her life, and yet her songs express a constant sense of longing for the landscape of her childhood home. This tension between country identity and the urban environment is still palpable in the neighborhood today. When I play banjo out on my front stoop I often imagine Hazel's experience, almost as an immigrant from another country, trying to navigate a new social structure in the crowded city. Hampden was built around textile mills that hired exclusively white workers from the Appalachian/Piedmont region during the 19th century.

For many years this community has attempted to maintain an insular sense of itself, built upon its cultural background, as distinct from the city of Baltimore, which annexed it in the late 19th century. After the mills and then the factories pulled out, Hampden went into decline for a few decades. Some of the social tension that followed was translated into racism and suspicion of outsiders. Ku Klux Klan representation in community parades was noted as late as the 1970s. Today, underneath the economic regeneration of the neighborhood's main street thanks to gourmet restaurants and boutique shopping, there remains a sense of racial tension in relation to the rest of the (predominantly black) city. One of my goals while living here is to start a pirate radio station and live show that will bring together old-time music and hip hop, among other hybridized folk music that mixes identities. It is my hope that through this medium I can make music that dissolves prejudice.

Hill Hop Fusion

The fusion of old-time music with hip hop is a concept that I first encountered through a radio program from the Appalshop organization in Whitesburg, Kentucky, called "From the Holler to the Hood." This program arose from a perceived need to reach out to the population housed in the numerous prisons that have sprung up in the wake of the declining coal economy in Eastern Kentucky. The prisoners are predominantly African Americans transferred from outside of the region. Appalshop began programming a show called "Calls from Home," during which family members could call in and dedicate songs to loved ones in prison. As the requested songs were mostly hip hop, programmers at Appalshop became interested in the idea of setting up collaborations between hip hop artists and traditional Appalachian musicians. In 2003, a friend of mine from Kentucky played me a tape of one of these collaborations, between old-time musician Dirk Powell and hip hop producer Danjamouf. Since then, the hip hop subgenre known as "hill hop" has been carried forward by the group Gangstagrass, among a few others.

Sometimes the use of the banjo is as simple as the desire to evoke a landscape. Since the 1990s the banjo has made occasional appearances in indie rock as a signifier of a different age, or to cast a rustic or countrified hue over a song. "Chocolate Jesus" (1999) by Tom Waits is a prime example, where the banjo is incorporated as an element of a sound that Waits described as "sur-rural." Other examples may be found in the work of The Magnetic Fields, Feist, and The Books. In these instances, the raw sound of the banjo stands as an alternative to the technology and pacing of the modern urban environment and invokes a common folk language.

Because of the banjo's sonic links to ancient instruments from Africa and even further east, the banjo can take on the role of a shape-shifter in its cultural associations. Multi-instrumentalist Jody Stecher brought the banjo into the field of "world music" in 1982 with his album Rasa, which features Indian sitarist Krishna Bhatt, along with vocals by Stecher's wife Kate Brislin. Through this album, Stecher, Brislin, and Bhatt reveal a natural affinity between old-time/early country tunes and the melodic ornamentation of Indian classical music. Béla Fleck made his own contribution to cross-cultural banjo fusion with his 1996 album Tabula Rasa, a collaboration with Chinese erhu player Jie-Bing Chen and Indian mohan veena player Vishwa Mohan Bhatt.
On this album, musical sources from each of the cultures have a turn at center stage while the other instruments provide tightly composed reinforcement and counterpoint. Through the tight interaction of these three players, we can hear a hybrid of complementary sounds, transcending the specific associations of any single culture. The erhu, as a bowed string instrument, may remind us of the fiddle that is so often paired with banjo in traditional Appalachian music. The mohan veena is a stand-in for the guitar, another frequent banjo partner. Fleck's banjo playing defines a well-balanced meeting point and assimilation of different influences.

Played with a bow, the nasal tone and sympathetic vibrations can sound a bit like a sarangi from India or the Iranian rabab. Played with a pick to produce single-string rhythms and tremolos, it can sound like a Berber gimbri. In Morocco, the banjo has effortlessly found its place in the traditional music of that country. A fine example of this cross-cultural assimilation of the banjo may be heard in the music of the Moroccan group Imanaren, with banjoist Hassan Wargui. In the context of Imanaren's music, the banjo doesn't appear to reference its American legacy at all. Instead it seems to be a native timbre for their Berber melodies.

In experimental and modern classical music, the banjo's historical weight is treated with a variety of approaches. Eugene Chadbourne has used the banjo in a way that naturally and seamlessly spans country music, punk rock, and free jazz, with a somewhat antagonistic stance toward the white rural culture commonly associated with the instrument. Equally at home within the structure of blues-based chord changes and uptempo drum beats as within irregular rhythms and spasmodic gestures, Chadbourne's performances convey an intentionally skewed but well-defined aesthetic that he has pieced together for himself.

On another side of the spectrum, the music of Paul Elwood moves between old-time/bluegrass sources and modernistic chamber ensemble sonorities. These two worlds are not always reconciled with each other, occasionally treated as juxtaposed blocks of music (original passages vs. quotation/arrangement), and sometimes heard as superimposed, warring influences over the direction of a long-form composition. When the banjo moves beyond familiar bluegrass riffs and explores a greater sense of rhythmic space and pitch direction, Elwood's music reaches passages of incredible transcendence. As a listener, I feel that I have been on a journey of clashing cultures and eventually discover a unified sonic field that moves beyond the past.

On occasion the banjo seems to be treated as a stand-in for a mandolin, which has a longer history in the context of classical concert music. In this approach, the instrument is treated purely as an interesting timbre without any overt inference of folk music or traditional playing techniques. George Crumb's 1969 song cycle, Night of the Four Moons, is one example of this ahistorical use of the banjo. In this work, it is one distinctive tone color among many in a mixed ensemble, supporting poetic images from the selected texts by Federico García Lorca. Through this set of four songs, the banjo explores a variety of textural relationships with the alto voice, alto flute, electric cello, and percussion.
Avoiding the rhythmic propulsion of traditional banjo playing, Crumb creates a new identity for the instrument through isolated gestures and textures based on call-and-response between the banjo and the other instruments in the ensemble. At times the banjo is made to sound vaguely Eastern, through a particular set of intervals used as a mode. Elsewhere, it fulfills an accompaniment role that suggests an older idiom of Western classical music, but nothing tied to the history of the banjo itself.

The kinship with sonorities from the Middle East and beyond may be easily recognized in the playing of Paul Metzger. This Minnesota-based artist focuses on improvisation and composition with a self-modified banjo which has been expanded to include 23 strings. His playing techniques span classical guitar finger style to orchestral bowed textures, touching on many different sound worlds. Within a single piece there seem to be hints of a number of different cultural heritages, woven together to produce a unified landscape. To hear the full range of Metzger's banjo palette, take a listen to his 2013 album Tombeaux on the label Nero's Neptune.

Another improviser, Woody Sullender, is a multi-media artist, electronic composer, and banjo player based in Brooklyn, New York. While his most recent work at the time of writing focuses more on installations and electronics, he is one of the most adept improvisers in the somewhat specialized field of experimental banjo. His approach is particularly aware of the instrument's past associations and seeks to both evoke and counter them. Mountain music is suggested in some of the hammer-ons and other musical gestures, which gravitate to open fifths and minor modes. Yet rhythmically and dynamically, listeners are being guided in another direction. His album with harmonica player Seamus Carter, When We Get to Meeting, is available as a free download.

Baltimore-based musician Nathan Bell states that he uses the banjo "as a shapeshifting tool," describing a fluidity between stylistic associations along with a range of timbres that he draws from the instrument. Bell shifts easily between different styles of playing: old-time clawhammer technique, finger picks, and bowed banjo all occupy a place in his personal soundscape. Auxiliary percussion, such as antique cymbals suspended from the neck of his banjo, is also a frequent companion to the sounds drawn from his main instrument. His 2011 album COLORS is an excellent example of Bell's use of the banjo as a vehicle for defining a landscape that draws on memory and nostalgia connected with the instrument, while coloring our experience of it with effects processing, noise elements, and slowly moving background voices. Bell's recorded projects may be heard and purchased from his Bandcamp page.

Renegade banjoist Brandon Seabrook of Brooklyn, New York, also comes to the instrument from a guitar background. He claims not to listen to other banjo players and explains his choice of instrument as a way to bring another level of challenge and difficulty into his music, due to the banjo's shorter sustain relative to guitar tones. Above all, his playing is defined by dissonance, intensity, and speed. Repetitive chromatic patterns cut quickly to measured tremolos and dynamic builds, always maintaining a sense of urgency.
Seabrook brings an aggressive, punk-meets-free-jazz type of energy to his playing, like a prolongation of the most intense passages in Eugene Chadbourne's music, sounding nothing like the bluegrass type of banjo virtuosity.

In the realm of notated music, Washington DC-based banjoist and composer Mark Sylvester is deeply committed to promoting the banjo in the concert hall. Sylvester comes to the banjo from a classical guitar background, and while he teaches and is proficient in bluegrass and clawhammer styles of banjo, his own compositions place the instrument squarely in a classical chamber music context. Sylvester's Trio #1 and Trio #2 occasionally employ finger-picking patterns familiar to bluegrass audiences, such as ostinati featuring hammer-ons and pull-offs, but largely gravitate toward a style of writing that could easily be conceived for guitar. Progressions of chromatic harmony predominate over the more familiar banjo harmonies derived from the open strings.

Continuing the development of notated compositions for banjo as chamber music, a new album by the Boulder, CO-based Jake Schepps Quintet, Entwined, features long-form classical compositions for the traditional bluegrass string band instrumentation of banjo, mandolin, violin, guitar, and double bass. The featured composers—Marc Mellits, Matt McBane, Mark Flinner (the group's mandolinist), and Gyan Riley—explore tight ostinato grooves, expansive melodies, and extended techniques, applied within a comfortable blend of styles. Multi-movement works such as Marc Mellits's Flatiron provide room to range from ballad-like sections featuring a nostalgic harmonic vocabulary to more contemporary-sounding minimalist syncopated rhythmic layers. While enriching the soil of bluegrass/classical fusion, first tilled by Béla Fleck as well as Mark O'Connor and Edgar Meyer, the Jake Schepps Quintet articulates a wider sound palette without anything sounding self-conscious in its merging of musical cultures. The sound of these instruments together is already well-defined in most listeners' ears, so that modern classical approaches to form can take advantage of expectations of particular roles within the ensemble while exposing alternate timbres from the instruments. This instrumentation may yet become as enduring for composers as the classical string quartet.

The banjo is suggestive of many different things to different people. It is clear that it has had a lasting power beyond just one cultural place and time, and that musicians continue to develop new ways of conceiving its sound. Whether it is overtly addressed or not, classically trained composers creating new music with the banjo enter into dialogue with a folk tradition, a history, and a set of expectations on the part of the listener. To use the instrument in a vastly different way from these expectations is a potential tool for shaking up old ideas about its stylistic limitations or caricatured image. To embrace certain musical aspects of the folk lineage and place them in new contexts may be seen as part of a general shift away from an exclusive view of the classical tradition as the purveyor of innovation. Today musical experimentation, complexity, and the development of a personal style can be founded on many sounds that are not connected to the concert hall tradition. While the adoption of instruments from other cultural contexts into classical music has been occurring for centuries, this has only recently taken on some characteristics of a two-way communication between musical cultures.
Experimental hybrids are continually being created by musicians coming from folk, rock, hip hop, and many other backgrounds. Composers and new music performers are collaborating with musicians from these other backgrounds, often participating in non-classical performance traditions, and collectively shaping new ways of listening to and participating in the music. Examples may be heard in collaborations between Brian Harnetty and Bonnie "Prince" Billy (Silent City, 2009), or Nico Muhly and Sam Amidon (The Only Tune, 2008).

Where classical instruments and musical structures have been founded on an aristocratic legacy, supported by royal courts or the church, the banjo's historical evolution has grown out of struggle and conflicting cultures. It can be painful to look back on the history of slavery or the ongoing situations of injustice faced by the people of Appalachia. The banjo may be a reminder of these things, and personal reactions to such a reminder may also bring up prejudices towards one group of people or another. Yet the hybrid cultural heritage of the banjo, kept alive by traditional players and continually reinterpreted by musicians from many different backgrounds, may be uniquely equipped to break through the divisions that separate people. It is an instrument that was originally embedded in the lives of enslaved Africans as well as, later on, the rural white settlers, and it has assimilated musical elements from both cultures. The tangled thread of minstrelsy that endures in popular media to this day is one that needs to be examined and understood in all of its complexity. Artists and musicians should attempt to examine that shadow and address it in a conscious way in contemporary art. The banjo stands squarely at the intersection of Anglo and African cultures at a formative period in American history, spanning different conceptions of heritage. Perhaps it can also be a tool to help unravel the pain and prejudice and uplift us to a better way of coexisting and collaborating in this world.
In the introduction to this series on shelf life, all the types of microbiological spoilage that can occur in bakery products were discussed. Moulds and bacteria are the biggest concern for the bakery industry, with yeasts a lesser one. Rhizopus, Penicillium and Aspergillus species are the most prevalent moulds and can produce mycotoxins such as aflatoxins. The bacteria that create the most problems in bakery products are Bacillus species, which can cause ‘ropy’ bread; Staphylococcus aureus and Salmonella outbreaks are also worth mentioning. Parts 2 and 3 focus on the internal and external factors that influence shelf life, in particular on preventing or delaying the germination of microorganisms and other processes that cause quality losses in bakery products. These factors act as hurdles that microorganisms, and the compounds that affect shelf life, must overcome. In combination, preservation factors (hurdles) reinforce each other, so that individual techniques can be applied more mildly thanks to synergistic effects. Hurdle technology is explained in more detail in Part 4, the final part of this series on shelf life.

This article focuses on the internal factors that influence microorganisms and thus the shelf life of bakery products: pH, redox potential, raw materials, product formulation, product make-up and structure, and water activity (aw).

Water is often the major constituent of foods. Even relatively ‘dry’ foods like bread usually contain more than 35% water. The state of water in a food is most usefully described in terms of water activity. Next to temperature (an external factor), aw is considered one of the most important parameters in food preservation and processing. Figure 1 shows the relationship between mould-free shelf life (MFSL) and water activity at different temperatures14, while Figure 2 shows deterioration reactions at different water activity levels.

Figure 1: Relation between MFSL in days, water activity and temperature
Figure 2: Food stability as a function of water activity

The concept of water activity is more than 60 years old, dating from William James Scott’s demonstration that Staphylococcus aureus has a limiting aw level for growth1. The water activity (aw) of a food is the ratio between the vapour pressure of the food, when in a completely undisturbed balance with the surrounding air, and the vapour pressure of pure water under identical conditions2. In practice, water activity is measured as the Equilibrium Relative Humidity (ERH), the humidity of the surrounding air. During shelf life the ERH will change, so a measurement reflects the equilibrium at the moment of measuring; for simplicity we will keep speaking of water activity. The relation between ERH and water activity is given by equation 1:

aw = ERH/100
ERH = aw * 100%

If it is not possible to determine the relative humidity of the surrounding air, the water activity can be predicted in theory by several formulas. The most conventional for bakery products is the Grover equation. This model has been applied successfully to bakery products such as bread3; however, it does not work properly for products with a low water activity, where it produces large deviations. The Grover equation is based on the sucrose equivalent (SE). By setting the value for sucrose to 1, other ingredients can be compared in terms of ‘the effect that the ingredient has on water activity compared to the effect an equal quantity of sucrose would have’14. An overview of the sucrose equivalents of common bakery ingredients is given in Table 1. The other parameter that needs to be obtained is mi, the moisture content in grams of water per gram of ingredient. Grover’s equation is written below.

Table 1: Sucrose equivalents of some common bakery ingredients
|Ingredient||SE|
|Water, fat, and whole egg||0.0|
|Glucose syrup 42 DE**||0.7|
|Starch (DE < 50)||0.8|
|Glucose syrup 64 DE||0.9|
|Glucose, fructose (invert sugar)||1.3|
|Salt (sodium chloride)*||11.0|
*depends on salt concentration
**DE = dextrose equivalent

The Grover equation:
aw = 1.04 - 0.1 Es + 0.0045 (Es)²
Es = ∑ (SEi/mi)
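To make the calculation concrete, the following is a minimal Python sketch of the Grover estimate, using the SE values from Table 1. The recipe quantities, and the reading of Es as ∑(SEi × gramsi)/grams of water (equivalent to the article’s ∑(SEi/mi) when mi is grams of water per gram of ingredient), are illustrative assumptions rather than values from the article.

```python
# A minimal sketch (not from the article) of the Grover estimate above.
# SE values follow Table 1; the recipe amounts are purely illustrative.

SUCROSE_EQUIVALENTS = {
    "glucose syrup 42 DE": 0.7,
    "starch (DE < 50)": 0.8,
    "glucose syrup 64 DE": 0.9,
    "sucrose": 1.0,   # the reference ingredient, SE = 1 by definition
    "invert sugar": 1.3,
    "salt": 11.0,
}

def grover_aw(solutes_g, water_g):
    """Estimate water activity with the Grover equation.

    Es is computed here as sum(SE_i * grams_i) / grams_of_water, which is
    equivalent to the article's sum(SE_i / m_i) when m_i is expressed as
    grams of water per gram of ingredient i. The bare polynomial exceeds
    1 at very low Es, so the result is capped at 1.0.
    """
    es = sum(SUCROSE_EQUIVALENTS[name] * grams
             for name, grams in solutes_g.items()) / water_g
    return min(1.0, 1.04 - 0.1 * es + 0.0045 * es ** 2)

# Sanity check: 1 g sucrose per 1 g water gives Es = 1 and aw = 0.9445,
# close to the commonly quoted ~0.94 for a 50% sucrose solution.
print(round(grover_aw({"sucrose": 1.0}, water_g=1.0), 4))

# Salt's SE of 11 means a small addition lowers aw far more than the
# same weight of sucrose would:
print(round(grover_aw({"sucrose": 60.0}, water_g=100.0), 4))
print(round(grover_aw({"sucrose": 60.0, "salt": 2.0}, water_g=100.0), 4))
```

The cap at 1.0 is consistent with the article’s warning that the model deviates at the extremes of the water activity range.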
There are many humectants that can drastically decrease the water activity and thus extend the shelf life of bakery products. The most popular are sugars and sugar-derived products such as glycerol, dextrose and glucose syrups. Probably the most important and most common ingredient for decreasing water activity is salt (sodium chloride). Reformulation with the help of these humectants is discussed in paragraph 2.5. Acids also contribute to shelf life extension by lowering the water activity, although their biggest impact is the decrease in pH, which is discussed in the next paragraph.

Increasing the acidity of bakery products has been used as a preservation method since ancient times. It is a well-established fact that microorganisms can only multiply within certain pH ranges. The pH of a system is related to the concentration of hydrogen ions which, in the case of food, come from ‘acid’ ingredients that dissociate in water, releasing hydrogen ions in the process. Acidification can be achieved by adding acidulants like sorbic and propionic acid (Table 2). Sorbates are more effective at inhibiting mould growth in bakery products, but have the disadvantages of poor water solubility and of affecting yeasts as well. The latter can result in a reduction of loaf volume and a sticky dough that is difficult to handle. To avoid this problem, sorbate can be sprayed onto the product’s surface after baking, or anhydrides of sorbic acid can be mixed with fatty acids4.

Two properties of food preservatives are the most important for application: the dissociation constant (pKa) and the partition coefficient (Poctanol/water). Food preservatives exert antimicrobial effects by interfering with internal metabolism, which requires them to pass through the cell membrane. Only the undissociated form of an organic acid can pass through the membrane. The degree of dissociation is characterised by the negative logarithm of the ionization constant, the pKa. The pH of a food therefore directly affects the proportion of a preservative that can enter a cell:

pH = pKa → 50% undissociated → active as a preservative
pH = 1 unit below pKa → 91% undissociated → strongly active as a preservative
pH = 1 unit above pKa → 9% undissociated → poorly active as a preservative
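These rules of thumb follow directly from the Henderson-Hasselbalch relation, as the small sketch below shows; the pKa values are taken from Table 3, and the loop and printout are purely illustrative.

```python
# A small sketch of the rule of thumb above: the undissociated
# (antimicrobially active) fraction of a weak organic acid follows the
# Henderson-Hasselbalch relation. pKa values are those given in Table 3.

PKA = {
    "sorbic acid": 4.76,
    "propionic acid": 4.87,
    "benzoic acid": 4.20,
    "methyl paraben": 8.47,
}

def undissociated_fraction(ph, pka):
    """Fraction of the acid present in the undissociated (active) form."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for name, pka in PKA.items():
    for ph in (pka - 1.0, pka, pka + 1.0):
        f = undissociated_fraction(ph, pka)
        print(f"{name:15s} pH {ph:4.2f} -> {f:5.1%} undissociated")

# At pH = pKa the fraction is 50%; one pH unit below, ~91%; one unit
# above, ~9% -- exactly the rule of thumb stated in the text.
```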
Table 2: Effectiveness, working pH ranges and applications of some widely used preservatives
|Preservative||Bacteria||Moulds||Yeasts||Working pH range||Applications|
|Calcium sorbate||+||+++||+++||3.0-6.5||Cakes & pastries|
|Propionic acid||Breads, part-baked breads|
|Calcium propionate||+||++||-||2.5-5.5||Pre-packed rolls, buns and pitta|
|Sodium propionate||All types of bread|
|Acetic acid and acetates||++||+||+||3.0-5.0||All types of bread|
|Sodium diacetate||Cakes & some breads|
|Benzoic acid and its salts||++||++||+++||2.5-4.0||Fruit fillings, jams|
+ Decreases the growth of microorganisms; the more plusses, the bigger the effect
- No effect on the growth of this type of microorganism

The partition coefficient is a measure of the affinity of a compound for fat (octanol) versus water.

Equation 3: Partition coefficient octanol/water:
log Poct/wat = log ([solute] octanol / [solute] un-ionized water)

High Poctanol/water → lipophilic → low solubility in water
Low Poctanol/water → hydrophilic → high solubility in water

The chemical properties and biological effectiveness of some frequently used preservatives are described in Tables 2 and 3.

Table 3: Chemical properties, usage and antimicrobial effectiveness of some widely used preservatives
|Property||Sorbic acid||Propionic acid||Benzoic acid||Methyl paraben|
|Dissociation constant (pKa)||4.76||4.87||4.20||8.47|
|Principal usage||Many foods||Yeast-leavened bakery products||Fruit drinks, soda||Beverages|
|Relative effectiveness against:|

Another, more natural way to preserve can be achieved with the help of bio-preservatives (e.g. lactic acid bacteria), which are commonly found in sourdough. Today, sourdough is employed in the manufacture of products such as bread, (some) cakes and crackers. The most prevalently used cultures are Lactobacillus species, which can lower the pH of the dough to 3.8-4.55. In this range, most pathogenic bacteria cannot grow. Moulds and yeasts can still grow at this relatively low pH, but conditions are not optimal for them. Lowering the pH is normally not the main goal of a fermentation process; improving quality and flavour are. However, there is a sourdough product developed especially for decreasing the pH. The manufacturer claims that it inhibits the growth of moulds, without taste defects, when added as an ingredient or sprayed onto the surface.

Table 4: pH range for minimal and maximum growth plus optimum of growth
|Type of microorganism||Minimum||Optimum||Maximum|

When choosing to work with preservatives it is not always easy to keep a product within the working range of the preservative(s) of choice. Either an overdose is then applied, or a combination of two preservatives. One can also consider using an acidulant, since some of these also have a preserving effect. Table 5 shows some applications:

|Property||Acetic acid||Lactic acid||Citric acid|
|Dissociation constant (pKa)||4.75||3.86||3.14|
|Relative effectiveness against:|
|Other interactions:||Flavour (vinegar)||Flavour (mild, sour)||Leavening interaction, chelating effect (binding)|
|pH 3.5||-||- (3.7:+)||+|
|Required dosage to achieve||0.84%||0.67%||2.49%|

It is not always achievable to induce adequate changes of pH in bakery products to have a significantly inhibiting effect on microbial activities.
In some cases the nature of the ingredients themselves makes it difficult to achieve pH changes, because they may interact with acids. For example, sodium bicarbonate reacts with acids like citric acid, resulting in the formation of carbon dioxide. Furthermore, if hydrocolloids are present in the recipe, pH is also a critical parameter for product stability, because many hydrocolloids lose their solubility, and thereby their gelling properties, if the pH is around the isoelectric point (where the net charge is neutral)7. Another important characteristic when using an acidulant is the buffering capacity of the food: its ability to resist changes in pH. In a food with a low buffering capacity, the pH drops immediately when an acidic compound is added. Baked products that lend themselves to manipulation of pH for shelf life extension include those where an acid flavour is an advantage, is not very pronounced (as with lactic acid), or can be masked, as with citric acid in fruit fillings.

Besides regulating acidity, citric acid enhances the antioxidative properties of other compounds, which brings us to the next internal factor: the redox potential (Eh). The redox potential is literally the ease with which a substance loses (oxidation) or gains (reduction) electrons. When electrons are transferred from one compound to another, a potential difference is created between these compounds. This difference can be measured with electronic meters and is expressed in mV. A few redox potentials of frequently used bakery ingredients are shown in Table 6. Oxidation also occurs when a compound reacts with oxygen; the availability of oxygen therefore affects the oxidation-reduction (redox) potential. Because of this, the redox potential is crucial to the biochemical reactions in food that require oxygen. If nutrients are exposed to oxidation, quality losses result, affecting the shelf life of bakery products.

Table 6: Redox potential and pH of some bakery ingredients
|Type of ingredient||Presence of air||Eh (mV)||pH|
|Wheat (whole grain)||-||-320 to -360||6.0|
|Butter serum||-||+290 to +350||6.5|

Some microorganisms, such as anaerobic bacteria (e.g. Clostridium botulinum), require a relatively low redox potential, while others, such as the ‘bread mould’ Aspergillus niger6, require a relatively high one (Table 7).

Table 7: Redox potential ranges of categories of microorganisms
|Category of microorganism||Eh (mV)|
|Aerobes (e.g. moulds and bacteria like Aspergillus species)||+300 to +500|
|Facultative anaerobes (e.g. yeasts and bacteria like Bacillus species)||+300 to -100|
|Anaerobes (e.g. bacteria like Clostridium species)||+100 to -250|
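As a quick illustration of Table 7, the sketch below maps a measured redox potential to the categories of microorganisms whose growth range contains it; treating the range boundaries as inclusive is a simplifying assumption, not something stated in the article.

```python
# A quick illustration of Table 7: given a measured redox potential in
# mV, list the categories of microorganisms whose growth range contains
# it. Treating the boundaries as inclusive is a simplifying assumption.

EH_RANGES_MV = {
    "aerobes (e.g. Aspergillus species)": (300, 500),
    "facultative anaerobes (e.g. yeasts, Bacillus species)": (-100, 300),
    "anaerobes (e.g. Clostridium species)": (-250, 100),
}

def growth_candidates(eh_mv):
    """Return the categories whose Eh growth range includes eh_mv."""
    return [name for name, (lo, hi) in EH_RANGES_MV.items()
            if lo <= eh_mv <= hi]

print(growth_candidates(350))   # aerobes only
print(growth_candidates(0))     # facultative anaerobes and anaerobes
print(growth_candidates(-200))  # anaerobes only
```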
Especially lipids, for example in oil and butter, are sensitive to oxidation reactions. The oxidation of lipids is called rancidity and can be either slight or intense. Paradoxically, slight rancidity is a bigger concern for bakeries than intense rancidity: small deviations in taste and flavour are not recognised as rancidity, so the blame is usually laid on other possible causes, whereas intense rancidity is obviously noticeable and measures, such as applying antioxidants, can then be taken. Antioxidant is an umbrella term for several compounds which all have the function of preventing or delaying deterioration by oxidation, according to the EFSA (European Food Safety Authority). The most widely known antioxidants are vitamin C (ascorbic acid), vitamin E (alpha-tocopherol), and carotenoids like lycopene (e.g. present in tomatoes) and beta-carotene (e.g. present in carrots)8. Antioxidants significantly delay or inhibit oxidation and thus enhance product stability9.

Almost as a rule, the quality of a finished product is a reflection of its raw materials, according to Dominic Man9. Not all the quality characteristics and parameters of a raw material will influence shelf life; those that do need to be recognised and their effect on shelf life established. An important characteristic of raw materials is their nutrient content. Like us, microorganisms need, besides water, a source of carbon, an energy source, a source of nitrogen, minerals, and vitamins, as well as suitable growth conditions such as the pH already discussed. Since bakery products are a rich source of these compounds, microorganisms can use them for growth and for the maintenance of metabolic functions. An organism’s inability to utilize a major component of the food material will limit its growth and put it at a competitive disadvantage compared with those that can. In general, moulds have the lowest requirements, followed by yeasts and bacteria; however, because nutrients are abundant in virtually every bakery product, nutrient content alone makes it complicated to estimate the growth of microorganisms and thus shelf life. Many food microorganisms can utilize sugars, alcohols, and amino acids as sources of energy; fewer are able to utilize complex carbohydrates such as starches and cellulose, an ability that favours their growth on cereals and other farinaceous products. The addition of fruits containing sucrose and other sugars increases the range of available carbohydrates and allows the development of a more diverse spoilage microflora of yeasts3.

Another concern in the selection of raw materials is the use of substitutes, for example in E-number-free products where eggs can replace hydrocolloids. Eggs introduce a new growth substrate, and a potential source of microorganisms like Salmonella, which must be considered. This critical point leads to the next paragraph, on the influence of product composition and formulation.

The composition and formulation of bakery products is another important shelf-life-determining factor. In contrast to the use of eggs in E-number-free products, many other ingredients have a positive effect on shelf life extension. Margarine, for example, contains at least 80% fat, which limits the growth of most microorganisms: the small size of the aqueous-phase droplets and the inability of microorganisms to move between droplets restrict microbial growth. On a side note, margarine can be prone to oxidation, which must be taken into account as discussed above.

Table 8: Reference cake recipe (A) and extended shelf life by use of salt (B)
|Ingredient||(A) Weight (g)||(B) Weight (g)|
|Skimmed milk powder||8||8|
|Mould-free shelf life at 21°C||10 days||12 days|

The addition of salt to bakery products can also extend shelf life through its powerful water-binding properties, which decrease the availability of ‘free’ water (the aw level). Reducing the water activity makes it harder for most microorganisms to grow10. Because of salt’s high sucrose equivalent (11, see Table 1), a relatively small amount can be added to achieve a large effect on the water activity of the product. However, there is a limit to the quantity that can be added to baked products because of its effects on processing.
Salt induces changes in the viscoelastic properties of gluten and inhibits yeast in (yeasted) doughs. In general, salt has a strong effect on flavour, and nowadays any addition of salt should also be weighed against its association with chronic disorders such as heart and vascular diseases14.

Table 9: Extended shelf life by dextrose (A) and by sugar (B)
|Ingredient||(A) Weight (g)||(B) Weight (g)|
|Skimmed milk powder||8||8|
|Mould-free shelf life at 21°C||12 days||10 days|

Furthermore, the high refractometric solids content of traditional fruit jams results in a long ambient shelf life. However, increasing sucrose levels in a cake recipe is not always practical because of excessive sweetness or a possible formulation imbalance within the recipe.

Table 10: Relative sweetness and sucrose equivalence of some frequently used sugars
|Sugar||Relative sweetness||Sucrose equivalence|
|High-dextrose glucose syrup||0.65||0.9|
|Regular glucose syrup||0.50||0.8|

Humectants are another group of food additives used to reformulate products. A humectant can be defined as a hygroscopic ingredient: one able to absorb a greater quantity of moisture from the surrounding environment than its molecular weight would suggest. By adding humectants it is possible to maintain the ERH while increasing the moisture content, or to reduce the ERH without reducing the moisture content (shown in Table 11). Sorbitol and glycerine are among the most commonly used humectants in the bakery industry.

Table 11: Shelf life extension by glycerol (A); reference recipe (B)
|Ingredient||(A) Weight (g)||(B) Weight (g)|
|Skimmed milk powder||8||8|
|Mould-free shelf life at 21°C||21 days||10 days|

As discussed previously, salt, sugars and humectants can each be used individually to reformulate products and thus extend shelf life. They can also be combined in bakery recipes, so that less of each individual component needs to be added and negative side effects can be tempered. An example of a reformulated cake recipe including sugars, salt and humectants, with reduced water, is shown in Table 12.

Table 12: Combined strategy for shelf life extension
|Skimmed milk powder||8|
|Mould-free shelf life at 21°C||31 days|

Product make-up and structure are often underestimated intrinsic parameters that can influence shelf life. Bakery products, which are mainly solid or semi-solid, do not have a truly homogeneous and uniform structure, so the chemical and physical conditions relevant to microbial growth and to chemical or biochemical reactions can depend strongly on the position within the food. The lower mobility of microorganisms in solid foods allows spatial segregation, which causes pattern formation; there is evidence that taking space into account influences the behaviour of microorganisms15. Significant differences in lipid oxidation between bulk fat and emulsified fat, both of which can be present in cakes and creams, have been observed: an example of a biochemical reaction that depends on product structure, in particular the microstructure16. On the other hand, the macrostructure, better known as product make-up, must also be considered when determining shelf life. Several bakery products consist of different components, such as croissants, cakes, pies and muffins that can be filled and/or coated with fruit jams, chocolate, nuts and/or cream. These components all have different chemical and biochemical compositions.
Through contact between components, migration of moisture, colours, flavourings or oil from one component to another can occur. In fruit pies, migration of moisture from the filling to the pastry leads to a gradual loss of texture. Moisture is exchanged because of the chemical potential difference between the components until the system finally reaches an equilibrium water activity (aw) throughout each domain. Diffusion and moisture kinetics play important roles in these dynamic systems: water activity (aw) equilibrium and rate of diffusion are the two main factors influencing moisture migration. The rate of diffusion is characterised by Deff (effective moisture diffusivity), an overall transport property incorporating all transport mechanisms. Table 13 shows the effective moisture diffusivity of some frequently used bakery products and ingredients. (A minimal numerical sketch of this equilibration appears after the reader questions below.)

Table 13: Water activity, thickness and effective moisture diffusivity of some bakery products and ingredients
|Product||aw||Thickness (mm)||Deff (m²/sec) * 10-12|
|Non-fat dry milk||0.75||3.1||21.3|

The addition of stabilizers can inhibit migration or diffusion of moisture and other compounds between product components. Widely used stabilizers in bakery products are starches and hydrocolloids such as guar gum, xanthan or carboxymethylcellulose (CMC)14. Control of the initial aw and of moisture migration is therefore critical to the quality of many bakery products. It is important to note that moisture migrates from a higher water activity to a lower water activity. This can be recognised in filled croissants or cakes with a standard chocolate/hazelnut (fat-based) filling: the croissant or cake becomes dry and hard, while the filling remains more or less soft.

What is the best way to test products for shelf life in a short time? Our product is a plain flat wafer. We are currently shocking our packaged wafers in different environments several times. What is your expert opinion?

I manufacture an artisan recipe for hard dough biscuits/crackers. I use natural ingredients only, no preservatives. The main ingredients are wheat flour, vegetable shortening, instant dry yeast for fermentation, sodium bicarbonate, cream of tartar, salt, and sugar. Very simple but nicely accepted. I am introducing the product in mass markets, but I cannot make it last long on shelves. The product is only stable for 3 months; after that, the flavour changes, and not nicely. I pack the products in pouches of BOPP/CPP bags, 30 microns, sealed by heat. I would like to keep the formula stable for at least 4 months and preserve flavour without chemical additions to the recipe. Any suggestions?

I am opening this topic to ask you for advice on different issues that I have in the production of my chocolate cookie. As you can see in the picture attached, my product is a chocolate-filled biscuit. My challenges are: 1. I cannot achieve a shelf life of more than 3 months, and I wish to have 1 year to be able to compete with biscuits. 2. The major issue is either a very dry product, or it loses its crunchiness after 1 month. 3. The filling we are using loses its paste form (it becomes very dry; not liquid but a hard mass). Here are the ingredients we put in the products: wheat flour, sugar, vegetable oil, fresh eggs, milk powder, peanut paste, cocoa powder, iodized salt, baking powder, sodium bicarbonate (E-500ii), sorbitol, soya lecithin (E-322), antioxidant (E-319), vanillin.
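As promised in the moisture-migration discussion above, here is a minimal numerical sketch of moisture equilibration between two product components. The lumped two-compartment model, the linear sorption isotherms, the equal solids masses and the transfer coefficient k (standing in for geometry and the effective diffusivity Deff) are all illustrative assumptions, not values from the article.

```python
# A minimal sketch of moisture migration between two product components.
# Lumped two-compartment model; aw is assumed proportional to moisture
# content (linear isotherm), solids masses are assumed equal, and k
# stands in for geometry and Deff. All values are illustrative.

def equilibrate(m_fill, m_pastry, iso_fill, iso_pastry, k=0.05, steps=200):
    """Exchange moisture (g water per g solids, equal solids masses).

    aw_i = iso_i * m_i; water flows from the component with the higher
    water activity to the lower one until the two activities match.
    """
    for _ in range(steps):
        aw_f = iso_fill * m_fill
        aw_p = iso_pastry * m_pastry
        flux = k * (aw_f - aw_p)   # driven by the aw difference
        m_fill -= flux
        m_pastry += flux
    return m_fill, m_pastry

# Moist fruit filling (aw ~0.8) against a crisp pastry shell (aw ~0.2):
m_f, m_p = equilibrate(m_fill=0.50, m_pastry=0.05,
                       iso_fill=1.6, iso_pastry=4.0)
print(f"final aw: filling ~{1.6 * m_f:.2f}, pastry ~{4.0 * m_p:.2f}")

# Both activities converge to a common intermediate value: the filling
# dries slightly while the pastry picks up moisture and loses its
# texture -- the quality loss described in the text.
```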
Chapter 16: Conclusion THE Division, then, had returned to Egypt, whence it had set out early in November with high hopes which were now amply fulfilled. Fulfilled in the advance of nearly 2000 miles from Alamein to Tunis, the longest military progress in history, in which Eighth Army had advanced at the average rate of more than ten miles for each day for almost six months. For the Division the jousting season of the war was over, and no more would battles, exhilarating, costly, disappointing or successful, sway backwards and forwards over the familiar desert. Here the exemplary fortitude and bravery of the troops had been the common factors in all encounters, the variants being supplies, equipment and skill in leadership. Not for many long months, not until after all the bitterness of Cassino, would the troops again experience the exhilaration of pursuing an out-fought, out-manoeuvred enemy. Not until after the Battle of the Senio would they again fight over so much territory in so little time. Never again would the Division, as before El Alamein, stand at the crossroads of history, and by its very presence and quality, with its blood and by its skill, wrest from the battlefield a decision that would influence the strategy of the Allies and the whole course of the war. Hard fighting and physical privations, different in nature but comparable to those experienced in Africa, certainly lay ahead, but the return to Egypt had brought to an end a type of warfare that is unique. Here man’s most modern weapons, in a theatre remote from the obstructions of civilisation, were pitted against each other in a struggle in which the outcome depended on the tactical skill of the commander, the fortitude of the troops and the amplitude of supply. Supply, of water, armour, munitions, motor fuel, confined the war in the desert to particular areas more firmly than did geographical boundaries. In the long run command in the air became decisive, for as Rommel ruefully noted, ‘Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage against modern European troops, under the same handicaps and with the same chance of success.’1 Command of the sea was equally decisive, for he who possessed it deprived his adversary of supply, and at the same time took advantage of this medium for the carriage of the enormous bulk of his own war material. The long advance from Alamein to Tunis was thus a campaign demanding the optimum tactical skill of its commander, for he had so to move his ground forces that the greatest advantage could be gained from predominance in the air and at sea. Landing grounds and ports were of much greater tactical significance than wadis, soft sand and mountain barriers. Finesse in manoeuvre had become the ability to combine the requirements of land, sea and air. Eighth Army had in General Montgomery a commander willing and able to manage these diverse elements which time, the misfortunes of earlier commanders and the workshops of the Allies had united to put into his hands. Other volumes in this series have shown how and why success had eluded his predecessors, either through inability to control one or other of these elements, land, sea and air forces, or because the supply of one, or all, was at some critical period inadequate in quantity or quality. In the period covered by this volume all that was needed was there in abundance – Montgomery was the man who used this abundance to the greatest advantage. 
If the historian is to judge him he need not go beyond the words of the most famous of the enemy desert generals, Rommel, who, in considering the battle of Alam Halfa – and it could have been Medenine, or Mareth, or Akarit – said: ‘There is no doubt that the British Commander’s handling of this action had been absolutely right and well suited to the occasion, for it enabled him to inflict very heavy damage on us in relation to his own losses, and to retain the striking power of his own force.’2 Montgomery himself often expressed this concept in one word, balance, and the retention of perfect balance became one of the predominant features of all his operations.

It is in the field of tactics that the lasting interest in this campaign will be found. There were no strategic surprises, if one excepts the Anglo-American landings in North-West Africa just before it began, and the disappointing development of that enterprise. For even though the self-willed shortsightedness of both Mussolini and Hitler prevented them from seeing the issues at stake, all the rest of the political and military leaders on both sides recognised, after Alamein and the Anglo-American landings, that it was only a matter of time before the Axis was driven from the whole of Africa. The decision to make those landings had in fact been made in July 1942, and by October the watchful Italian Foreign Minister was confiding to his diary that all the information he had led to the conclusion that landings were going to be made, and that, Africa secured, the Allies planned to launch their blows against the Axis. Italy was geographically and logically their first objective.3

Before Alamein, Rommel, while in Rome in September, bluntly informed the Duce that unless supplies were sent on at least the scale he demanded, the Axis would soon have to get out of Africa. He concluded his report to Hitler in similar but more emphatic terms. After Alamein, and after the landings in North-West Africa, Rommel wanted to withdraw to Wadi Akarit, where he could prepare a position against which armour would be little use and which could not be outflanked. He wanted to make only such delaying operations as would not involve him in further losses, and he wanted it accepted that even this drastic step could only serve to gain time, for he now believed that final defeat in Africa was inevitable. His ultimate object was to evacuate the best of his troops to Europe for the continuance of the struggle, and in his judgment Akarit alone would give him sufficient geographical advantage to do this. He calculated, too – an interesting point in view of subsequent criticism of Montgomery’s tardiness – that it would take Eighth Army ‘several months’ to transport sufficient material through the whole of Libya to enable it to attack at Akarit with assured prospect of success.4

As we have seen, Montgomery fought three major operations, and many lesser ones, between Alamein and Akarit, but by using his army on the docks of Tripoli suffered no serious delays waiting for a build-up of supply. The Italian general, Messe, sent in January to command the Italian forces in Tunisia, defined his new duties as ‘commander of the dispersed forces’.
Before leaving Italy he confessed that his task was hopeless, and that he thought his appointment was a backhanded blow struck at him by Cavallero to get rid of him, ‘since he, too, must be convinced that there are no prospects for us in Tunisia.’ Messe felt that he was deliberately being deprived of his reputation.5

Rommel’s proposition was an interesting one and his analysis of final defeat correct, for there is no reason to suppose that the Allies would not have retained command in the air and on the sea, but it is interesting to reflect that had he refused to accept battle between Alamein and Akarit unfortunate consequences would have resulted for the Allies, particularly as regards the timetable for the invasion of Sicily, which was decided in January 1943. The enemy position at Akarit could not in fact be outflanked, nor could it be subjected to tank attack. An earlier appearance of Rommel’s forces in this area would have had decided repercussions on the Anglo-American force which was experiencing teething trouble – as would Rommel’s own presence. Axis supply could have been concentrated instead of dispersed, and the Axis air forces would have been in a much better position to support ground operations. The fifty-two tanks lost at Medenine and the casualties and lost equipment at Mareth, to say nothing of motor fuel and ammunition expended, would have been of great value in the Akarit position. But because Mussolini refused to abandon Tripolitania for political reasons, and then failed to recognise the tactical advantage that would accrue if only minimum delay was made at Mareth, and because to Hitler each yard of ground lost was interpreted as a personal affront, Rommel did not have his way. Moreover, as related earlier, in the complicated area of the service and political hierarchy between Mussolini and Hitler and their field commanders, there was no unanimity on this question. The refusal to withdraw to Akarit was, however, the chief, perhaps the only, strategic decision that, if reversed, might have prolonged the final surrender in Africa beyond the actual date of 13 May. This in turn might have delayed the invasion of Sicily, which, unquestionably, would have added grave stresses to the relationship between London, Washington and Moscow. For Stalin was waiting with some acerbity for the promised opening of the Second Front.

The two great western Allies had not in fact made the vital decisions concerning the theatre of operations after Africa. This is a matter which has been too well told, in Churchill’s The Hinge of Fate, in The White House Papers of Harry Hopkins, and in this series, in a summary by Professor Phillips,6 to require more than brief mention here. The point that must be made is that as late as the end of November 1942, when the end of the war in Africa had become more than a possibility, no decision had been made concerning future operations. Indeed, in Washington, far from Ciano’s prognostication in October, plans for Italy revolved round thoughts of a heavy bombing programme.7 The attack on Sicily was not decided until January, at Casablanca, where the target date, ‘the favourable July moon’, was accepted, and it was not until May, at the Washington Conference after the end of the war in Africa, that instructions were given for plans to be prepared for the invasion of the Italian mainland.
From the strategic point of view, then, the only decision to be borne in mind is that the target date for Sicily was early July 1943 – the invasion actually began on 10 July – so that it was necessary for all operations to be completed in time for the units and commanders taking part to be ready. It is stated above that General Montgomery was pre-eminently suited to command Eighth Army at this time. In the years to come, with Montgomery firmly seated among the great captains, and when the voluminous comment and criticism has been sifted by the accumulated wisdom of time, it will be North Africa to which historians will turn for the first flowering of the genius of his command. El Alamein, regardless of the fact that it was Montgomery who galvanised Eighth Army into urgent activity there, and who fought the great offensive battle, may always be associated to some extent with the names of other commanders. Wavell, Auchinleck and, at the end, Alexander, all made their own contributions at this historic battlefield. But as the gap between the old battlefields and the new was widened, as Montgomery gained experience, so it can be seen that his became the sole hand in control. Montgomery was the first to advance beyond El Agheila. Mareth was the first battle which was entirely a Montgomery battle. Tebaga soon followed. Enfidaville was Montgomery’s first failure, and it is Alexander’s name which will be indelibly linked with the final victory. Closely identified with Montgomery’s grasp of the required tactics was his determination to make certain that his will was not impeded by the differing opinions of his immediate subordinates. After El Alamein an alteration that he made to the command structure brought General Horrocks to 10 Corps, in which were the armoured divisions. At this early stage in Horrocks’s career his greatest asset was his approach to the directions of the Army Commander: these were to be carried out to the letter, without question. Montgomery was not hampered by the accumulation of wrong ideas concerning the use of armour which had mutilated earlier desert battles. His guiding rule was co-ordination, whether in defence or attack, and the narratives of the battles at Mareth and at Akarit or at Enfidaville can be searched in vain for evidence of dispersal of effort or lack of co-ordination between all arms. No longer in attack need the infantry fear that their sternest endeavours would be frustrated because the tanks had failed to arrive at the critical moment. Never again, with the exception of PUGILIST, did the armour fail to advance and to fan out when a breach had been made. In the defensive battle at Medenine no tanks clashed against tanks: instead Rommel was forced into that cavalry nightmare – charging an impregnable gun line. It is difficult to imagine these various situations under an earlier regime. The commendable spirit with which General Horrocks, in whose 10 Corps 2 NZ Division fought for most of the campaign, regarded the orders of the Army Commander, left what dissension there was to the divisional commanders; perhaps to two only, Freyberg and Tuker. The record of the various conferences makes it abundantly clear that while Montgomery and Horrocks spoke with one voice, Freyberg frequently, and Tuker sometimes, disagreed. It would be quite incorrect to jump to the conclusion that Freyberg was not very biddable, and to leave it at that. For Freyberg had experience and battle wisdom, as well as service seniority. 
No other commander in Eighth Army had served so continuously in battle, no other had argued his way so consistently through the ‘bad days’. The Division fighting together as one formation, the set-piece attack, the taped start lines, the lifting barrage, the punched hole and the fanning armour, all these were the epitome of Freyberg and his battle-wise staff. Further, Freyberg was responsible for the prudent use of his Division to his government, which never quite forgot Greece or Crete, and which throughout the whole of this campaign was balancing the requirements of the Mediterranean against the demands of the Pacific. Small wonder that against this background Freyberg frequently raised his voice in dissent, still less wonder that he was listened to with respect. Only on one occasion, during PUGILIST, did Freyberg’s ‘independence’ interfere with the complete fulfilment of Montgomery’s plan. During the later Enfidaville battles Freyberg carried on with the Army plan in direct contradiction to his own opinion. But a man must be judged on his total effort, and during the whole of this campaign Freyberg’s co-operation with Montgomery, his translation of plan into action, his battle sense and his leadership were such that his name will ever be associated with it immediately after that of his Army Commander.

The lasting interest in this campaign will centre on the tactics employed by the commanders on both sides – for the Allies the effort to pin down and destroy the Axis forces, or to keep them on the run, and for the Axis the task of avoiding destruction while offering the maximum delay. Much has been written already of the skill with which Rommel conducted his retreat. While not denying this, it is always necessary to remember that a retreating force has certain advantages, in the selection of ground on which to do battle, and in retreating upon supplies instead of further stretching supply lines. General Freyberg said in Greece that less skill was required to conduct a retreat than an advance, and in Greece the Division learned that relatively small forces can impose damaging delays. Once a small force has made its opponent deploy it retains a certain tactical advantage in that it alone knows how long it is going to delay, and where next it is going to stand. The advancing force must almost invariably be prepared for the worst possible contingency. In Greece, to take an example, the German troops launched three heavy attacks after the New Zealanders had withdrawn, at Servia, at Olympus and at Thermopylae. At Platamon one battalion, the 21st, with no armour or anti-tank guns, forced the deployment of half a tank corps, and then vanished to leave the enemy striking at the air. The problems were the same between Bardia and Enfidaville. Lightly armoured reconnaissance forces led the advance: a handful of tanks, a few 88-millimetre guns and some pockets of motorised infantry delayed it. When contact was made the problem for the advancing force was whether a quick attack would be successful, or whether it was necessary to delay until a larger force was deployed. In Greece the Germans outnumbered the British force by five to one, and had command of the air. In this campaign the Allies had approximately three to one superiority and command of the air. The tactics employed in both cases were identical.
Light reconnaissance forces made contact, the closely following corps containing armour, artillery and infantry in its advanced elements went immediately into action, and if this was not enough the resources of the whole corps were deployed. In neither case were light advanced elements needlessly thrown away, for in both a little deliberation could achieve the desired result with small loss. During the whole of the advance covered by this volume it is difficult to see that different action at El Agheila, Nofilia, at Dor Umm er Raml, Azizia, or Takrouna on first contact, would have altered the final date of 13 May by very much. Before leaving the comparison between the campaigns in Greece and in Africa, it is interesting to remember that the advancing force in both theatres had command in the air, and that in neither case was it used to the greatest advantage. In Greece the Germans had virtually uncontested supremacy, but their air force was not able to interfere with the withdrawal of the British force, largely because instead of concentrating on one or two key targets – bottlenecks in the communications network – it dispersed its effort over the whole field. In Africa the Allied air forces were not seriously contested, but although their contribution was very great they did not create the havoc among the retreating columns that their supremacy might indicate. It is probable that the main reason for this was that attacks were made from too great a height, and that by working much closer to their targets our aircraft would have been more certain of their destruction. The task of Eighth Army was to pin down and destroy the enemy forces, or to keep them on the run. Which was the real intention? Time and again the orders repeated the words pin down and destroy, and during the closer examination of the earlier chapters some disappointment was recorded that this had not been done, and some reasons advanced in explanation. But in the over-all picture it is clear that such disappointment is not valid, with one exception to be mentioned later. Immediately after the breakthrough at Alamein, Tripoli became the objective in the minds of most in Eighth Army, including Montgomery.8 Tripoli, so long sought after, so illusory. There was less of the hope that the bulk of the enemy force could be destroyed or captured. Indeed, it was not until after El Agheila that the suspicion was removed from Montgomery’s mind that the enemy might again break out and make back for Egypt, and he made his dispositions with this possibility in view. The New Zealand Division might well have been disappointed that it did not have enough armour, and that refuelling delays – a normal friction – prevented what armour there was reaching the road in time. But on final analysis Montgomery’s own explanation in his Memoirs is probably valid, that he wanted to get the position quickly and that the best way to do this was to bluff and manoeuvre, ‘to bustle Rommel to such an extent that he might think he would lose his whole force if he stood to fight.’ This was certainly what happened, and if a case is to be made that the outflanking force should have been stronger, it must also be possible to demonstrate that the assembly, march and supply of that larger force would not in itself have imposed further delays, and that Montgomery was completely wrong, at the time, in imagining that the notoriously impetuous Rommel would attempt a sudden thrust towards Egypt, as he had done before. 
Wavell once compared the art of waging war with that of playing contract bridge. He wrote that calling in bridge could be regarded as strategy, the play of the hand, tactics. Strategy, in war as in bridge, can be mastered in a very short time by ‘any reasonable intelligence’, for although in both there is scope for judgment, boldness and originality, both are to a certain degree mechanical and subject to conventions. However, in the end it is the playing of the cards that matters, and in war the hand is always played by the commander in the field. Wavell rated the skilful tactician above the skilful strategist, especially he who played bad cards well.9 This homely analogy, as Wavell called it, can be applied to the battle for the Mareth Line, where Montgomery did the calling and his corps commanders played the hands. The first game went to Montgomery after the model defensive action at Medenine. Montgomery lost the second game by calling PUGILIST, but won the rubber with a grand slam in SUPERCHARGE II. It is the lost game which concerns us here. In calling PUGILIST, Montgomery exercised boldness, originality and judgment. Now that all the cards are face upwards on the table, there can be few quibbles over his bid. Success hinged upon the inability of the Axis to counter-attack on the main Mareth front, and all other factors which eventually militated against that success, the width of the front at Mareth and the consequent difficulty in getting anti-tank guns and armour across Wadi Zigzaou among others, are of lesser importance. As the game was called, and with the cards held, there need not have been a counter-attack. The reasons for the failure of PUGILIST have been examined, and the conclusion reached that General Freyberg played his hand badly: he did not use his good cards well. But that was in the short run. Napoleon’s maxim yet holds good, that the general who wins is the general who makes the fewest mistakes, and among the players Freyberg remained the star performer. With the cards dealt for SUPERCHARGE II firmly in his hand, he did not fail to take a trick. There will always remain the speculation as to the course of events if PUGILIST had succeeded. First and foremost, it would have added greatly to Montgomery’s military reputation, for the plan was finely calculated and its fulfilment would have been spectacular. It is not unreasonable to suppose that much of the enemy would have been cut off and captured, or at least so bustled that reorganisation at Akarit would have been impossible, which in turn raises fresh possibilities for the final battles at Enfidaville. But no one can be certain, for no exact calculations can be made without positive knowledge of the enemy’s reaction to these changed circumstances, and only the events themselves could supply it. In war, each and every circumstance of itself produces unforeseen frictions, a fact which renders so unacceptable the findings of armchair strategists. At Enfidaville, Eighth Army attempted without success to apply the techniques so arduously learned in the desert to a changed topography. This has been made clear in the relevant chapters. On first encounter the whole of Eighth Army went with its commander, and there were no dissenting voices. But after the first unsuccessful engagement the two most seasoned commanders, Freyberg and Tuker, began raising objections which, although extremely pertinent, were disregarded. 
The interest in the final operations lies, then, not in the strenuous fighting that took place, but in the object of it all. This object was Montgomery’s own determination to drive Eighth Army through to Cape Bon. From the historical point of view, here was Montgomery’s first failure, for PUGILIST was virtually one part of a battle in which Montgomery retained the initiative throughout and can be excepted. Upon final analysis the operations that began on 19 April, and included the notable infantry achievement of the capture of Takrouna, may well be regarded as an ambitious, even an incautious, but nevertheless legitimate ‘try-on’. For the only way to discover if the enemy intends to stand and fight is to attack and find out. That is the most favourable case that can be made. On the other hand it is fair to say that optimism in war must have its limits, and that it might have been reasonable to have realised that the enemy must stand and fight, or perish. Stalingrad, where for the first time since Napoleon a Prussian army had been captured intact, had given no indication that capitulation would not be delayed until the last possible minute. In somewhat similar circumstances, for the enemy had no means of retreat, Montgomery initially attacked a mountainous, essentially defensible area, held, on contemporary calculations, by at least the quantity of troops that had opposed Eighth Army at Akarit, and probably by more. Armour was of little use to either side, and the bulk of the Allies’ air power was being used on the First Army front. Where at Akarit the break-in attack had been made by three divisions, with a fourth division briefed for exploitation, at Enfidaville two divisions attacked, and the major exploitation role was to have been accomplished by one of the attacking divisions. The other two divisions had minor roles. Upon the failure of this operation a new plan was made in which two divisions, one of them inexperienced, were to attack the hill positions, and the New Zealand and 7 Armoured Divisions were to break out. The area held by the Axis was as readily defensible as the Cassino area in Italy and in many ways comparable with it. The line could not be outflanked, as at Mareth, and had considerably more depth than Akarit where initial penetration breached the position. Eighth Army did not have sufficient infantry to capture the Enfidaville position, and without it the plans that were made were quite unrealistic. Without doubt the ‘fight and find out’ theory was here pressed too hard, but perhaps in terms of the experience that is required to test and temper a great military commander, Enfidaville was salutary and necessary. The campaign as a whole was notable for some interesting innovations in battle technique, or for the development of what was best in the old. The most impressive advance was in the field of co-operation, for as observed at the beginning of this chapter, the nature of the campaign demanded that the Eighth Army commander must combine the requirements of land, sea and air. Only by ensuring that tactical objectives included advanced landing grounds and that troops, as well as capturing them, cleared10 them for immediate operations, was supremacy in the air used to the greatest advantage. In similar fashion troops assisted in the rapid re-establishment of ports, so that supplies could be brought in bulk and so that the striking arm of the Navy – MTBs operated from Tripoli, Sfax and Sousse within a day or two of capture – could work close to the front line. 
Much of the Eighth Army did round-the-clock stevedoring at Tripoli, with the result that the Army was based on that port within weeks of capturing it and the long land haul from Benghazi was eliminated. Many innovations were introduced in the air force’s vital role of assisting the ground troops. The El Agheila operation had demonstrated that there was room for improving the demarcation of bomblines when the air force was required to work close to advancing troops, and the ‘left hook’ at Mareth saw much development in the techniques of using coloured smoke, day and night landmarks, and ground to air communications. The ‘tank-buster’, which was in effect an airborne anti-tank gun, first began its devastating work in Tunisia and probably destroyed more tanks (except at Medenine) than the ground forces. The ‘cabrank’ system, where fighters and tank-busters poised in the air in continuous circuit to be directed on opportunity targets from the ground, was begun at Tebaga. At Enfidaville aircraft were first used in Eighth Army as artillery observation posts, with gunners trained to operate them. Carpet or area bombing, of localities declared of nuisance value to the ground troops, was undertaken, and the closely ranked formations of eighteen or more bombers, at medium height, became a familiar sight to the troops. That busy maid-of-all-work, the Douglas transport, was given a new role, the evacuation by air of sick and wounded. And probably of equal importance was the great impetus to morale that this closer co-operation, these new and diverse duties, gave to all ranks in the air and on the ground. For the troops it seemed that the old days of frustration, when effort and sacrifice, for some reason or other, had been thrown away, were over. The old, deep, grievances that had centred on the use of armour were forgotten and replaced by a new and growing admiration for the determination and skill of the Armoured Corps. The battle at Medenine, where Montgomery relied almost entirely on his anti-tank guns sited in a defensive network in advanced positions, and on his artillery which separated attacking tanks and their supporting infantry, with which it dealt methodically; the battle at Tebaga, where tanks led the infantry in the most perfect example of united action between ground and air that any army, British or German, had yet seen; the spectacular ‘break-outs’ by armour and mobile infantry at Mareth and Akarit; and the everyday, forceful reconnaissance by light armoured and cavalry corps, all of these things provide a fascinating field for study, and in their growth and development gave to the Eighth Army its collective élan and unbridled confidence. In these diverse activities, in this fruitful field of military endeavour, the New Zealand Division, under its much experienced and battle-wise commander, Freyberg, whose name was already inseparable from his division, played its part, and added to, and drew from, the accumulated pool of knowledge. For the men of the Division the great advance was perhaps the high-tide of the war. Not only did the relentless, onwards movement signify success, which is heady wine, but time and the changing panorama of the Mediterranean coast of Africa combined with that success to give perspective to that total experience of war in the desert. All now had meaning. 
The scars were healing, and although the childhood picture of the gently moving column of dust, which might be a djinn, had faded for ever, the vision of the silent desert under a canopy of stars unbelievably bright would remain always. When a desert veteran thinks of sunrise, he will remember the ethereal beauty of the cold, unearthly clarity of the starlight as it was warmed and suffused by the palest peach, the delicate rose, the richer gold of the rising sun. When he thinks of shade, he will remember the joy of the unexpected oasis. When he yearns for space he will in memory stand at dawn, before the haze of the day, and gaze over the limitless desert which was once all torment, thirst, hatred and blood. Not for nothing had these men come ten thousand miles from their homeland in the new world to play their part in restoring a balance in the old.
The unitary region of Dumfries and Galloway was created in 1975, following a reorganisation of local government in Scotland. It brought together the historic counties of Dumfriesshire, Wigtownshire and the Stewartry of Kirkcudbright. Though now part of Scotland, the region was once part of the ancient Northern Brittonic Kingdom of Rheged and later the Kingdoms of Strathclyde and Northumbria. Given its location in southwest Scotland, its history has been influenced by the Picts, Romans, Angles, Vikings, Danes, English and Scots. This is reflected in the variety of place-names throughout the region, and one can find examples of Cumbric – a Northern Brittonic language closely related to Welsh – Old Norse, Old English, Gaelic and Scots. There are examples of Roman, Medieval and ecclesiastical archaeology throughout the region, with many historical sites now popular tourist attractions.

The natural resources of the region have long been utilised. While much of the forested area is ancient woodland, the Forestry Commission owns extensive plantations in the region, and forests at Mabie, Ae and Dalbeattie offer nature reserves for wildlife as well as trails for walking and cycling. The Galloway Hydro-Electric Power Scheme was the first large-scale hydro-electric power scheme in Scotland, becoming operational in 1935; though not on the same scale as later hydro-electric schemes in the Highlands, it had a distinct character, with several of its power stations – such as the turbine hall at Tongland – designed in the modernist style.

Very much a rural part of Scotland, Dumfries and Galloway has a varied landscape, with agricultural fields and pastures, the rolling Galloway Hills, and many forests and lochs. At the heart of the region is the Galloway Forest Park. Covering nearly 300 square miles, it was first established in 1947 and in 2009 was declared Scotland’s first Dark Sky Park, where light pollution is restricted and conditions are perfect for stargazing.

At the southern end of Dumfries and Galloway, the coastline runs along the Solway Firth. One of the largest estuaries in the British Isles, much of the Solway Firth is a designated conservation area and contains several nature reserves. Its coastline is varied and runs from salt marshes and mires in the east, through sandy beaches, to the rugged cliffs of the Mull of Galloway in the west. The significance the sea played to local communities is reflected in the number of key towns situated along the region’s coastline – Stranraer, Wigtown, Whithorn, and Kirkcudbright.

The region has contributed many significant figures who have made their mark in various fields. J.M. Barrie, Thomas Carlyle, and Robert Burns are among the important literary writers and poets who have lived in the region. Other notable figures include John Paul Jones – commander of the fledgling United States Navy – who was born in Kirkcudbrightshire; Thomas Telford – the famous civil engineer – who was born near Westerkirk in Eskdale; the actor John Laurie, who was born in Dumfries, where for many years his family ran a hatter and hosiery business at the town’s Church Place; and Jane Haining, the Scots missionary who died in Auschwitz concentration camp, who was born in Dunscore and attended Dumfries Academy. More recently, the likes of the racing driver and commentator David Coulthard, actor Sam Heughan, and music producer Adam Wiles (aka Calvin Harris) have helped put Dumfries and Galloway on the map.
Where to find local collections: Tel: 01387 260285

Book – non fiction

One of the distinct geographical features of the Solway Firth is Luce Bay (https://maps.nls.uk/view/197253157). It is a shallow bay that lies between the Machars and the Rhinns of Galloway in Wigtownshire. At its head it is 10.5km wide, while it is 31km at the widest part of its mouth between the Mull of Galloway and Burrow Head. It is now a designated Special Area of Conservation, whose natural habitat is monitored by Scottish Natural Heritage.

Book – fiction

Originally published as a weekly serial in The Sunday Mail, Wigtown Ploughman: Part of His Life was the 1939 debut novel by John McNeillie. It is the tale of Andy Walker, as he attempts to rise through the tenant-farmer system in Galloway, while contending with violence, drunkenness and immorality in the rural southwest. Though the main narrative is written in English, McNeillie's characters use broad Galloway Scots.

Gallowa Scots haes a wheen wirds an pronunciations lanerly tae the soothwest, tho these can chynge whan traivelin atween districks. In the mair westren pairt cried the Rhins the Irish influence is mair tae the fore an is whiles kent as 'Galloway Irish'. Stranraer haes a puckle o braw wirds still in common yaise sic as 'stenter' for claes-pole an 'fecket' fir jaiket. You can learn mair aboot Wigtown Ploughman an' Gallowa Scots @ https://wee-windaes.nls.uk/wigtown-ploughman-an-gallowa-scots/

The Solway Counties was filmed in 1955 and offers a survey of the three Solway Counties of Wigtown, Kirkcudbright and Dumfries. It was produced by the Scottish Educational Film Association (SEFA), which looked to promote the use of films as educational aids. As such, the film highlights examples of agriculture, forestry, factory work and other industries and trades active in the region:- https://movingimage.nls.uk/film/1792

In 1963, SEFA produced Dumfriesshire Journey, a tour of some of the historic sites throughout the County:- https://movingimage.nls.uk/film/0796

The Sweetheart Breviary is one of the most significant additions to the Library's collections in recent years. Sweetheart Abbey near Dumfries was founded in 1273 and was the last Cistercian monastery to be established in Scotland. Its founder was Lady Dervorgilla de Balliol, the mother of King John Balliol of Scotland, who established it in memory of her husband; following her death in 1290, she was laid to rest next to her husband's embalmed heart, and so the Abbey was named in her memory. The Breviary was written in the first half of the 14th-Century and, unlike other surviving liturgical manuscripts from the period, is a complete volume. Its 200 vellum leaves contain the text of monastic prayers which would have been used throughout the year in Medieval Scotland:- https://digital.nls.uk/early-scottish-manuscripts/archive/131619877

This map of Gallovidia, vernacule Galloway appears in Joan Blaeu's Atlas Novus, which he published in Amsterdam in 1654. The maps in Blaeu's Atlas of Scotland are based principally on the work of Timothy Pont and were accompanied by textual descriptions of Scotland and its regions. Of Gallovidia it notes: The region rises in hills everywhere, which are more productive for feeding herds than growing crops.
The inhabitants engage in fishing both in the surrounding sea and in the rivers and lochs which flow everywhere below the hills; from these at the autumnal equinox they catch in boxes an incredible number of very tasty eels, whence they make no less profit than from the tiny horses with compact, strong limbs for enduring toil which are exported from here.

Before the establishment of the Ordnance Survey, Estate Plans often offered the most detailed maps of rural parts of Scotland. For the three counties of Dumfries and Galloway, there are many such maps, dating from the 18th Century onwards, @ https://maps.nls.uk/estates/ The National Library's Map collections include extensive holdings relating to Dumfries and Galloway, including many pre-Ordnance Survey maps and estate plans. The National Library has collaborated with the Dumfries Archival Mapping Project (DAMP) to help make such maps available for educational, cultural and general interest purposes – https://geo.nls.uk/maps/damp.html

There are several newspaper databases available through the eResources which offer the full text of historical copies of many of the newspapers published in the region:- https://auth.nls.uk/eresources/browse/subject/99 For example, the British Newspaper Archive includes issues from the likes of the Dumfries and Galloway Standard; Annandale Observer and Advertiser; Eskdale and Liddesdale Advertiser; Galloway News and Kirkcudbrightshire Advertiser; and the Galloway Gazette.

Apparently still the place to be on a Saturday night out in Dumfries, the Hole I' The Wa' Inn is one of the town's oldest drinking establishments, having first opened in 1620. At one time the Inn sat on a lane – the Mid Row – which ran parallel to the town's High Street, but the expansion of surrounding buildings means that it is now accessed from a narrow close off the High Street, hence its name. As with many public houses, the Inn used its associations with Robert Burns as a marketing tool, as reflected in this advertisement which appeared in the Dumfries and District Post Office Directory (general, street, and trade directory) for 1911 and 1912:- https://digital.nls.uk/directories/browse/archive/85537680. The Inn's "Repository of Burns Relics" apparently included "his original honorary Burgess Ticket presented by the Royal Burgh of Dumfries to him" in 1787; autographed songs and letters; as well as "part of his household effects".

Though born at 14 India Street, Edinburgh, James Clerk Maxwell spent much of his life at his family's Dumfriesshire home, Glenlair House near the village of Corsock. Considered to be the father of modern physics, Maxwell has long been recognised in professional circles for his contribution to science, but in recent years he has received more recognition from the wider public and, as such, was voted Scotland's top scientist in an online poll for the National Library.

Spedlin's Tower (https://canmore.org.uk/site/66237/spedlins-tower) is a recently restored tower house on the west bank of the River Annan, north of Lochmaben. It was once the traditional seat of the Jardines of Applegirth, who had judicial responsibilities over the area. In the late 17th-Century, the then lord, Sir Alexander Jardine, had cause to temporarily imprison the local miller, James Porteous, in the Tower's dungeon.
Called away to Edinburgh on urgent business, Sir Alexander discovered on arriving at the capital several days later that he still had the keys to the dungeon and in his haste had forgotten to release Porteous, who had now been imprisoned for many days. Jardine dispatched a servant back to Spedlin's Tower to release Porteous, but by the time Jardine's man reached the dungeon it was too late. Porteous was found dead but, in the throes of starvation, had eaten parts of his own body in a futile attempt to survive. As a result of his tormented death, Porteous' spectre was said to haunt the Tower and its occupants. So frequent were the ghost's visitations upon the Jardine family that a clergyman was employed to exorcise the spirit. After a struggle which was claimed to last a full day, the clergyman succeeded in confining the spirit to the dungeon, though its shrieks and cries were often still heard. Legend has it that the ghost's containment depended on the ancient bible the exorcist had used being kept in the Tower; when the bible once had to be rebound, the ghost's activities increased once more, and it pursued the family to their new home of Jardine Hall on the east bank of the river – it was only when the bible was restored to the Tower that the hauntings ceased.

This was one of the tales cited by Sir Walter Scott in the introduction to the 1839 edition of his Minstrelsy of the Scottish border: consisting of historical and romantic ballads, collected in the southern counties of Scotland; with a few of modern date, founded upon local tradition. It was also referred to by the Reverend Thomas Marjoribanks in the New Statistical Account of the Parish of Lochmaben.

A Castle or other historic building

Caerlaverock Castle is a distinctive triangular fortress surrounded by a moat, near the shores of the Solway Firth:- https://maps.nls.uk/view/74942387#zoom=6&lat=6099&lon=8527&layers=BT. Volume 1 of the Topographical, Statistical, and Historical Gazetteer of Scotland offers a description of the Castle and its history:- https://digital.nls.uk/gazetteers-of-scotland-1803-1901/archive/97440666 The Castle was besieged on several occasions by various armies and was eventually abandoned in the 17th-Century. An engraving which shows the ruinous state of the Castle in the 18th-Century can be found within the Blaikie Collection of Jacobite prints and broadsides:- https://digital.nls.uk/75242351 The Castle is now maintained by Historic Environment Scotland, while the surrounding land is a protected nature reserve.

For hundreds of years, the Castle was the principal seat of the Maxwell family. The Book of Caerlaverock; Memoirs of the Maxwells, Earls of Nithsdale, Lords Maxwell & Herries by William Fraser was privately printed for William Lord Herries in Edinburgh in 1873. A two-volume work, it details the history of the Maxwell family and their tenure at Caerlaverock.

Written in Dumfries in 1722, A Large collection of choice recipes for cookrie, paistrie, milks, sauces, candying, confectionating, and preserving of fruits, flowers, &c. is divided into "five books" and contains recipes for "1. Paistrie. 2. Milks, &c. 3. Cookrie. 4. Fruits, Flowers, Pickles & Colouring. 5. Biskets & Cakes".

The calotype was an early photographic process developed by William Henry Fox Talbot.
The Edinburgh Calotype Club is the oldest photographic club in the world, and the collections of its first photographs are arranged in two volumes, one held at the National Library of Scotland, with a second volume at Edinburgh Central Library. These include not only subjects in Edinburgh but photographs of subjects throughout Scotland, among them this image of Craigielands House, near Beattock. The House appears on OS Maps: https://maps.nls.uk/view/74944279#zoom=5&lat=10154&lon=5859&layers=BT

Something about the County Town

Although in a rural part of Scotland, Dumfries has not been isolated from international events. During the Second World War, Dumfries became home to the Norwegian Army in exile, with thousands of Norwegian servicemen based around the town. The Rosefield and Troqueer Mills on the southern banks of the River Nith became makeshift barracks for many of the Norwegians. Servicemen from other occupied nations – Dutch, Czechs and Poles – also took part in manoeuvres in the surrounding countryside. There were often football matches between the various units and local teams. For example, on Saturday, 25th January 1941, an Inter-Allied Football Match took place between a Dutch team and a Norwegian team, as part of a series of matches in aid of the welfare funds of the different Allied units; the match ended in a 3-3 draw and was played at the town's Palmerston Park, home of Queen of the South F.C. The Norwegians became a significant feature of life in Dumfries, and they were given a former restaurant on the town's Church Place, which became Norge Hus (Norway House), a cultural and social hub for them during their time in the town.

Something about a village or small place

The village of Ruthwell was home to the world's first modern savings bank. The bank was established in 1810 by Dr Henry Duncan. Duncan had been ordained as Minister of Ruthwell Parish in 1799 and spent much of his tenure attempting to improve the impoverished conditions in which his Parishioners lived. Reviving the local Friendly Society, in 1800 he persuaded the landowner, the Earl of Mansfield, to permit him the use of a derelict cottage in the village. Duncan used the cottage as a hub for his efforts to improve the community's lot, distributing food there and eventually establishing the savings bank. As well as caring for his Parishioners, Duncan was interested in the history of the Parish and was a keen antiquarian, restoring the Ruthwell Cross, an inscribed Anglo-Saxon Cross dating from the 8th-Century which had been broken during the Reformation. Duncan provided a detailed account of the Parish of Ruthwell in the New Statistical Account of Scotland published in 1845, an account which reflects the extent of his interests.

Duncan later became a leading figure in the early Free Church, and it was through the "indefatigable exertions of its founder, the merits of banks of the kind for popular use were speedily acknowledged by statesmen and philanthropists of all classes" (Annals of the Free Church of Scotland by Rev. W. Ewing). Duncan established a Free Church and Manse at Mount Kedar, on the road between Mouswald and Ruthwell, north-west of the Glasgow & South-Western Railway Line; it was described as being "surrounded with gardens and grounds which [Duncan] had laid out with exquisite taste" and "one of the finest residences of the kind in Scotland".
Following Duncan's death in 1846, a pyramid-shaped monument was erected at Mount Kedar in his memory.

The Poets of Dumfriesshire (1910) was written by Frank Miller of Annan – https://search.nls.uk/permalink/f/1jc5lod/44NLS_ALMA51591337810004341. It offers not only a literary history, but also an antiquarian and historical overview of poems and ballads from the region, both ancient and modern. In compiling this work, Miller drew upon records held by the likes of the British Museum, the Advocates' Library, and other respected collectors. The author draws attention to several poets who were hitherto unknown outside the region; for example, Ballantyne Ferguson, a Gretna farmer who died in 1869, aged seventy-one, and who produced a number of tales and poems, one of which – Young Bridekirk – would be published by the Annandale Observer after his death. A digitised version of Miller's work has been uploaded onto the Internet Archive:- https://archive.org/details/poetsofdumfriess00mill

Studies in the Topography of Galloway (1887) contains details of nearly 4000 placenames, with notes on their origin and meaning. It was written by Sir Herbert Eustace Maxwell (1845-1937). Descended from the Maxwells of Caerlaverock, Maxwell was President of the Society of Antiquaries of Scotland and one of the first Chairs of the National Library of Scotland. He was a keen antiquarian and wrote several important works on the history of the region – https://digital.nls.uk/82082297

Guy Mannering; or The Astrologer (1815) was the second in the Waverley Novels series written by Sir Walter Scott, and was largely set in Galloway. It proved an immediate success; its Edinburgh print run sold out in a single day and, owing to its popularity, 11 editions would be published over the course of Scott's life. This edition was published in Glasgow in 1870 by David Wilson of Maxwell Street:- https://digital.nls.uk/107396291
ABSTRACT: Symptomatic rotator cuff disease usually occurs when biological factors combine with dynamic events in the setting of static structural problems. At risk are persons with compromised blood supply to the cuff tissues; a history of repetitive overhead or sudden, acute trauma to the rotator cuff; or abnormalities that disrupt the balance of forces across the cuff. The key to management lies in understanding rotator cuff anatomy and biomechanics. Patients may present with pain and weakness, but tears also may be asymptomatic. A combination of physical examination, shoulder testing, and MRI usually will confirm the diagnosis. Evidence suggests that rotator cuff disease can progress if the patient does not receive treatment. Rest, corticosteroids, and physical therapy are the basics of conservative care. (J Musculoskel Med. 2008;25:481-488)

Recent advances have greatly improved the understanding of and treatment options for disorders of the rotator cuff and associated subacromial pathology. As a result, several concepts have emerged as central tenets of evaluation and treatment. Rotator cuff disease has a constellation of signs and symptoms that are associated with an alteration of normal anatomy and function. The incriminating anatomical structure or precipitating event that causes rotator cuff disease may not only relate to abnormal anatomy of the anterior acromion and coracoacromial arch but also include traumatic events, repetitive overhead activity, or glenohumeral joint imbalance associated with asymmetrical capsular tightness. In addition, poor biological health of the rotator cuff tissue may hinder the ability of tendon tissue to recover from small injuries.

An appreciation of the complexity of shoulder anatomy, physiology, and biomechanics is an important part of evaluating and treating patients with suspected disorders of the rotator cuff and subacromial space. A thorough understanding of rotator cuff disease is important to the orthopedists who may provide surgical treatment. It is also important to the primary care physicians, physical therapists, and athletic trainers who are well positioned to make a diagnosis and manage these disorders early in the spectrum of disease progression. In this 2-part article, I discuss and present a rationale for patient evaluation and management of rotator cuff disease. This first part reviews the anatomy and pathogenesis of various types of rotator cuff disease and approaches to patient evaluation.
In the second part, to appear in an upcoming issue of this journal, I will describe the surgical technique and rehabilitation of arthroscopic subacromial decompression, as well as repair of the rotator cuff by both arthroscopic and "mini-open" methods.

Accurate diagnosis and effective management of rotator cuff injuries require an understanding of shoulder anatomy and biomechanics. An awareness of the discerning features of a patient's history and physical examination, as well as an understanding of potential contributions from surrounding structures, helps clinicians develop a differential diagnosis and formulate an effective treatment plan.

The rotator cuff is formed from the coalescence of the tendon insertions of the subscapularis, supraspinatus, infraspinatus, and teres minor muscles into 1 continuous band over the greater and lesser tuberosities of the proximal humerus (Figure). This arrangement reflects their purpose of functioning in concert. In fact, the name "rotator" cuff may be a misnomer; although the individual muscles of the rotator cuff can rotate the humerus, the major function of the rotator cuff is to depress and stabilize the humeral head, effectively compressing the glenohumeral joint to provide a stable fulcrum for arm movement.1-4

Figure 1 – An understanding of shoulder anatomy and biomechanics helps clinicians make an accurate diagnosis and effectively manage rotator cuff injuries. Although the individual muscles of the rotator cuff can rotate the humerus, the major function of the rotator cuff is to depress and stabilize the humeral head, effectively compressing the glenohumeral joint to provide a stable fulcrum for arm movement.

At the rotator cuff interval, the tendon of the long head of the biceps traverses the glenohumeral joint and the cuff is reinforced by the coracohumeral ligament, with extensions to a restraining sling around the biceps and to the rotator cuff "cable," a sling formed from a confluence of the glenohumeral and coracohumeral ligaments. Anterolateral pain results from inflammation; it could mimic rotator cuff impingement. Abduction strength, although powered by the deltoid, requires a stable fulcrum provided by a functioning rotator cuff.

Passive restraints to glenohumeral translation are lax in mid range, so joint stability in this position relies on functioning rotator cuff muscles, which act as dynamic stabilizers. Dynamic stabilization of the glenohumeral joint is achieved by the coupling of the forces of each rotator cuff muscle5 and the 3 heads of the deltoid muscle.6 The balanced muscle pull increases the concavity compression of the glenohumeral joint. This "force coupling" occurs when the anterior forces from the subscapularis and anterior supraspinatus are balanced by the posterior supraspinatus, infraspinatus, and teres minor. Balancing these forces with even a partial repair of large retracted tears is thought to provide a more stable fulcrum for shoulder motion, leading to functional improvement.3,4

Balancing muscle forces to stabilize the shoulder joint is thought to involve the rotator cuff "cable," an important band of tendon thickening.7 The cable is a normal thickening in the intact supraspinatus and infraspinatus tendons that routinely may be seen arthroscopically from the joint side of a normal rotator cuff.
This thickening in the capsule and overlying tendon extends from its insertion just posterior to the biceps tendon to the inferior border of the infraspinatus tendon7 and is thought to allow the forces across the rotator cuff to be dispersed in a manner similar to a suspension bridge.8 This organization of force distribution explains why some patients can maintain reasonable shoulder function in the setting of a painful full-thickness tear. If their rotator cuff cable is maintained, it can allow for balanced kinematics.9

There is considerable debate about the function of the long head of the biceps tendon within the glenohumeral joint. Some reports suggest it plays a role as a humeral head depressor,10,11 but careful clinical analysis has demonstrated that its role as a humeral head depressor may be less than previously thought.12 The tendon of the long head of the biceps traverses the glenohumeral joint at the rotator cuff interval (see Figure). There the cuff is reinforced by the coracohumeral ligament, with extensions to a restraining sling around the biceps, and to the rotator cuff cable.13 This sling is formed from a confluence of the glenohumeral and coracohumeral ligaments.14 Inflammation in this region produces anterolateral pain, which could mimic rotator cuff impingement but is often discernible by tenderness directly over the biceps tendon in several arm positions. Although the rotator cuff interval is a normal-appearing gap between the anterior supraspinatus and the superior edge of the subscapularis, the 2 tendon edges are confluent near their insertion onto the humerus. A tear in this region can disrupt the integrity of the biceps' normal restraining mechanism, and biceps stability must be assessed carefully.

The suprascapular artery is a primary vascular supplier to the supraspinatus tendon. The quality of the blood supply to the cuff insertion on the greater and lesser tuberosity is thought to be a factor in the development of rotator cuff disease, particularly in this "critical zone" of the supraspinatus tendon. The microvascular structure of the supraspinatus tendon suggests that there is a region with a tenuous blood supply and an associated limited capacity for intrinsic repair. This is a common location of partial- and full-thickness rotator cuff tears.15 The proximity of this critical zone of the supraspinatus tendon to the long head of the biceps tendon makes it difficult to determine whether pain in this region is coming from the rotator cuff or the biceps tendon.

Altered bone and ligament anatomy in the subacromial space may be both a cause and an effect of rotator cuff disease. For example, a hooked type 3 acromion, a thickened or calcified coracoacromial ligament, or an excrescence on the anterolateral corner of the acromion represents abnormal bone anatomy, which can cause abrasion of the bursa and supraspinatus tendon, resulting in the inflammation and pain characteristic of early impingement syndrome. Alternatively, a large retracted rotator cuff tear in which the force coupling has failed will permit superior migration of the humeral head, which may articulate with the coracoacromial arch. Over time, this pathological articulation results in rotator cuff arthropathy in which permanent bone changes occur as the result of a rotator cuff injury.

The pathogenesis of rotator cuff disease is multifactorial; underlying biological factors set the stage for pathology by limiting the tendon's ability for self-repair.
Aging changes the biology of the tendons, resulting in fiber thickening and granulation tissue. Some regions of the tendons have a naturally tenuous blood supply, reducing the intrinsic ability for healing after small injuries. Diabetes mellitus (DM) and nicotine exposure further harm the microvascular supply and, therefore, the healing capacity of the tendon.

Static and dynamic causes of pathology

Symptomatic rotator cuff disease usually occurs when these and other biological factors combine with dynamic events that occur in the setting of static structural problems.16 Structural abnormalities of the coracoacromial arch may cause abnormal compression of the cuff, weakening the tendon and leading to cuff disease and injury. In addition, an accumulation of small injuries to the cuff may lead to dysfunction, which causes other structural problems in the subacromial arch and precipitates further progression of the cuff injury. This vicious circle and the interrelationship between dynamic events and static causes of rotator cuff pathology may be inconsistent: primary structural abnormalities do not always lead to cuff pathology, but cuff pathology often leads to structural abnormalities.

Static structural causes of cuff disease are related to the coracoacromial arch morphology. Changes or abnormalities in the arch can compress the rotator cuff. For example, an acromion with an anterior hook or a lateral slope can pinch the cuff tendon. Acromioclavicular joint arthritis and medial acromial spurring can impinge the supraspinatus. An os acromiale or nonunion of an acromial fracture will compress the underlying cuff by changing the shape of the acromion and by promoting the regional growth of osteophytes. Coracoacromial ligament ossification or hypertrophy changes the flexibility and shape of the anterior scapulohumeral articulation, causing abnormal compression of the cuff in provocative positions.

Without a clear-cut high-energy traumatic event, external or internal impingement usually precedes an injury to the rotator cuff. External impingement occurs when the bursal side of the rotator cuff is compressed against the coracoacromial arch. Dynamic causes of external impingement include muscle weakness, which allows for a superior orientation of the humeral head, causing the cuff to abrade on the undersurface of the acromion and coracoacromial ligament. A type 2 superior labral anterior to posterior (SLAP) tear can allow excessive humeral head motion toward the coracoacromial arch. Weak scapular stabilizers can aggravate these problems by placing the acromion at an angle that promotes contact with the underlying cuff tendon.

Internal impingement exists when the articular side of the cuff is pinched between the posterosuperior glenoid and an eccentrically articulating humeral head. Internal impingement may result from a tight posterior capsule, which causes the humeral head to ride superiorly and anteriorly, as often seen in adhesive capsulitis or in throwing athletes (particularly baseball pitchers) who have glenohumeral internal rotation deficit.17 As originally described, internal impingement occurs with the arm in the cocked position of 90° abduction and full external rotation.18 In throwers and overhead athletes, this position brings the articular surface of the rotator cuff insertion against the posterosuperior glenoid rim. Repeated forceful contact between the undersurface of the rotator cuff and the posterosuperior glenoid and labrum is said to cause posterior superior labral lesions.
With time, undersurface partial-thickness rotator cuff tears follow. Despite contact between these 2 structures occurring physiologically, theory holds that repetitive contact with excessive force can produce injury. However, many patients with this constellation of pathological findings are not overhead athletes,19 and the condition does not develop in most throwers even though the position is achieved regularly.20 Burkhart and associates17,21,22 developed a comprehensive evaluation of arthroscopic findings, patient outcomes, and biomechanical experimental data to indicate that the so-called internal impingement found in throwers is caused by a complex syndrome related to scapular dyskinesis and kinetic chain abnormalities that result in scapular malposition, inferior medial border prominence, coracoid pain and malposition, and dyskinesis of scapular movement (ie, SICK scapula). This combination of pathomechanics initiates a cascade that leads to type 2 SLAP tears and partial-thickness rotator cuff tears.

Small full-thickness rotator cuff tears exist in shoulders that are essentially pain-free and have little or no limitation in function. In one study, 23% of 411 asymptomatic volunteers with normal shoulder function were found to have full-thickness rotator cuff tears diagnosed by ultrasonography; the prevalence of tears was 13% among volunteers aged 50 to 59 years and 51% among those older than 80 years.23 Cadaver dissections have demonstrated rates of up to 60% of partial-thickness or small full-thickness tears in cadavers older than 75 years at the time of death.24

Several factors affect the natural history of various tears; therefore, whether asymptomatic tears will become symptomatic with time often is unpredictable. Yamaguchi and colleagues25 monitored 44 patients with known bilateral rotator cuff tears in an effort to learn the natural history of asymptomatic rotator cuff tears. In this study, 23 (51%) of the previously asymptomatic shoulders became symptomatic over a mean of 2.8 years, and 9 of 23 patients (39%) who underwent repeat ultrasonography had tear progression. The study results indicate that there is a considerable risk that the size or symptoms or both of an asymptomatic tear will progress without treatment. The probability of progression of an untreated cuff tear varies with tear characteristics (eg, size, location, mechanism, chronicity), the biological health of the torn tissue (vascularity, presence of DM, nicotine exposure), the status of force coupling in the shoulder (ie, intact rotator cuff cable), and the activity level of the patient.

When pain is the predominant presenting complaint, most small symptomatic tears may be managed effectively with oral antiinflammatory medication, selective use of corticosteroid injections, and rehabilitation of the shoulder to improve range of motion and to strengthen the muscles of the rotator cuff and the periscapular stabilizers. It is important to counsel patients that when a small symptomatic tear is successfully "treated" nonoperatively, the tear persists as an asymptomatic tear, rather than spontaneously healing to bone. Left unmanaged, small painful tears with intact mechanics can enlarge and lead to progressive loss of balanced force coupling because of violation of the rotator cuff cable. With this progression, the patient begins to experience a significant loss of function in addition to shoulder pain.
As the tear enlarges, fatty atrophy and retraction of the tear can worsen.26 Further enlargement of the tear may occur with a paradoxical decrease in pain. With extreme tear enlargement and loss of coracohumeral and glenohumeral ligament integrity, the humeral head may rise into the subacromial space, articulate with the acromion and, in time, lead to rotator cuff arthropathy. The probability of a patient progressing through these steps is unpredictable and based on multiple factors, but progression generally occurs at a slow pace over several years.

The course usually is more predictable in overhead athletes. Competitive throwing athletes execute many repetitions of highly demanding motions with complex mechanics. Any imbalance in the kinetic chain, including a weak subscapularis, inadequate scapular mobility or stabilization, or even throwing techniques that cause improper foot placement or arm position during throwing, can initiate the cascade toward pathology. Recognized early, these problems can be corrected through rehabilitation and training. Failure to recognize or manage these conditions can lead to progression and cause lesions that require surgical repair, including posterosuperior labral tears, anterior shoulder instability, and articular-sided rotator cuff tears. These lesions should be suspected in a throwing athlete who is not responding to rest, rehabilitation, and appropriate nonoperative treatment.

Early symptoms in the throwing athlete include slow warm-ups and stiffness, with no pain. During this time, the shoulder usually responds to rest. If the condition is allowed to progress, cuff-associated pain will occur at the start of the acceleration phase of throwing. Shoulder pain elicited on examination often is alleviated with a relocation of the humeral head to articulate concentrically with the glenoid. These and other subtleties of the shoulder examination are important features of the clinical signs and symptoms of rotator cuff disease.

Shoulder complaints often are a combination of "pain," "weakness," and "stiffness." An element of each may be caused by rotator cuff pathology. The presentation of rotator cuff disease varies with the etiology and classification of the cuff pathology, ranging from traumatic rotator cuff tears to articular-sided partial-thickness tears associated with "internal impingement" in a thrower or overhead athlete to a classic impingement process with an insidious onset resulting from repetitive overhead activities. A traumatic tear usually is the result of abrupt traction or impact on the arm that produces immediate pain and weakness in a previously "normal" shoulder. An overhead athlete or thrower with articular-sided rotator cuff tendon pathology will describe a progression of slow warm-ups, stiffness, pain, and loss of performance (eg, fastball velocity or accuracy) as glenohumeral joint imbalance or scapular dyskinesis worsens.

Patients who are progressing along the spectrum of cuff impingement syndrome toward a full cuff tear have a different presentation. Structural abnormalities combined with dynamic events (eg, repetitive overhead activity), perhaps with coexisting intrinsic biological problems in the cuff tendon (eg, poor vascular supply15), often lead to a gradual onset of symptoms that worsen with overhead activities or during sleep directly on the affected shoulder. In these patients, the initial symptom is pain, which produces a sense of weakness.
True weakness evolves with increased pain as a cuff tear develops and may worsen if the tear enlarges to involve the rotator cuff cable. In large, chronic, unmanaged tears, patients experience both pain and weakness; with a chronically nonfunctioning atrophied cuff, they often complain more of weakness than of pain.

When a patient with suspected or known rotator cuff disease is evaluated, a thorough shoulder examination helps the clinician classify the disorder, assess contributing features of the symptoms, and identify potential solutions. Many patients with rotator cuff disease or injury have tenderness at the anterolateral aspect of the humerus at the insertion of the supraspinatus tendon, as well as positive impingement signs, as described by Neer27 and Hawkins and Kennedy.28 In addition, evaluating the long head of the biceps tendon for tenderness and stability is important. Biceps tendon injuries are a common source of anterior shoulder pain; a coexisting subscapularis tear can involve the biceps stabilizing sling, which can lead to painful snapping of the biceps tendon out of the anterior humeral groove.29 Acromioclavicular joint tenderness is a common contributor to shoulder pain. Although degenerative changes seen in this joint on x-ray films often are not painful, bone spurs in this area often promote impingement of the underlying supraspinatus tendon.

Strength testing of the rotator cuff tendons should isolate each tendon to the greatest extent possible. A patient with a large tear that results in loss of the balanced force coupling across the humeral head often may be able only to shrug the shoulder in an effort to raise the arm. Specific weakness or pain elicited with resisted internal rotation, abduction, or external rotation may be found with isolated tears of the subscapularis, supraspinatus, or infraspinatus, respectively. Often the pain associated with the use of a torn muscle causes weakness. The subscapularis is responsible for internal rotation of the shoulder and can be tested in isolation. The "belly-press," or Napoleon, test requires that the patient press the hand into the belly.30,31 During this maneuver, the examiner must maintain a straight position of the patient's wrist and prevent the patient from pulling the elbow posteriorly in an effort to compensate for a torn subscapularis.
Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (such as aluminium, manganese, nickel or zinc) and sometimes non-metals or metalloids such as arsenic, phosphorus or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as strength, ductility, or machinability.

The archeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in India and western Eurasia is conventionally dated to the mid-4th millennium BC, and to the early 2nd millennium BC in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting from about 1300 BC and reaching most of Eurasia by about 500 BC, although bronze continued to be much more widely used than it is in modern times.

Because historical pieces were often made of brasses (copper and zinc) and bronzes with different compositions, modern museum and scholarly descriptions of older objects increasingly use the generalized term "copper alloy" instead.

The word "bronze" is traced to either:
- bróntion, a back-formation from Byzantine Greek brontēsíon (βροντησίον, 11th century), perhaps from Brentḗsion (Βρεντήσιον, 'Brindisi', reputed for its bronze); or originally:
- in its earliest form, Old Persian birinj, biranj (برنج, 'brass', modern berenj) and piring (پرنگ) 'copper', from which also came Georgian brinǯi (ბრინჯი), Turkish pirinç, and Armenian brinj (բրինձ), also meaning 'bronze'.

The discovery of bronze enabled people to create metal objects that were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors.

Initially, bronze was made out of copper and arsenic, forming arsenic bronze, or from naturally or artificially mixed ores of copper and arsenic, with the earliest artifacts so far known coming from the Iranian plateau in the 5th millennium BC. It was only later that tin was used, becoming the major non-copper ingredient of bronze in the late 3rd millennium BC. Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike arsenic, metallic tin and fumes from tin refining are not toxic. The earliest tin-alloy bronze dates to 4500 BC in a Vinča culture site in Pločnik (Serbia). Other early examples date to the late 4th millennium BC in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran) and Mesopotamia (Iraq).

Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in Britain, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean.

In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes, are found, which mostly show no signs of wear.
With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.

Though bronze is generally harder than wrought iron, with Vickers hardness of 60–258 vs. 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BC reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger than bronze and holds a sharper edge longer. Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day.

There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver – ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The Benin Bronzes are in fact brass, and the Romanesque Baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass.

In the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; and "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.

Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications. Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance.

Bronzes are typically ductile alloys, considerably less brittle than cast iron. Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. However, if copper chlorides are formed, a corrosion mode called "bronze disease" will eventually completely destroy it. Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminium or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys.

Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties.
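The silicon bronze figures above amount to a simple composition specification. As a rough illustration only – the element ranges below simply restate the percentages quoted above, while the 90% copper sanity threshold is an assumption added for the sketch – a short Python snippet can test whether a measured composition falls within those ranges:

# Minimal sketch: validate a composition against the silicon bronze
# ranges quoted above. All values are mass percentages; Cu is the balance.
SILICON_BRONZE_SPEC = {
    "Si": (2.80, 3.80),  # required range
    "Mn": (0.50, 1.30),  # required range
    "Fe": (0.00, 0.80),  # maximum
    "Zn": (0.00, 1.50),  # maximum
    "Pb": (0.00, 0.05),  # maximum
}

def check_silicon_bronze(composition):
    """Return (element, value, reason) tuples for any out-of-spec values.

    `composition` maps element symbols to mass percentages; copper is
    taken as the balance (100% minus everything else).
    """
    problems = []
    for element, (low, high) in SILICON_BRONZE_SPEC.items():
        value = composition.get(element, 0.0)
        if not low <= value <= high:
            problems.append((element, value, f"outside {low}-{high}%"))
    copper_balance = 100.0 - sum(composition.values())
    if copper_balance < 90.0:  # assumed sanity check: copper should dominate
        problems.append(("Cu", copper_balance, "balance implausibly low"))
    return problems

sample = {"Si": 3.1, "Mn": 1.0, "Fe": 0.3, "Zn": 0.2, "Pb": 0.02}
print(check_silicon_bronze(sample) or "within quoted ranges")

Run on the hypothetical sample above, the check passes; raising Pb to 0.1 would flag it as outside the 0.00-0.05% range.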
Some common examples are the high electrical conductivity of pure copper, the low-friction properties of bearing bronze (bronze that has a high lead content of 6–8%), the resonant qualities of bell bronze (20% tin, 80% copper), and the resistance to corrosion by seawater of several bronze alloys. The melting point of bronze varies depending on the ratio of the alloy components and is about 950 °C (1,742 °F). Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties.

Bronze, or bronze-like alloys and mixtures, were used for coins over a long period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.

In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows colour-matched repair of defects in castings. Aluminium is also used for the structural metal aluminium bronze.

Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings.

Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolour oak.

Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminium bronze is hard and wear-resistant, and is used for bearings and machine tool ways.

Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould.

Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive.

In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism.
The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai. In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty – more often ceremonial vessels but including some figurine examples. Bronze sculptures, although known for their longevity, still undergo microbial degradation, such as from certain species of yeasts. Bronze continues into modern times as one of the materials of choice for monumental statuary.

Gallery examples:
- Etruscan tripod base for a thymiaterion (incense burner); 475-450 BC; bronze; height: 11 cm; Metropolitan Museum of Art
- Pair of French Rococo firedogs (chenets); circa 1750; gilt bronze; dimensions of the first: 52.7 x 48.3 x 26.7 cm, of the second: 45.1 x 49.1 x 24.8 cm; Metropolitan Museum of Art
- French Neoclassical mantel clock (pendule de cheminée); 1757–1760; gilded and patinated bronze, oak veneered with ebony, white enamel with black numerals, and other materials; 48.3 × 69.9 × 27.9 cm; Metropolitan Museum of Art
- Pair of French Chinoiserie firedogs; 1760–1770; gilt bronze; height (each): 41.9 cm; Metropolitan Museum of Art
- Winter; by Jean-Antoine Houdon; 1787; bronze; 143.5 x 39.1 x 50.5 cm, height of the pedestal: 86.4 cm; Metropolitan Museum of Art

Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. The reflecting surface was typically made slightly convex so that the whole face could be seen in a small mirror. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries. Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BC). In Europe, the Etruscans were making bronze mirrors in the sixth century BC, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, bronze mirrors were still being made in Japan in the eighteenth century AD.

Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.

Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on the pianoforte for the lower-pitched tones, as they possess a superior sustain quality to that of high-tensile steel.

Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, gongs, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BC, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3,600 BC. Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content).
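Because alloys such as B20 and B8 are defined by simple mass fractions, the constituent masses for a given casting or blank weight follow by direct multiplication. A minimal Python sketch, assuming the nominal compositions quoted above and ignoring trace elements such as silver:

def constituent_masses(total_kg, fractions):
    """Split a total casting mass into per-metal masses from mass fractions."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {metal: total_kg * frac for metal, frac in fractions.items()}

# Nominal compositions quoted above (trace elements ignored).
B20 = {"Cu": 0.80, "Sn": 0.20}  # bell bronze, common in professional cymbals
B8 = {"Cu": 0.92, "Sn": 0.08}   # the tougher cymbal alloy

# A hypothetical 2 kg B20 cymbal blank works out to 1.6 kg copper, 0.4 kg tin.
print(constituent_masses(2.0, B20))
print(constituent_masses(2.0, B8))

The same arithmetic applies to any of the fixed-ratio alloys mentioned earlier, such as the 88/12 "modern bronze" or 90/10 commercial bronze.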
Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy (usually 3 lbs.) folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.

Bronze has also been used in coins; most "copper" coins are actually bronze, with about 4 percent tin and 1 percent zinc. As with coins, bronze has been used in the manufacture of various types of medals for centuries, and bronze medals are known in contemporary times as awards for third place in sporting competitions and other events. The latter usage was in part attributed to the choice of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, where youth lasted a hundred years; and the Bronze Age, the era of heroes. The format was first adopted at the 1904 Summer Olympics; at the 1896 event, silver was awarded to winners and bronze to runners-up, while in 1900 other prizes were given rather than medals.
"The Belgammel Ram, a Hellenistic-Roman BronzeProembolionFound off the Coast of Libya: test analysis of function, date and metallurgy, with a digital reference archive" (PDF). International Journal of Nautical Archaeology. 42 (1): 60–75. CiteSeerX 10.1.1.738.4024. doi: 10.1111/1095-9270.12001. Archived (PDF) from the original on 2016-08-28. - ASTM B124 / B124M – 15. ASTM International. 2015. - "Bronze Disease, Archaeologies of the Greek Past". Archived from the original on 26 February 2015. Retrieved 14 September 2014. - A. Alavudeen; N. Venkateshwaran; J. T. Winowlin Jappes (1 January 2006). A Textbook of Engineering Materials and Metallurgy. Firewall Media. pp. 136–. ISBN 978-81-7008-957-5. Archived from the original on 10 June 2016. Retrieved 25 June 2013. - Resources: Standards & Properties – Copper & Copper Alloy Microstructures: Phosphor Bronze Archived 2015-12-08 at the Wayback Machine - Resources: Standards & Properties – Copper & Copper Alloy Microstructures: Aluminum Bronzes Archived 2013-12-05 at the Wayback Machine - Savage, George, A Concise History of Bronzes, Frederick A. Praeger, Inc. Publishers, New York, 1968 p. 17 - for a translation of his inscription see the appendix in Stephanie Dalley, (2013) The Mystery of the Hanging Garden of Babylon: an elusive World Wonder traced, OUP. ISBN 978-0-19-966226-5 - Indian bronze masterpieces: the great tradition: specially published for the Festival of India, Asharani Mathur, Sonya Singh, Festival of India, Brijbasi Printers, Dec 1, 1988 - Francesca Cappitelli; Claudia Sorlini (2008). "Microorganisms Attack Synthetic Polymers in Items Representing Our Cultural Heritage". Applied and Environmental Microbiology. 74 (3): 564–69. doi: 10.1128/AEM.01768-07. PMC 2227722. PMID 18065627. - Von Falkenhausen, Lothar (1993). Suspended Music: Chime-Bells in the Culture of Bronze Age China. Berkeley and Los Angeles: University of California Press. p. 106. ISBN 978-0-520-07378-4. Archived from the original on 2016-05-26. - McCreight, Tim. Metals technic: a collection of techniques for metalsmiths. Brynmorgen Press, 1992. ISBN 0-9615984-3-3 - LaPlantz, David. Jewelry – Metalwork 1991 Survey: Visions – Concepts – Communication: S. LaPlantz: 1991. ISBN 0-942002-05-9 - "www.sax.co.uk". Archived from the original on 11 August 2014. Retrieved 18 September 2014. - Roger H. Siminoff, Siminoff's Luthiers Glossary (NY: Hal Leonard, 2008), 13. ISBN 9781423442929 - "bronze | alloy". Archived from the original on 2016-07-30. Retrieved 2016-07-21.
Longing, we say, because desire is full of endless distances.
Robert Hass, “Meditation at Lagunitas”

1. Whale souls

Silent reading, as we know it in the West, is a relatively recent phenomenon, says Ivan Illich, popularized only in the 12th century by the influential French abbot Hugh of St. Victor. Prior to that time, even a monk reading alone in his cell would sound the words out. How else could their full power be felt?

The origin of silent prayer is not so easy to pinpoint. The practice of meditation in one form or another is probably as old as the hunter’s profession. Such meditations – of necessity silent, so as not to spook the game – need not be the especial province of men, either. A while back I quoted Tom Lowenstein (Ancient Land, Sacred Whale, FSG, 1993) about the bowhead hunt as formerly practiced by the Inuit of Tikigaq, in North Alaska. The male and female skinboat owners (umialik) had equally potent roles in visualizing and conjuring the prey animal:

In contrast to her “Raven” husband’s freedom on the sea, the woman umialik stayed at home in her iglu and did nothing for most of the whale hunt. This, in essence, was her mythic role. Secluded and overtly idle like the uiluaqtaq of the story, the umialik woman was completely passive. . . . Within this inertia lay shamanistic power. How this functioned may be seen in the umialik’s parallel actions. The woman’s springtime ritual in fact started on the sea-ice. On the first day of the hunt, when the male crew left Tikigaq, the woman walked ahead to the open water. With the help of the woman the crew would have found a good place to wait, and the woman lay down on the ice with her head pointing toward Tikigaq while the men embarked and pushed off from the ice. After travelling a short distance the steersman brought his boat round and returned to the ice-edge. Silently, the harpooner leaned over the prow, dipped his weapon in the water opposite the woman and then touched her parka. When she had been “struck”, the woman got up and, without looking back, walked home. The moment she reached her iglu the woman ceased activity, and for the rest of the hunt sat passively on the sleeping bench. . . . While her posture on the ice had resembled the rising whale and the position of her head indicated the direction from which the whale must come, woman had been the whale’s body. In her ritual tranquility she now enacted the whale’s soul. Not only did she transmit to the whales the generous passivity that whales were supposed to feel towards their hunters, but she already was the whale’s soul, resident within her Tikigaq iglu, suspended between the conditions of life and death that the hunt counterpoised and made sacred.

It is difficult for most of us to grasp the depth of affection one might feel toward an Animal whose body is not only food but also the Land itself. How the overtly active man and the overtly passive woman together contrive to weave a net of longing for the beloved animal was at the heart of the annual drama of the Tikigaq Inuit.

Quiet as the woman remains, she and her husband are in balanced partnership. . . . But the whale brought home through the shared operation implies a third partner in the myth-role. This third partner is the land itself. Land, like the woman, is externally quiet but dynamic within. And like all symbols of the whale hunt the land remains ambiguous. . . . Tikigaq [peninsula] is primal sea-beast, its iglus microcosmic versions of the whale and the sea-beast.
When a Tikigaq harpooner strikes, the land whale stirs; when the katak [iglu entrance hole] gives birth with the death of a bowhead the whale in the katak is both Tikigaq nuna [land], and bowhead, and just katak.

As Lowenstein’s informants put it:

These small whales, inutuqs, small round fat ones come to us from down there, from their country south of us. The women sit at home. They are whale souls in their iglus. The whales listen and sing. They hear Tikigaq singing. Listen to the north wind! Listen to the sea-ice! Listen to the inutuq

2. A mindful god

According to Jewish tradition, silent prayer was invented by a woman.

Now there was a certain man of Ramathaimzophim, of mount Ephraim, and his name was Elkanah, the son of Jeroham, the son of Elihu, the son of Tohu, the son of Zuph, an Ephrathite. And he had two wives; the name of the one was Hannah, and the name of the other Peninnah: and Peninnah had children, but Hannah had no children. And this man went up out of his city yearly to worship and to sacrifice unto the LORD of hosts in Shiloh. And the two sons of Eli, Hophni and Phinehas, the priests of the LORD, were there. And when the time was that Elkanah offered, he gave to Peninnah his wife, and to all her sons and her daughters, portions. But unto Hannah he gave a worthy portion; for he loved Hannah: but the LORD had shut up her womb. And her adversary [i.e. Peninnah] also provoked her sore, for to make her fret, because the LORD had shut up her womb. And as he did so year by year, when she went up to the house of the LORD, so she provoked her; therefore she wept, and did not eat. Then said Elkanah her husband to her, Hannah, why weepest thou, and why eatest thou not, and why is thy heart grieved? Am not I better to thee than ten sons? So Hannah rose up after they had eaten in Shiloh, and after they had drunk. Now Eli the priest sat upon a seat by a post of the temple of the LORD. And she was in bitterness of soul, and prayed unto the LORD, and wept sore. And she vowed a vow, and said, O LORD of hosts, if thou wilt indeed look on the affliction of thine handmaid, and remember me, and not forget thine handmaid, but wilt give unto thine handmaid a man child, then I will give him unto the LORD all the days of his life, and there shall no razor come upon his head. And it came to pass, as she continued praying before the LORD, that Eli marked her mouth. Now Hannah, she spake in her heart; only her lips moved, but her voice was not heard: therefore Eli thought she had been drunken. And Eli said unto her, How long wilt thou be drunken? Put away thy wine from thee. And Hannah answered and said, No, my lord, I am a woman of a sorrowful spirit: I have drunk neither wine nor strong drink, but have poured out my soul before the LORD. Count not thine handmaid for a daughter of Belial: for out of the abundance of my complaint and grief have I spoken hitherto. Then Eli answered and said, Go in peace: and the God of Israel grant thee thy petition that thou hast asked of him. And she said, Let thine handmaid find grace in thy sight. So the woman went her way, and did eat, and her countenance was no more sad. And they rose up in the morning early, and worshipped before the LORD, and returned, and came to their house to Ramah: and Elkanah knew Hannah his wife; and the LORD remembered her. Wherefore it came to pass, when the time was come about after Hannah had conceived, that she bare a son, and called his name Samuel, saying, Because I have asked him of the LORD.
And the man Elkanah, and all his house, went up to offer unto the LORD the yearly sacrifice, and his vow. But Hannah went not up; for she said unto her husband, I will not go up until the child be weaned, and then I will bring him, that he may appear before the LORD, and there abide for ever. And Elkanah her husband said unto her, Do what seemeth thee good; tarry until thou have weaned him; only the LORD establish his word. So the woman abode, and gave her son suck until she weaned him. And when she had weaned him, she took him up with her, with three bullocks, and one ephah of flour, and a bottle of wine, and brought him unto the house of the LORD in Shiloh: and the child was young. And they slew a bullock, and brought the child to Eli. And she said, Oh my lord, as thy soul liveth, my lord, I am the woman that stood by thee here, praying unto the LORD. For this child I prayed; and the LORD hath given me my petition which I asked of him. Therefore also I have lent him to the LORD; as long as he liveth he shall be lent to the LORD. And he worshipped the LORD there.

Thus the King James Bible, 1 Samuel 1. For Hannah’s song of thanksgiving – model for Miriam’s song, the Magnificat (Luke 1:46-55) – let’s turn to the Anchor Bible translation by P. Kyle McCarter, Jr.

My heart exults in Yahweh!
My horn is raised by my god!
My mouth is stretched over my enemies!
I rejoice in my vindication.
For there is no holy one like Yahweh,
And no mountain like our god!
Do not speak haughtily
Or let arrogance out of your mouth.
For Yahweh is a mindful god,
And a god who balances his actions:
The bows of the mighty are broken,
While the feeble are girded with armor;
The sated have hired out for bread,
While the hungry are fattened on food;
The childless wife has borne seven,
While the mother of many sons is bereaved.
It is Yahweh who slays and quickens,
Who sends down to Sheol and brings up.
It is Yahweh who makes poor and makes rich,
Who debases and also exalts;
Who raises the poor from the dust,
From the scrap heaps lifts the needy,
To give them a seat with noblemen
And grant them a chair of honor.
For the straits of the earth are Yahweh’s . . .

3. Red wedding

Behold, a female anthropologist married a god, a warrior deity of a people doubly exiled: first from Africa, then from Haiti. Like the god of Israel in exile ramifying into the ten-fold sefirot, this god too has subdivided.

As Sen Jak Majè (Saint James the Elder), Ogou is a “man of war” who fights for what is right and just. As Ogou Panama, he is a pèsònaj (an important person) who demands to be treated with ceremony and deference. As Ogou Ferray, he is fierce and uncompromising. As Ogou Badagri, he is shy, handsome, brave and loyal. Yet, as Ogou Yamson, he is an unreliable drunkard who finds power in booze and swaggering talk; and, as Agèou, he is a liar and beggar. And when Ogou is called by the names Achade or Shango (the two are sometimes conflated into one character), he is said to be a sorcerer. (Karen McCarthy Brown: Mama Lola: A Vodou Priestess in Brooklyn, University of California Press, 1991)

There are many more Ogou besides these, says Brown. What qualities unite them?

Ogou teaches that to live one must fight. Pride, endurance, self-assertion, discipline, and a firm commitment to justice are qualities that bring success. But in one turn of the screw, pride can become braggadocio, endurance can become stubbornness, self-assertion fades into mere bullying, and discipline is transformed into tyranny.
An overly developed sense of justice, one that is tempered neither by humor nor by graceful resignation, can lead to suicidal rage. . . . Because the constructive and destructive parts of Ogou’s character are so close together, none of the various Ogou is good or evil, right or wrong, in a simple, unqualified way. Each contains his own paradoxes of personality, which are teased out in possession-performance and in song. In July of 1979, for example, [Brown’s priestess] Alourdes’s community sang a lively song for all the Ogou: Papa Ogou, tou piti kon sa. Papa Ogou, anraje. Papa Ogou, all children are like that. Papa Ogou, enraged. Such lean phrasing, replete with double and triple entendre, is characteristic of Vodou songs. From one perspective, Ogou is counseled in this song to show forbearance toward his children, his followers. From another, Ogou is a strutting banty rooster who throws childish tantrums when he cannot have his way. As with the storm-god Yahweh’s evolution into the LORD of Rabbinical Judaism and Christianity, when the African gods crossed the ocean, they “submerged their connections to the natural world and elaborated their social messages.” But they did not at the same time retreat into an ever-more remote heaven, accessible only to true believers and only in the afterlife. Quite the opposite: the gods became more down-to-earth and accessible, entering directly into the bodies of their followers for frequent dramatic performances that mingled high seriousness and low comedy. And whereas the People of the Book stress the believer’s inner intention, all one needs to bring to the Vodou spirits (in addition to the appropriate offerings) is an open mind. “Try it and see if it work for you,” the priestess Alourdes urges her clients. Vodou practitioners have little use for abstractions. “Vodou seldom halts its kinaesthetic and sensory drama to force its wisdom into concept or precept; proverbs, anecdotes, ancestral tales, and songs are the only vehicles subtle and flexible enough to cradle the messages when the truths of Vodou are put into words,” Brown notes. In this respect, it resembles indigenous and village-based religions the world over. In some sense, can we not agree with the ancient Chinese philosophers who argued that when religion has to promulgate formal precepts, it’s simply a sign that society has entered a crisis phase, a breakdown in communal norms? Viewed from the ethnocentric perspective of inhabitants of an urbanized civilization, we tend to view “tribal” religions as earlier stages in a progressive evolution leading (of course) to us, and perhaps beyond. But which kind of religion tends more closely to reflect the true, mind-boggling complexity of nature? Is the desert- or alpine-dweller’s longing for transcendent Godhead, moksha or nirvana automatically superior to the Vodou priestess’s regular experiences of immanence within a rainforest-like profusion of sacred roles? But of course, there’s no reason to see transcendence and immanence as necessarily opposed; Haitians certainly don’t. Virtually all Vodou devotees consider themselves good Catholics. They would disagree strongly with my use of the term “god” for Ogou – he is considered a spirit. Bondye (God) is singular and supreme in Haitian Vodou. He is a deity with roots in the Christian god as well as in the so-called high gods of West Africa. Yet in the Haitian view of things, Bondye, like his African models, rarely gets involved with individual human lives. 
Attention to the everyday drama of life is the work of his “angels,” the Vodou spirits. . . .

In Vodou, as in virtually all religions, “the spirits select their special devotees, not the reverse.” In fact, I suggest that if we are to draw any meaningful distinction whatsoever between religion and magic, this question of who selects whom would make an excellent criterion. The sorcerer commands and attempts to exert control over the animating forces of the universe with little concern for their own sovereignty or well-being. The religious person petitions, offers sacrifice, bows in thanksgiving, offers devotion. The religious person partakes; the sorcerer consumes. For in many, many cultures the relationship to the sacred finds symbol and expression in the most essential forms of union: eating and making love.

Within Alourdes’s group of special spirits, one stands out. He is her mèt tèt (the master of her head), Ogou Badagri. But the dominance of Ogou Badagri in her life does not go unchecked. . . . For example, even if a situation has called out the aggression of the Ogou in Alourdes, Gede can possess her and put the matter in an entirely different light through his iconoclastic humor. . . . Because Alourdes has gone through the Vodou marriage to Ogou Badagri, she calls him her “husband.” She sets aside one night a week for him. On this night, she receives the handsome soldier in dreams, and no human lover shares her bed. . . . The most striking part of Ogou Badagri’s character is his ability to endure in the face of trials that would break many others. . . . Forsaking attack, Alourdes, like Badagri, chooses wakefulness. She draws her power around her like a cloak, holding it close to her body. She does not dream of extending herself outward and conquering the world. Rather, she controls what experience has taught her she is able to control – herself.

The anthropologist too has Ogou around her head. From the very beginning of her involvement with Vodou, she says, “every priest or priestess who chose to make a diagnosis told me that Papa Ogou was my mèt tèt.”

Although I had witnessed many Vodou marriages and been fascinated by them, I originally had no intention of going through the ritual myself. Then, one day in 1980 when I was alone in my apartment and full of rage (I had some things to be angry about at that period of my life), I found myself muttering, “Stop trying to make the anger go away. It only makes it worse. It’s yours. Marry it!” I picked up the phone and called Alourdes.

Brown resolved to do, as she put it, “fieldwork on my own psyche.” Alourdes performed divination, diagnosing her as suffering from a blockage of will or energy. She thinks too much, acts too timidly. As Brown explains, “a life of energy or flow” is the Vodou ideal. “The goal of all Vodou ritualizing is to echofe (heat things up) so that people and situations shift and move, and healing transformations can occur.” The marriage took place the next month at Ogou’s regularly scheduled July birthday party.

Around two o’clock in the morning, when the songs summoning Ogou began, I excused myself from the twenty-five or so people gathered around Alourdes’s sumptuous altar tables. I went upstairs to change into my wedding clothes – a bright red sundress purchased especially for the occasion and, on my head, a red satin scarf. When I came down the stairs half an hour later, everyone oohed and aahed over my fine attire. Everyone, that is, except Papa Ogou.
He had mounted Alourdes in my absence, and I found him decked out in his own finery, his red velvet military jacket with the gold epaulets. But Ogou ignored me. I stood by patiently while he talked to one person after another without acknowledging my presence. No matter how I maneuvered, he always managed to keep his back to me. Everyone was getting nervous. One woman said, “Papa Ogou, your beautiful bride is here, behind you. Don’t you want to talk to her?” Ogou ignored the question. Then a man whispered in my ear, “Go on!” and gave me a shove in front of Ogou. The spirit looked at me with a cold eye. “What do you want?” he asked. I found my voice. “I am here to marry you. You promised you would marry me. You have made me wait a long time. I am ready.” Papa Ogou threw back his head and laughed. It was a deep, rich laugh. “Begin the ceremony!” he shouted, and, taking my arm, he propelled me toward the largest of the altar tables. Once again, Ogou had taught me the warrior’s lesson: know what you want and fight for it.

4. Pronouncing no name

The African American poet Lucille Clifton composed a moving series of poems on her husband Fred’s death from leukemia at the age of 49. They are included in her book Next (BOA Editions, 1987). Toward the end, Lucille’s own voice has become submerged in the voice of her dying husband:

leukemia as dream/ritual

it is night in my room. the woman beside me is dying. a small girl stands at the foot of my bed. she is crying and carrying wine and a wafer. her name is the name i would have given the daughter i would have liked to have had. she grieves for herself and not for the woman. she mourns the future and not the past. she offers me her small communion. i roll the wafer and wine on my tongue. i accept my body. i accept my blood. eat she whispers. drink and eat. something is growing in the strong man. it is blooming, they say, but not a flower. he has planted so much in me. so much. i am not willing, gardener, to give you up to this.

the death of fred clifton

i seemed to be drawn to the center of myself leaving the edges of me in the hands of my wife and i saw with the most amazing clarity so that i had not eyes but sight, and, rising and turning through my skin, there was all around not the shapes of things but oh, at last, the things themselves.

“i’m going back to my true identity”

i was ready to return to my rightful name. i saw it hovering near in blazoned script and, passing through fire, i claimed it. here is a box of stars for my living wife. tell her to scatter them pronouncing no name. tell her there is no deathless name a body can pronounce.
Rationale: Medical masks are commonly used by sick individuals with influenza-like illness (ILI) to prevent spread of infections to others, but clinical efficacy data are absent.

Objective: Determine whether medical mask use by sick individuals with ILI protects well contacts from related respiratory infections.

Setting: 6 major hospitals in 2 districts of Beijing, China.

Design: Cluster randomised controlled trial.

Participants: 245 index cases with ILI.

Intervention: Index cases with ILI were randomly allocated to medical mask (n=123) and control arms (n=122). Since 43 index cases in the control arm also used a mask during the study period, an as-treated post hoc analysis was performed by comparing outcomes among household members of index cases who used a mask (mask group) with household members of index cases who did not use a mask (no-mask group).

Main outcome measure: Primary outcomes measured in household members were clinical respiratory illness, ILI and laboratory-confirmed viral respiratory infection.

Results: In an intention-to-treat analysis, rates of clinical respiratory illness (relative risk (RR) 0.61, 95% CI 0.18 to 2.13), ILI (RR 0.32, 95% CI 0.03 to 3.13) and laboratory-confirmed viral infections (RR 0.97, 95% CI 0.06 to 15.54) were consistently lower in the mask arm compared with control, although not statistically significant. A post hoc comparison between the mask and no-mask groups showed a protective effect against clinical respiratory illness, but not against ILI and laboratory-confirmed viral respiratory infections.

Conclusions: The study indicates a potential benefit of medical masks for source control, but is limited by small sample size and low secondary attack rates. Larger trials are needed to confirm efficacy of medical masks as source control.

Trial registration number: ACTRN12613000852752; Results.

Strengths and limitations of this study

- Medical masks are commonly used to prevent spread of infection from sick individuals to others; however, data on the clinical efficacy of this approach are sparse.
- A cluster-randomised controlled trial was conducted to examine the efficacy of medical masks as source control.
- The sample size was small and the study was underpowered to detect a statistically significant difference in outcome in the intention-to-treat analysis.
- Removal of masks in the intervention arm during meal times may have reduced efficacy and biased the results towards the null.
Medical masks are commonly used in healthcare settings for two main purposes: (1) by well healthcare workers (HCWs) to protect them from infections transmitted by droplet route and splash and spray of blood and body fluids; and (2) by sick individuals to prevent transmission to others (source control).1,2 There are currently major gaps in our knowledge about the impact of masks on the transmission of respiratory infections.3 Most clinical trials have focused on the protection of the well wearer, rather than on source control.3 Cloth and medical masks were originally developed as source control to prevent contamination of sterile sites by the wearer in operating theatres (OTs);4,5 however, their effectiveness in preventing surgical site infections is yet to be proven.6–8 Although masks are also widely used in the community to prevent spread of infection from sick and infectious people,4,9–12 the majority of data on their use are observational and derived from outbreaks and pandemics. Among the nine randomised controlled trials (RCTs) conducted to date in household and community settings,3 only one examined the role of masks as source control, and it was inconclusive.13 In other clinical trials, masks were either used by both sick patients (index cases as source control) and their household members14–16 or only by household members.17–19 Most of these studies failed to show any efficacy of mask use in preventing spread of infections from the sick individuals. Masks are also used to prevent surgical site infections in the OT,3 although most studies failed to show any efficacy against this indication.6–8,20 Only one clinical trial reported high infection rates after surgery when masks were not used by the surgeon in the OT.21 Among the five clinical trials in the healthcare setting to test the efficacy of masks/respirators as respiratory protection,3 none examined the use of masks as source control. Laboratory studies generally support the use of medical masks to prevent spread of infections from patients with influenza and tuberculosis (TB) to their contacts.22–24 Mask use as source control in healthcare settings has now been included in standard infection control precautions during periods of increased respiratory infection activity in the community, yet there is no clinical efficacy evidence to support this recommendation.

The aim of this study was to determine whether medical mask use by people with ILI in a community setting protects well contacts from infection. An RCT was conducted in fever clinics in six major hospitals in two districts of Beijing, China. The fever clinics are outpatient departments for the assessment and treatment of febrile patients. Recruitment of participants started on 18 November 2013 and was completed on 20 January 2014. Adults who attended the fever clinic were screened by hospital staff to determine whether they were eligible for the study. A study staff member approached eligible patients when they presented in the clinic and invited them to participate in the study. Recruited patients meeting the case definition of ILI (see below) were referred to as index cases, that is, the first cases in potential chains of infection transmission.
Patients aged 18 years and older (index cases) with ILI (defined as fever ≥38°C plus one respiratory symptom, including cough, nasal congestion, runny nose, sore throat or sneezes) who attended a fever outpatient clinic during the study period, had no history of ILI among household members in the prior 14 days and who lived with at least two other people at home were recruited for the study. ILI was used as a selection criterion to achieve high specificity for index cases. Patients who were unable or refused to give consent, had onset of symptoms >24 hours prior to recruitment, were admitted to hospital, resided in a household with <2 other people, or had other ill household members at home were excluded from the study.

After providing informed consent, 245 index cases were included and randomly allocated to intervention (mask) and control (no-mask) arms. A research team member (YZ) generated the random allocation sequence using Microsoft Excel, and doctors enrolled the participants into the intervention and control arms. Patients had an equal chance of being in either the intervention or control arm. One hundred and twenty-three index cases and 302 household contacts were included in the mask (source control) arm, and 122 index cases and 295 household contacts were included in the control arm (figure 1). Cases and their household contacts were assigned together as a cluster to either the intervention or control arm. The mask or no-mask intervention was applied to the index cases, and respiratory illness was measured in household contacts.

Index cases (patients with ILI) in the intervention arm wore a medical mask at home. Index cases were asked to wear a mask (3M 1817 surgical mask) whenever they were in the same room as a household member or a visitor to the household. They were allowed to remove their masks during meal times and while asleep. Index cases were shown how to wear the mask and instructed to wash their hands when donning and doffing the mask. Index cases were provided with 3 masks per day for 7 days (21 masks in total). They were informed that they could cease wearing a mask once their symptoms resolved. Index cases in the control arm did not receive any intervention. Mask use by other household members was not required and not reported.

Respiratory illness outcomes were measured in household contacts of the index cases. Primary end points measured in household contacts included: (1) clinical respiratory illness (CRI), defined as two or more respiratory symptoms (cough, nasal congestion, runny nose, sore throat or sneezes) or one respiratory symptom and a systemic symptom (chill, lethargy, loss of appetite, abdominal pain, muscle or joint aches); (2) ILI, defined as fever ≥38°C plus one respiratory symptom; and (3) laboratory-confirmed viral respiratory infection, defined as detection of adenoviruses, human metapneumovirus, coronaviruses 229E/NL63 and OC43/HKU1, parainfluenzaviruses 1, 2 and 3, influenza viruses A and B, respiratory syncytial virus A and B, or rhinovirus A/B by nucleic acid testing (NAT) using a commercial multiplex PCR (Seegen, Seoul, Korea).25–27 If any respiratory or systemic symptoms occurred in household members, index cases were instructed to notify the study coordinator. Symptomatic household members were asked to complete ‘sick follow-up’ questionnaires, and anyone who met the CRI definition was tested for laboratory-confirmed viral respiratory infections.
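The two clinical case definitions are simple to state in code. The following is a minimal illustrative sketch only; the `classify` helper and its exact symptom strings are assumptions for demonstration and are not part of the study's protocol or analysis code:

```python
# Illustrative encoding of the trial's outcome definitions (an assumption for
# demonstration, not the authors' code). Symptom names mirror the paper's lists.

RESPIRATORY = {"cough", "nasal congestion", "runny nose", "sore throat", "sneezes"}
SYSTEMIC = {"chill", "lethargy", "loss of appetite", "abdominal pain",
            "muscle or joint aches"}

def classify(symptoms, temperature_c):
    """Apply the CRI and ILI case definitions to one household contact."""
    n_resp = len(set(symptoms) & RESPIRATORY)
    n_sys = len(set(symptoms) & SYSTEMIC)
    # CRI: two or more respiratory symptoms, or one respiratory plus one systemic
    cri = n_resp >= 2 or (n_resp >= 1 and n_sys >= 1)
    # ILI: fever of at least 38 degrees C plus at least one respiratory symptom
    ili = temperature_c >= 38.0 and n_resp >= 1
    return {"CRI": cri, "ILI": ili}

# Example: a contact with fever, cough and lethargy meets both definitions.
print(classify({"cough", "lethargy"}, 38.4))  # {'CRI': True, 'ILI': True}
```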
Data collection and follow-up

At baseline, detailed clinical and demographic information including household structure was collected from index cases and their household members. This included age, sex, smoking history, comorbidities, medications, hand washing practices, influenza vaccination and normal practices around the use of masks.

Follow-up period (7 days): Each index case was asked to keep a diary to record activities, symptoms and daily temperatures for 7 days. Symptoms in the household members were also recorded in the diary cards, and index cases were asked to report any symptom. The index cases were asked to contact the study coordinator if any of the following symptoms appeared in household members: cough, nasal congestion, runny nose, sore throat, sneezes, chill, lethargy, loss of appetite, abdominal pain and muscle or joint aches. The study coordinator then assessed the household member and completed a follow-up survey. Samples were obtained from all symptomatic cases. All index cases in the intervention and control arms were also asked to document compliance with mask use.26,27 Diary cards to record mask use were given to each index case, and they were asked to carry them during the day. Diary cards were returned to the investigators at the end of the study. The study coordinator also contacted index cases via telephone on every alternate day to check whether any household member developed symptoms. Assessors were not blinded, because the intervention (mask wearing) was visible. However, laboratory testing was blinded.

Sample collection and laboratory testing

Samples were collected from index patients at the time of recruitment and from symptomatic household members during follow-up. Household members were provided with an information sheet, and written consent was sought before sampling. Only those household members who provided consent were swabbed. If the sick household member was aged <18 years, consent was obtained from a parent or guardian. Swabs were taken at the home by trained investigators. Double rayon-tipped, plastic-shafted swabs were used to swab both tonsillar areas and the posterior pharyngeal wall of symptomatic participants. The swabs were then transported immediately after collection to the Beijing Centre for Disease Control (CDC) laboratories, or stored at 4°C within 48 hours if transport was delayed. Viral DNA/RNA was extracted from each respiratory specimen using the Viral Gene-spin™ Kit (iNtRON Biotechnology, Seoul, Korea) according to the manufacturer's instructions. Reverse transcription was performed using the RevertAid™ First Strand cDNA Synthesis Kit (Fermentas, Ontario, Canada) to synthesise cDNA. Multiplex PCR was carried out using the Seeplex RV12 Detection Kit (Seegen, Seoul, Korea) to detect adenoviruses, human metapneumovirus, coronavirus 229E/NL63 and OC43/HKU1, parainfluenzaviruses 1, 2 or 3, influenza viruses A or B, respiratory syncytial virus A or B, and rhinovirus A/B. A mixture of 12 viral clones was used as a positive control template, and sterile deionised water was used as a negative control. Viral isolation by Madin Darby Canine Kidney (MDCK) cell culture was undertaken for some of the influenza samples that were NAT positive. Specimen processing, DNA/RNA extraction, PCR amplification and PCR product analyses were conducted in different rooms to avoid cross-contamination.

In this cluster-randomised design, the household was the unit of randomisation and the average household size was three people.
Assuming that the attack rate of CRI in the control households was 16–20% (based on the results of a previously published household mask trial),17 with a 5% significance level, 85% power and a minimum relative risk (RR) of 0.5 (intervention/control), 385 participants were required in each arm, corresponding to 118 households with, on average, 3 members per household. In this calculation, we assumed that the intracluster correlation coefficient (ICC) was 0.1. An estimated 250 patients with ILI were recruited into the study to allow for possible index case dropout.

Descriptive statistics were compared across the mask and control arms, and respiratory virus infection attack rates were quantified. Data from the diary cards were used to calculate person-days of infection incidence. Primary end points were analysed by intention to treat across the study arms, and the ICC for clustering by household was estimated using the clchi2 command in Stata.28 RRs were calculated for the mask arm. Kaplan-Meier survival curves were generated to compare the survival pattern of outcomes across the mask and control arms. Differences between the survival curves were assessed through the log-rank test. The analyses were conducted at the individual level, and HRs were calculated using the Cox proportional hazards model after adjusting for clustering by household by adding a shared frailty to the model. Owing to the very few outcome events encountered, a multivariable Cox model was not appropriate. We checked the effect of individual potential confounders on the outcome variable by fitting univariable Cox models. Since there were 10 cases of CRI, we included this variable in a multivariable cluster-adjusted Cox model. Multivariate analyses were not performed for ILI and laboratory-confirmed viruses because of low numbers. A total of 43 index cases in the control arm also used a mask during the study period (at least 1 hour per day) and 7 index cases in the mask arm did not use a mask at all, so a post hoc sensitivity analysis was carried out to compare outcomes among household members of index cases who used a mask (hereafter ‘mask group’) with those of index cases who did not use a mask (hereafter ‘no-mask group’). All statistical analyses were conducted using Stata V.13 (StataCorp. Stata 12 base reference manual. College Station, Texas, USA: Stata Press, 2011).

A total of 245 index patients were randomised into the mask arm (n=123) or the control arm (n=122). The mask arm had on average 2.5 household contacts per index case (n=302), while the control arm had 2.4 household contacts per index case (n=295). Characteristics of index cases and household members are presented in table 1. There was no significant difference between arms, and most characteristics, including medication use (data not shown), were generally similar. Viruses were isolated from 60% (146/245) of index cases. Influenza was the most common virus, isolated from 115 (47%) cases: influenza A in 100, influenza B in 11, and both influenza A and B in 4. The other viruses isolated from index cases were rhinovirus (13), coronavirus NL63 (11) and coronavirus 229E (7). More than one virus was isolated in 48 (20%) index cases, including 17 coinfections with influenza.

Table 2 shows the intention-to-treat analysis. CRI was reported in four (1.91/1000 person-days) household members in the mask arm, compared with six household members (2.95/1000 person-days) in the control arm (RR 0.65, 95% CI 0.18 to 2.29).
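These unadjusted estimates can be checked directly from the raw counts. The sketch below is illustrative Python, not the authors' Stata code; it recomputes the RRs for the three primary outcomes from the event counts reported in this and the following paragraph, using the usual log-normal Wald interval. Because the published figures were derived from person-day incidence rates rather than simple cumulative incidence, the two sets of numbers agree only up to rounding.

```python
# Sketch: unadjusted relative risks with Wald CIs on the log scale,
# from cumulative incidence among household contacts (assumption: 302
# contacts in the mask arm, 295 in the control arm, as reported above).
from math import exp, log, sqrt

def rr_ci(a, n1, b, n2, z=1.96):
    """Relative risk of arm 1 vs arm 2, with a 95% CI via the log-RR SE."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

for outcome, a, b in [("CRI", 4, 6), ("ILI", 1, 3), ("Lab-confirmed", 1, 1)]:
    rr, lo, hi = rr_ci(a, 302, b, 295)
    print(f"{outcome}: RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")

# Output:
# CRI: RR 0.65 (95% CI 0.19 to 2.28)
# ILI: RR 0.33 (95% CI 0.03 to 3.11)
# Lab-confirmed: RR 0.98 (95% CI 0.06 to 15.54)
```

The wide intervals make the paper's point concrete: with so few events, even a halving of risk cannot be distinguished from chance.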
Only one case (0.48/1000 person-days) of ILI was reported in the mask arm, compared with three cases (1.47/1000 person-days) in the control arm (RR 0.32, 95% CI 0.03 to 3.11). Two laboratory-confirmed infections were identified among symptomatic household members from separate households. One household member had the same infection (influenza H1N1) as the respective index case. Rhinovirus was isolated from another household member; however, no pathogen was isolated from the respective index case. The two cases of laboratory-confirmed viral respiratory infections of household members occurred in separate study arms (RR 0.97, 95% CI 0.06 to 15.5). The Kaplan-Meier curves showed no significant differences in the outcomes between the two arms (p>0.050; figure 2). The duration of contact of index cases with household members was 10.4 and 11.1 hours in the mask and control arms, respectively. On average, participants in the mask arm used a mask for 4.4 hours, while participants in the control arm used a mask for 1.4 hours. In a univariable Cox model, only the age of the household contact was significantly associated with CRI (table 3). There was no association between mask use by the index cases and rates of infectious outcomes in household members (table 3). Although the risks of CRI (RR 0.61, 95% CI 0.18 to 2.13), ILI (RR 0.32, 95% CI 0.03 to 3.13) and laboratory-confirmed viral infections (RR 0.97, 95% CI 0.06 to 15.54) were lower in the mask arm, the difference was not statistically significant.

Tables 4 and 5 show a sensitivity analysis comparing outcomes among household members of index cases using a mask (mask group) with those of index cases who did not use a mask (no-mask group). Overall, 159 index cases (65%) used a mask during the trial period, including 43 participants from the control arm. Three hundred and eighty-seven household members were included in the mask group and 210 were included in the no-mask group. Rates of all outcomes were lower in the mask group, and CRI was significantly lower in the contacts of the mask group compared with the contacts of the no-mask group. The Kaplan-Meier curves (figure 3) showed a significant difference in the rate of CRI between the mask and no-mask groups (p=0.020). After adjusting for the age of household contacts, the risk of CRI was 78% lower in the contacts of the mask group (RR 0.22, 95% CI 0.06 to 0.86), compared with contacts of the no-mask group. Although the risks of ILI (RR 0.18, 95% CI 0.02 to 1.73) and laboratory-confirmed viral respiratory infections (RR 0.11, 95% CI 0.01 to 4.40) were also lower in the mask group, the difference was not statistically significant.

Masks are commonly recommended as source control for patients with respiratory infections to prevent the spread of infection to others,2,3 but data on the clinical efficacy of this approach are sparse. We did not find a significant benefit of medical masks as source control, but rates of CRI and ILI in household members were consistently lower in the mask arm compared with the control arm. The study was underpowered to detect a statistically significant difference. The additional analysis by actual mask use showed significantly lower rates of CRI in the mask group compared with the no-mask group, suggesting that larger trials should be conducted to further examine the efficacy of masks as source control. Our findings are consistent with previous research in community and household settings, where the efficacy of masks as source control was measured.
Until now, only one RCT has been conducted in the community setting to examine the role of masks in preventing spread of infection from wearers.3 Canini and colleagues conducted an RCT in France during the 2008–2009 influenza season and randomised index patients into medical mask (52 households and 148 contacts) and control arms (53 households and 158 contacts). ILI was reported in 16.2% and 15.8% of contacts in the intervention and control arms, respectively, and the difference was not statistically significant (mean difference 0.40%, 95% CI −10% to 11%, p=1.00). The trial was concluded early due to low recruitment and the subsequent influenza A (H1N1)pdm09 pandemic.13 In addition, masks were also used by index cases and household members in some community-based RCTs with mixed interventions.14,15 Cowling and colleagues conducted two RCTs in Hong Kong to examine the efficacy of masks, and index cases were randomised into medical mask, medical mask plus hand hygiene, hand hygiene and control arms. Both index cases and household members used masks. The rates of laboratory-confirmed influenza and ILI were the same in the intervention and control groups in the intention-to-treat analysis.14 However, in the second trial, mask use with hand hygiene was protective in household contacts when the intervention was applied within 36 hours of onset of symptoms in the index case (OR 0.33, 95% CI 0.13 to 0.87).15 Since masks were used by both sick patients and their household members in these studies, the effect of the mask as 'source control' is more difficult to quantify precisely.

Masks are not designed for respiratory protection and are commonly used in the healthcare setting to prevent spread of infections from the wearer, whether worn by a sick patient or a well staff member.1,3 One such use is the wearing of masks by well surgeons and other OT staff to protect patients from contamination during surgery. Presumably, the exhaled pathogen load would be much higher in a sick patient than in a well surgeon, and therefore masks may have more benefit as source control on sick patients than as source control in the OT.

This study has some limitations. The sample size was small and the study may have been underpowered to detect a statistically significant difference in outcome in the intention-to-treat analysis. Post hoc analysis, however, showed a potential benefit of medical masks for source control. It is possible that infection transmission may have occurred during meal times (when patients were not required to wear a mask); this would have the effect of biasing the results towards the null. In the sample size calculations, we assumed a 16–20% attack rate of CRI in the control arm, based on the results of a previously published household mask trial.17 However, the secondary attack rates were much lower in this study, which might be due to testing only symptomatic cases. In a univariable Cox model, only the age of the household contact was significantly associated with CRI. All other variables were uniformly distributed among the study arms, so we only adjusted for the age of the household contact in the analysis of CRI as an outcome. Multivariate analyses were not performed for ILI and laboratory-confirmed viruses. However, some variables may have an impact on the number of events. For example, the rates of hand hygiene were higher in the control arm compared with the mask arm (109/122, 89.3% vs 98/123, 79.7%), which may have had an impact on the number of outcome events.
Owing to the low event rates and the non-significant difference in hand hygiene between the two arms, we did not adjust for hand hygiene in any analysis. Further, inclusion of hand hygiene in the model did not change the HR. Finally, post hoc analyses are potentially biased due to loss of randomisation; the as-treated comparison was nevertheless added as a sensitivity analysis in this study because of deviations from protocol in mask wearing.

Despite a lack of evidence, most health organisations and countries recommend the use of masks by sick patients as source control.1,2 Masks are used commonly by patients with TB, although clinical trials have not been conducted for this indication. There is a need to conduct larger trials to confirm the suggestion of benefit in our study. If source control is effective in reducing hospital transmission of infection, this may have a practical benefit in mitigating the problem of poor compliance with mask wearing among well HCWs.3 Compliance with any intervention is far more challenging for someone who is well and asymptomatic than for people who are unwell,29 so source control may have an important role in hospital infection control. Reducing the transmission of respiratory pathogens by source patients could also have further benefits in the community in preventing transmission of infection to close contacts such as those in the same household, and should be studied further.

The authors thank the staff at the Beijing Centre for Disease Control and the hospital staff. They also acknowledge the support of patients and their families. This study was supported by a UNSW Goldstar award.

Contributors: CRM was the lead investigator and was responsible for conception and design of the study, analysing data and writing the manuscript. YZ was involved in implementation and database management. AAC was involved in statistical analysis and drafting of the manuscript. HS, DZ, YC and HZ were involved in recruitment, training and manuscript revision. BR contributed to the statistical analysis and revision of the manuscript. QW was involved in implementation, contribution to design, analysis and drafting of the paper.

Funding: This study was supported by a UNSW Goldstar award.

Competing interests: All authors have completed the Unified Competing Interests form (available on request from the corresponding author) and declare that: CRM has held an Australian Research Council Linkage Grant with 3M as the industry partner, for investigator-driven research. 3M have also contributed supplies of masks and respirators for investigator-driven clinical trials. She has received research grants and laboratory testing as in-kind support from Pfizer, GSK and Bio-CSL for investigator-driven research. HS had an NHMRC Australian based Public Health Training Fellowship at the time of the study (1012631). She has also received funding from vaccine manufacturers GSK, bio-CSL and Sanofi Pasteur for investigator-driven research and presentations. AAC had testing of filtration of masks by 3M for her PhD.

Patient consent: Obtained.

Ethics approval: Beijing Center for Disease Prevention and Control IRB and the Human Research Ethics Committee of the University of New South Wales (UNSW), Australia (HREC approval number HC13236).

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data are available.
Battle of Bunker (Breed's) Hill
June 17, 1775

News of the skirmishes of April 19, 1775, at Lexington and Concord spread rapidly. As couriers fanned out through the small towns of Connecticut on the way to Hartford, and from there to New York and Philadelphia, the men of Connecticut grabbed their guns and set off for Massachusetts, but not as a mob of disorganized individuals. Although they did not wait for the Governor to issue the call to arms, the local militia units formed ranks and marched as if they had received formal instructions. Some were under way within 48 hours of the firing of the first shot on Lexington Green. Word of the British retreat to Boston reached these men at different points along the road. Some who had not gone very far simply disbanded and went home, but others, who had already traveled some distance, continued on to Boston.

Jedediah Hyde was among the men who marched from the town of Norwich under the command of Col. Jedediah Huntington. He remained in service for at least 12 days. During the rest of April and May, Connecticut set about the task of formally calling up her troops. On May 1, Jedediah Hyde was commissioned a 1st Lieut. in Capt. Coit's Company of Gen. Parson's Regiment. While most of this regiment remained at New London, one company was sent to the Northern Department (Ticonderoga & environs), and two companies (including Coit's) were immediately assigned to the effort to keep the British bottled up in Boston. It is possible that Coit's Company was actually formed at Boston by Connecticut men who had marched there in response to the Lexington Alarm and never returned home.

By the beginning of June, the British commander at Boston, Gen. Thomas Gage, was in a most embarrassing position. The rebel army surrounding the city was not going to go away -- instead it was growing ever larger as new recruits continued to arrive. As a practical matter, Boston was under siege. Gage asked for more men; he was sent three major generals, John Burgoyne, Henry Clinton, and William Howe, who arrived on May 25. A plan was laid to break out of Boston, but it was impossible to maintain any kind of secrecy in that hostile city, where hundreds of solid citizens with ample cause to hate the British gleefully reported every troop movement to the American commanders. Warned that the British were preparing to march, the Americans ordered the fortification of Bunker Hill, but for some reason the militiamen set to work on adjacent Breed's Hill instead. Historians have never managed to discover how this came about, but the most likely explanation is that the goal was to place several small cannons within firing range of Boston and that the men in the field (who had not yet learned to follow orders) selected Breed's Hill because it was closer to the intended target. Both hills were on the Charlestown peninsula approached by a narrow neck of land; Bunker Hill was the higher of the two and overlooked Breed's Hill.

The British, whose ships controlled the bay, could easily have surrounded the peninsula, cut off and besieged the Americans there and pounded them into surrender. In such a move the British might have been aided by the Americans' failure to secure the higher ground of Bunker Hill, which left the men on Breed's Hill exposed to potential fire from above. As things turned out, however, these details became irrelevant because Gage decided against a siege and elected, instead, to mount a full frontal assault on the American position.
Perhaps Gage feared that the time required for a siege might allow for something to go wrong. Perhaps he thought that a victory by such timid means would not adequately demonstrate the futility of continued resistance to His Majesty's authority. In all probability he believed that green troops fresh from the farms of New England would break and run in the face of the superior discipline of the professional soldiers of the British Army. His decision turned out to be a world-class mistake, one which nearly lost the battle and which went a long way toward losing the entire war.

The British army was trained according to a theory of warfare which relied upon developing forces of sufficient numbers and discipline to be able to absorb volley after volley of enemy fire and still advance as a unit, continuing until the enemy position was overwhelmed. Marksmanship was less important than the ability to accomplish the time-consuming task of loading, firing, and reloading weapons in unison, so as not to slow down or interrupt the momentum of the advance. The forces would eventually approach closely enough so that some of their bullets could not help but inflict damage on the enemy's side. The battle would end in hand-to-hand combat, in which the bayonet formed the most useful part of the soldier's gun.

About military theory and discipline the Americans knew very little. Personal discipline, individual initiative, devotion to the duties of community life, and respect for fellow members of the community -- these were the cornerstones of life in the New England colonies. To the hardworking and thrifty Yankee farmers who awaited the British regulars on Breed's Hill, wasting anything was a sin, and wasting anything as valuable and hard to come by as ammunition was unthinkable. For years their lives had depended upon the ability to hit what they aimed at, whether it was game for the dinner table, or Frenchmen and Indians on the warpath. Some of them had discovered that a musket ball seated in a piece of greased paper and rammed into a grooved or "rifled" gun barrel developed a spin which increased both accuracy and range. Very few of them possessed a bayonet.

Col. William Prescott of Massachusetts commanded the American forces on the Hill, initially about 1,000 men, most or all of whom were from Massachusetts. One of these was Dr. Joseph Warren of Boston, an outstanding physician for whom both the Boston patriots and the British had great respect. Although he held the superior rank of general, Dr. Warren elected to serve in the ranks under Prescott, whose abilities he recognized. Prescott's men were reinforced by a group from New Hampshire led by John Stark, and some detachments from Connecticut led by Israel Putnam -- bringing the total to about 1,500 Americans in all. Capt. Coit's Company was among the Connecticut detachments present.

On the morning of June 17, 1775, the British ships opened fire. Charlestown was set ablaze, its inhabitants forced to flee. The Americans continued to dig trenches and pile up brush along a fence running from the main fortification to the water's edge. The main British force landed at high tide and, led by Maj. Gen. William Howe, commenced their attack at about 1:00 in the afternoon. Because of the shortage of ammunition, the Americans were under strict orders to aim at the waistcoats of the British soldiers, but to hold their fire until ordered otherwise. (Lt.
James Dana of Mansfield, CT, threatened to personally shoot any man who fired before the command was given.) As the British continued their steady march up the hill, they could see the Americans inexplicably silent behind the fence. Then, suddenly, as the advancing British reached a point about fifty yards from the American line, a furious and continual firing erupted, whose deadly accuracy was impossible to withstand. The stunned British were forced to fall back.

In ordering a frontal assault, Gage had risked the reputation of the British army on the ability of Howe's troops to break the American line. He could no longer afford to settle for a siege. The men were ordered to advance again, marching forward over the bodies of their dead and wounded comrades, but the result was exactly the same as before. A third assault was ordered. There was no other choice -- failure to take the hill by storm would be considered a victory for the Americans.

The courage displayed by both sides at this moment is almost unbelievable. As they started up the hill for the third time, the British soldiers had no way of knowing that the Americans had used up just about all of their ammunition. For their part, the Americans stood firm despite their desperate situation, giving every indication that they were ready to fight on indefinitely. As the British closed in, only a few shots were fired. The American ammunition was exhausted, but the intrepid Yankees continued their resistance. Using their muskets as clubs to ward off the British bayonets, they forced the British to fight for every inch of ground. It was an organized retreat, not the rout the British had expected when the day began.

The Americans suffered 453 casualties, mostly in the retreat -- 139 killed (including Dr. Warren, who was shot in the head while covering the retreat), 278 wounded, and 36 missing -- but approximately 70% of them escaped unscathed. The British lost approximately 1,150 -- 226 killed and 928 wounded -- or about 45% of the men engaged. "A dear bought victory," Clinton is reported to have said, "another such would have ruined us."

The effect of this battle was to electrify both sides of the Atlantic. The Yankee farmers had held their ground. They had been defeated, not by the professional soldiers drawn up against them, but by a lack of ammunition. Throughout the colonies, Americans began to believe that independence from Britain was not only desirable, but also possible. In England, a stirring (and highly prejudiced) American account of the battle caused an immediate sensation and a groundswell of sympathy for the embattled colonists. The official British report arrived almost two weeks after the American version, and was, by comparison, so dull that almost no one paid any attention to it. Gage was called home in disgrace, leaving Howe in command. The British at Boston made no further attempt to leave the safety of the city until the following April, when they woke one morning to find Dorchester Heights crowned with Ticonderoga's cannon. Realizing that their position had become untenable, they took to their ships and sailed away from Boston.

Two Eyewitness Accounts

Lt. Col. Storrs (an officer in Israel Putnam's regiment) made the following entry in his diary on June 17, 1775, the day of the battle: "At sunrise this morning a fire began from ye ships, but moderate. About ten went down to ye Hill to Genl. Putnam's Post, who has ye command. Some shot whistled around us.
Tarried there a spell and Returned to have my company in readiness to relieve them -- One killed and 1 wounded when I came away. About 2 o'clock there was a brisk cannonade from ye ships on ye Battery or Entrenchment. At ____ orders came to turn out immediately, and that the Regulars were landing at sundry places. Went to Head Quarters for our Regimental ____. Received orders to repair with our Regiment to No. 1 and defend it. No enemy appearing -- orders soon came that our People in the Intrenchment were retreating and for us to secure ye retreat. Immediately marched for their relief. The Regulars did not come off from Bunker's Hill but have taken possession of the Intrenchments and our People make a stand on Winter Hill and we immediately went to entrenching. Flung up by morning an entrenchment about 100 feet square. Done principally by our Regiment under Putnam's direction. Had but little sleep the night."

Dorothea Gamsby was ten years old in 1775. She was in Boston when the war began. Military activity made it unsafe to travel back to her parents in the countryside, so she remained in Boston with her uncle, Sir George Nutting, and his wife, who were Loyalists. When the British left Boston, the Nuttings were forced to go with them, as it was no longer safe for Loyalists to remain. They took Dorothea with them, and it was many years before she and her family were reunited. Dorothea watched the battle from her aunt and uncle's house, and never forgot what she saw that day. This is the way she told the story to her granddaughter:

"Months passed like the dreams of childhood while the Colonys were ripening to rebelion, bloodshed and civil war. They sent a host of troops from home. Boston was full of them and they seemed to be there only to eat and drink and enjoy themselves, but one day there was more than usual commotion. Uncle said there had been an outbreak in the country, and then came a night when there was bustle, anxiety, and watching. Aunt and her maid walked from room to room, sometimes weeping. I crept after them unable to sleep when everyone seemed wide awake and the streets full of people. It was scarcely daylight when the booming of cannon on board the ships in the harbour shook every house in the Citty. My uncle had been much abroad lately and had only sought his pillow within the hour but he came immediately to my aunt's room, saying he would go and learn the cause of the fireing and come again to inform us. He had not left the house when a servant in livery called to say that the rebels had colected in force on Breed's Hill, were getting up fortifications, and that Governor Gage requested his presence. 'There must be a brush,' he said, 'for General Howe has ordered out the troops to dislodge them.' We were by this time thoroughly frightened but uncle bade (us) keep quiet, said there was no danger, and left us. You may depend we sought the highest window we had as soon as the light of advancing day gave us reason to hope for a sight of the expected contest. There they were, the audacious rebels! hard at work, makeing what seemed to me a monstrous fence. 'What is it they are going to do, aunt, and what are they makeing that big fence for?' 'They mean to shoot our King's soldiers, I suppose,' she said, 'and probably the fireing is intended to drive them away.' 'But, Aunt, the cannon balls will kill some of them. See, see, the soldiers and the banners! O, Aunt, they will be killed! Why can't they stay out of the way?'
The glittering host, the crashing music, all the pomp and brilliance of war moved up toward that band of rebels, but they still laboured at their entrenchment; they seemed to take no heed. The bullets from the ships, the advanceing column of British warriors were alike unnoticed. 'I should think they would begin to get out of the way,' said my aunt. Every available window and roof was filled with spectators, watching the advanceing regulars. Every heart, I dare say, throbed as mine did and we held our breath, or rather it seemed to stop and oppress the labouring chest of its own acord, so intensely we awaited the expected attack, but the troops drew nearer and the rebels toiled on. At length one who stood conspicuously above the rest waved his bright weapon; the explosion came attended by the crash of music, the shrieks of the wounded, the groans of the dying. My aunt fainted. Poor Abby, (the maid), looked on like one distracted. I screamed with all my might. The roar of artilery continued, but the smoke hid the havoc of war from our view. The housekeeper attended to my aunt and beged for somebody to go for Dr. (Joseph) Warren, but everybody was to much engaged with watching the smokeing battlefield.

O, how wild and terrific was that long day! Old as I am, the memory of that fearful contest will sometimes come over my spirit as if it had been but yesterday. Men say it was not much of a fight, but to me it seems terrible. Charleston was in flames, women and children flying from their burning houses sought reffuge in the citty. Dismay and terror, wailing and distraction impressed their picture on my memory, never to be effaced. By and by drays, carts, and every description of vehicle that could be obtained were seen nearing the scene of the conflict and the roar of artillery seaced. Uncle came home and said the rebels had retreated. Dr. Warren was the first who fell that day. Then came the loads of wounded men attended by long lines of soldiers, the gay banners torn and soiled, a sight to be remembered a lifetime. I have read many times of the glory of war but this one battle taught me, however it be painted by poet or novelist, there is nothing but wo and sorrow and shame to be found in the reality. Want, utter destitution to many folowed and when the 12 of August came round and the British troops with the loyal citizens of Boston attempted to celebrate the birthday of their young Prince; scant and course was the cheer their stores afforded. They were temperance people then from sheer necessity. The winter passed, I cannot tell how, but when the spring came everybody went on board the shiping in the harbour, at least so it seemed to me, for the officers and soldiers went and everybody that I knew or cared for, except my Father's family, seemed huddled together in a vessel so small that no room was left for comfort."

Notes on Troop Strength

THE RECORD OF CONNECTICUT MEN IN THE MILITARY AND NAVAL SERVICE DURING THE WAR OF THE REVOLUTION (1775-1783), edited by Henry P. Johnston, A.M., under the authority of the Adjutant General of Connecticut; printed by The Case, Lockwood & Brainard Company, Printers and Binders, Hartford, CT; 1889.

[p.58] NOTE ON BUNKER HILL. The number of Connecticut troops present at this engagement was about four hundred. As far as letters and meagre records show, they were detailed as follows: on the evening of June 16, 75, a body of one thousand men from the Massachusetts and Connecticut regiments around Cambridge, under the immediate command of Col.
Prescott, was ordered to Charlestown Neck to fortify Bunker's (Breed's) Hill. Of this number two hundred were from Conn. under the command of Capt. Knowlton, the detachment being made up of details of one subaltern officer and about thirty men from companies in Putnam's and Spencer's regiments. Lt-Col. Storrs, of Putnam's, states that he sent from his company "Lieut. Dana, Serjt. Fuller, Corp. Webb, and 28 Privates." Capt. Chester, of Spencer's, states that thirty-one went from his company, probably under Lieut. Stephen Goodrich. Putnam's own company was represented by Lieut. Grosvenor and thirty men. Prescott's command, working all night, completed a redoubt which threatened the British shipping. Lord Howe determined to drive the "rebels" from it, and the battle of Bunker Hill followed, June 17. During the progress of the action, Captain Knowlton and the Connecticut men, with others, were sent to the left, where they posted themselves behind a stone wall and inflicted heavy loss upon the enemy. Reinforcements from the American camp arrived both before and during the battle. Among these were the whole or portions of at least three companies of Connecticut troops. Captain Chester reached the stone wall with the rest of his company, perhaps sixty men, and Captains Clark and Coit, of Parson's regiment, also arrived. These, with the two hundred detailed the evening before, would make about four hundred as Connecticut's representation at the battle.

Among the Connecticut officers mentioned as present in the action were Gen. Putnam, in general command, Major Durkee, Captains Chester, Clark, Coit, Lieuts. Dana, Keyes, Hide, Webb, Grosvenor, Bingham (of Norwich), and Ensigns Hill and Bill (of Lebanon). A few of the men's names are also reported, namely: Roger Fox, William Cheeney, Asahel Lyon, Benjamin Rist, Samuel Ashbo, Gershom Smith, Matthew Cummings, Daniel Memory -- killed; Philip Johston, Wilson Rowlandson, Lawrence Sullivan, William Robinson, Benjamin Ross -- prisoners; Gershom Clark, of Lebanon, wounded; James Law, of Lebanon -- right arm broken; John Arnold, Ebenezer Clark, Elijah Abbe, William Clark, Beriah Geer, Nathan Richardson, William Watrous, Sylvanus Snow, William Moore, John Wampee, and Timothy Bugbee -- lost their guns in the fight. As to losses, one account gives fourteen killed and thirty wounded among the Connecticut men. Dr. Philip Turner is mentioned as "attendg wounded after Charlestown Battle." Lawrence Sullivan, prisoner, was released Feb. 24, 76. William Crane of Wethersfield, Chester's Co., was in the action.

--- OOO ---

[p.72] SIXTH REGIMENT -- COL. PARSONS' -- 1775 [Raised on the first call for troops in April-May, 1775. Recruited from New London, Hartford, and present Middlesex Counties. Two companies, including Capt. Coit's, marched at once to Boston, and Capt. Mott's was ordered to the Northern Dept. The other companies remained on duty at New London until June 17, when they were ordered by the Governor's Council to the Boston camps. There the regiment took post at Roxbury in Gen. Spencer's Brigade, and remained until the expiration of term of service, Dec. 10, 75. Adopted as Continental. Regiment re-organized under Col. Parsons for service in 76.]

--- OOO ---

[p.74] 4th Company
William Coit, Captain, New London. Com. May 1; engaged at Bunker Hill June 17; detached to command of privateer; disc. Dec. 75; entered Navy in 76.
Jedediah Hide, 1st Lieut., Norwich. Com. May 1; engaged at Bunker Hill June 17; disc. [Dec.] 75; in service in 77.
James Day,
2d Lieut., New London. Also Adjutant. See above.
William Adams, Jr., Ensign, New London. Com. May 1; disc. [Dec.] 75. [Enlistment Roll of this Company missing.]

Sources

THE STORY OF THE CONTINENTAL ARMY, 1775-1783, by Lynn Montross. Reprinted by Barnes & Noble, Inc., New York, 1967; formerly published as RAG, TAG, AND BOBTAIL by Harper & Brothers, 1952. (Bunker Hill troop strength and casualty lists, maps)

THE WAR FOR INDEPENDENCE, A Military History, by Howard H. Peckham. The University of Chicago Press, Chicago, IL, 1958. (Time of attack, Clinton quote)

THE RECORD OF CONNECTICUT MEN IN THE MILITARY AND NAVAL SERVICE DURING THE WAR OF THE REVOLUTION (1775-1783), edited by Henry P. Johnston, A.M., under the authority of the Adjutant General of Connecticut. The Case, Lockwood & Brainard Company, Printers and Binders, Hartford, CT, 1889. (Lexington Alarm information and Connecticut troop organization, service record of Jedediah Hyde, Storrs' account of the battle)

PENSION FILE OF JEDEDIAH HYDE #S39759. (Service record of Jedediah Hyde)

HISTORY OF WINDHAM COUNTY, CONNECTICUT, by Ellen D. Larned, Vol. II (1760-1880). Published by the author, 1880. (Putnam and Dana quotes)

THE PRICE OF LOYALTY, Tory Writings From the Revolutionary Era, narrative and editing by Catherine S. Crary. McGraw-Hill Book Company, New York, 1973. (Dorothea Gamsby's story)

THE RIDDLE OF JOSEPH WARREN, by Jay Stevens. Yankee Magazine, Vol. 57, No. 7, July 1993.

THE SIEGE OF BOSTON, by Donald Barr Chidsey. Crown Publishers, Inc., 419 Park Avenue South, New York, NY 10016, 1996.

See also: Worcester Polytechnic Institute Breed's Hill / Bunker Hill Staff Ride

Copyright © 1998 G. R. Gordon
A small but growing number of studies link enrolment in preschool or child care centers (which typically include a preschool curriculum) to higher cognitive and language scores on kindergarten-entry tests. The early childhood stage is a permanent learning stage: whatever children learn now, they will take home. Preschool education is the provision of education for children before the commencement of statutory education, usually between the ages of three and five, depending on the jurisdiction. The institutional arrangements for preschool education vary widely around the world, as do the names applied to the institutions. Effective preschool education can help make all children ready to learn the day they start school and, more importantly, help close the enormous gap facing children in poverty. Preschool gives our kids the strong foundation they need to be successful in school and in life. Children who attend pre-kindergarten programs have bigger vocabularies and increased math skills, know more letters and more letter-sound associations, and are more familiar with words and book concepts, according to a number of studies (Patson P. Opido, 2010).

The child is the ultimate concern in all educational processes. He is the beginning and the end of all educational efforts. The goal of education is to help every child grow up well-rounded: physically well-developed, mentally healthy, intellectually alert, emotionally secure and socially well adjusted. These aims can be truly achieved by giving attention to the child's foundation.

The first day of children in school is a unique experience. It may be their first contact with a big group of children. First grade pupils vary in their level of preparedness for grade one work. The Grade I teacher should be aware of these differences in the children's readiness, since readiness is the springboard for action. Knowing pupils' differences will guide the teacher on what to do to develop them to the fullest (Lindberg and Swedo, 1995).

A child born of healthy, responsible and emotionally mature parents has a good foundation. His parents, especially the mother, guide him through the proper habits of eating, sleeping and cleanliness. An individual's attitude toward himself and others, and his behaviour either at work or at play, have their emotional roots in his early childhood experiences. What he learns at home constitutes the basis for future learning and adjustment. As the child develops social awareness, he needs to experience association with a larger group outside his home. Parents send their children to school because they want them to develop basic health habits and self-sufficiency. This also includes the ability to use language patterns and simple, correct social attitudes in the company of the people around him, whether adults or other children, and the appreciation of the aesthetic attributes of his immediate surroundings.

Modern teaching accompanied by modules and analytical measures develops the preschooler's memory retention, serving as the foundation of his education. Kids today are more willing and not afraid to try to discover new ways and methods of learning. The value of preschool is a hot topic these days.
Parents, on the other hand, play a vital role in educating their children because they are their first teachers; theirs is the greatest contribution before a child ever begins his formal education in school. When a child enters the formal school, he carries with him the values acquired from his parents. As with the teacher's task, if parents fail to perform their responsibilities, it may bring about misbehaviour in their children, which may directly or indirectly affect the child's academic performance.

In the Philippine public elementary schools today, inner tensions have been continuously affecting learners entering the grade one level, especially those who had never gone to any kind of schooling before. These learners entering grade one have many apprehensions. Most of them have no experience of going to school. Parents are not capable of sending them to school, especially those in remote and slum areas. Instead of giving their children a chance to study in Day Care Centers and Kindergarten in some public elementary schools, they end up waiting for their children to be accepted in Grade One. In these scenarios the pupils encounter difficulties in catching up with skills like numeracy and literacy, which are the basic skills necessary in the first grade level of formal schooling. These children also have difficulty relating to their new environment, the school. In order to provide a smooth transition from home to school and to prepare them socially and psychologically, the curriculum on Early Childhood Experiences was recommended for adoption in all public elementary schools, as included in the Every Child A Reader Program (ECARP). It aims to develop reading readiness and developmental reading in Grade One, as launched by the Department of Education.

One of the major goals of the 2015 Education for All (EFA) initiative is the expansion of the coverage and improvement of the quality of Early Childhood Care and Development (ECCD) programs in the country. The present government administration in its Ten-Point Agenda has declared a policy calling for the standardization of preschool and day care centers. The Department of Education (DepEd), in support of this thrust, will administer a School Readiness Assessment Test to all Grade One entrants, effective SY 2005-2006. The School Readiness Assessment (SRA) is a tool to determine the readiness of Grade One entrants for tackling formal Grade One work. It will be administered by Grade One teachers, assisted by the Grade Two and Three teachers, one week before the opening of classes. The assessment shall not be treated as an entrance test or examination. No child shall be refused entry to Grade 1 based on the results of this assessment, nor for lack of preschool experience.

To continuously determine the school readiness of all Grade One entrants, the School Readiness Assessment (SReA) was administered. One of the objectives of the SReA is to assess pupils' readiness across the different developmental domains: gross and fine motor, receptive/expressive language, cognitive, and socio-economic. The result obtained was the basis for grouping the Grade One entrants.
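To make the grouping step concrete, the following is a minimal sketch of how entrants might be grouped from their domain scores. The four domain names come from the SReA description above, but the scoring scale, the readiness cut-off, and all names in the code are hypothetical illustrations, not the Department of Education's actual scoring procedure.

    # Hypothetical sketch: grouping Grade One entrants from SReA domain scores.
    # The four domains follow the text above; the 0-10 scale and the 75%
    # readiness cut-off are invented for illustration only.

    DOMAINS = [
        "gross_and_fine_motor",
        "receptive_expressive_language",
        "cognitive",
        "socio_economic",
    ]
    MAX_PER_DOMAIN = 10    # assumed maximum raw score per domain
    READY_FRACTION = 0.75  # assumed: "ready" at 75% of the maximum total

    def group_entrant(scores: dict) -> str:
        """Classify one entrant as 'ready' or 'not ready' from domain scores."""
        total = sum(scores[d] for d in DOMAINS)
        cutoff = READY_FRACTION * MAX_PER_DOMAIN * len(DOMAINS)
        return "ready" if total >= cutoff else "not ready"

    pupils = {
        "pupil_01": {"gross_and_fine_motor": 9, "receptive_expressive_language": 8,
                     "cognitive": 9, "socio_economic": 7},
        "pupil_02": {"gross_and_fine_motor": 5, "receptive_expressive_language": 4,
                     "cognitive": 6, "socio_economic": 5},
    }
    print({name: group_entrant(s) for name, s in pupils.items()})
    # {'pupil_01': 'ready', 'pupil_02': 'not ready'}

Under a scheme of this kind, the "not ready" group would be the pupils flagged for the Eight-Week Early Childhood Experiences curriculum described below.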
The results were also used to guide Grade One teachers in providing appropriate instruction and assistance to address the specific needs of the pupils. The School Readiness Test of May 2011 showed that at least 42.98 percent of the school's population of Grade One entrants were not ready. Children with no Early Childhood Care and Development (ECCD) had low averages in readiness across the different developmental domains: gross and fine motor, receptive/expressive language, cognitive, and socio-economic.

Background of the Study

The researcher was motivated by the situation described above, and this led to the conceptualization of this study. As an educator, the researcher is faced with the fact that there is an imperative need to strengthen and streamline the internal management of educational arrangements in order to achieve efficiency and responsiveness to the trends and challenges of the next millennium. It is therefore the aim of this study to empower parents and positively influence them regarding the affirmative effects of pre-school education on the holistic development of their children, particularly the advancement of their academic performance. The value of preschool remains a hot topic, and the institutional arrangements for preschool education vary widely around the world, as do the names applied to the institutions (Bustos and Espiritu, 1985).

All Grade One teachers are expected to implement the Early Childhood Experiences Curriculum. Teachers are also encouraged to make use of local songs, games, dances and indigenous materials to enrich the curriculum. It is hoped that the Early Childhood Experience for Grade One will greatly benefit the children and strengthen efforts to make the schools child-friendly.

Theoretical Framework

This study is anchored on Edward Thorndike's, Jerome Bruner's, and B. F. Skinner's theories of learning. These theories guided the researcher in the conceptualization of this work. The Law of Readiness as advocated by Thorndike is associated with mind set. It states that when an organism is prepared to respond to a stimulus, allowing it to do so is satisfying, while preventing it is annoying. This law works well in this study because the children are mentally ready to learn. The Law of Exercise states that constant repetition of a response strengthens its connection with the stimulus, while disuse of the response weakens it. Exercises given to the children through a modifiable connection, such as instructional materials, enable them to learn more easily and quickly: the more the responses are used, the stronger the connection that develops. Thus, when a modifiable connection between a stimulus and a response has been made, it is strengthened if it results in satisfaction, as the Law of Effect holds. Jerome Bruner's (1915) theory of instrumental conceptualization is also applied, as it involves three simultaneous processes: acquisition, transformation and evaluation.
This theory of learning holds that the learner acquires knowledge by selecting, structuring, retaining and transforming information. Teaching without the use of proper strategic plans will result in failure. Through the School Readiness Assessment Test (SReA), pupils will acquire knowledge through the different techniques used by the researchers. Hence, learning to read is facilitated, as Skinner's theory suggests.

Conceptual Framework

This study focused on the evaluation of the academic performance of Grade One pupils with and without Early Childhood Experience at Sto. Nino Elementary School. The independent variable consists of the School Readiness Assessment Test (SReA) for children with and without Early Childhood Experience, while the dependent variable is the academic performance of the respondents in terms of the following: Sensory Discrimination, Concept Formation, Numeracy, Reading Readiness, and Construction and Visual-Motor Integration.

Research Paradigm

Figure 1 (a diagram pairing the independent variable with the dependent variable, not reproduced here) shows the relationship of the independent variables to the dependent variables of the study.

Statement of the Problem

This study intended to evaluate the academic performance of Grade One pupils with and without Early Childhood Experience (ECE) at Sto. Nino Elementary School, Division of San Pablo City. Specifically, this study sought to answer the following questions:
1. What are the mean pre-test scores of the two groups of pupils in terms of the following: a) Sensory Discrimination, b) Concept Formation, c) Numeracy, d) Reading Readiness and e) Construction and Visual-Motor Integration?
2. What are the mean post-test scores of the two groups of pupils in terms of the following: a) Sensory Discrimination, b) Concept Formation, c) Numeracy, d) Reading Readiness and e) Construction and Visual-Motor Integration?
3. Is there a significant difference in the mean scores between the pupils with and without Early Childhood Experience (ECE) in their performance?

Hypothesis

The hypothesis stated below was tested in this study. There is no significant difference in the mean scores between the pupils with Early Childhood Experience (ECE) and those without Early Childhood Experience (ECE) in their performance in terms of the following:
i. Sensory Discrimination,
ii. Concept Formation,
iii. Numeracy,
iv. Reading Readiness, and
v. Construction and Visual-Motor Integration.
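Operationally, this null hypothesis is the kind usually checked with a two-sample test on the two groups' mean scores. The sketch below shows one way such a test could look in Python with SciPy; the score lists are fabricated placeholders, not the study's data, and the study itself does not state which test statistic it used.

    # Minimal sketch of testing the null hypothesis above with a two-sample
    # t-test (Welch's). The score lists are invented placeholders only.

    from scipy import stats

    with_ece = [82, 75, 90, 68, 77, 85, 73, 88]     # e.g. numeracy post-test scores
    without_ece = [70, 65, 72, 60, 68, 74, 63, 69]  # same test, no-ECE group

    t_stat, p_value = stats.ttest_ind(with_ece, without_ece, equal_var=False)

    ALPHA = 0.05  # conventional significance level
    verdict = "reject" if p_value < ALPHA else "fail to reject"
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: {verdict} the null hypothesis")

In the study's design, a comparison of this kind would be repeated for each of the five skill areas, on both the pre-test and the post-test scores.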
Significance of the Study

This study is of importance to the pupils, teachers, principals, parents and other researchers for the following reasons:

Pupils are the primary group which the study would benefit. They are the central point to be given much consideration because they are the recipients of this study. They will be assessed, and it would be a big help for them to improve their academic performance.

Teachers are the facilitators of learning. They may be able to undertake possible teaching alternatives that may facilitate, enhance and improve their teaching skills to cater to the needs of pupils with and without Early Childhood Experience, in order to improve their academic performance. They will specifically take cognizance of their status at present in terms of the problems arising in their own classrooms. Likewise, they could assess definitely where the problems lie and thus make remediation to solve them. Therefore the learners' needs would be taken into consideration. The results of this investigation will also help other teachers in the field, since the problems raised here may help them to improve the academic performance of their own pupils.

Principals are the ones who initiate support for every change that happens in the school. Good management and supervision of the school and the teachers, respectively, are the responsibility of the principals. The results this study reveals may enable school heads to plan out better and more effective ways to evaluate the academic performance of Grade One pupils with and without Early Childhood Experience. It is very important to take into consideration the needs of Grade One pupils because it is their foundation year. In that case the principal ought to have a plan to cater to the individual needs of the learners, to improve their academic performance and to elevate the quality of education in the country.

Parents are stakeholders of the school. The findings of this study are important to parents because they need to be informed about the performance of their children in school. Through this, they will know the importance of Early Childhood Experience (ECE) for their children, and for this reason they will send them to preschool, so that their children will not be shocked by their new environment. The parents will work hand in hand with the teacher in facilitating strategies to evaluate the academic performance of the learners. They may also help by influencing their children to have good study habits. Their support for their children and the school is important so that its goals will be attained.

Other researchers who would be interested in this problem may gain further insights in developing their own research work. The data revealed by this study may be used by other researchers to enhance their own studies. They may also use it as a related study, or to augment the data they have, to come up with a more comprehensive knowledge of the problem presented herein.

Scope and Limitation of the Study

The focus of the study is An Evaluation of Academic Performance of Grade One Pupils of Sto. Nino Elementary School, Dapdapan District, Division of San Pablo City. It limits its coverage to the results of the School Readiness Assessment (SReA), which includes the following areas: Sensory Discrimination, Concept Formation, Numeracy, Reading Readiness, and Construction and Visual-Motor Integration; the pre-test and post-test of the School Readiness Assessment (SReA); and the instructional module devised to answer the needs of Grade One pupils. The respondents of the study will be eighty (80) pupils of Sto. Nino Elementary School: forty (40) pupils with Early Childhood Experience (ECE) and forty (40) pupils without Early Childhood Experience (ECE).

Definition of Terms

For the interpretation of the study, the terms used are defined in order to avoid vague or ambiguous meanings and to provide the reader a common point of reference.

Public Elementary Schools. These are schools managed, operated and maintained by the national government. They offer curricular programs for Grade One to Six children.

Sensory Discrimination. This refers to exercises in discrimination, the simplest form of mental operation that is clearly intellective. It includes exercises on identifying same and different shapes.

Concept Formation. This refers to exercises that require the learner to construct the properties of an object from its definition. It includes exercises on completing statements showing simple analogy.
Numeracy. The term refers to the ability to learn specific tasks in Mathematics, like counting, arranging and sequencing sets of objects. Numeracy skills are designed to help with the more advanced levels of mathematics that pupils will encounter during their school lives and also into adulthood. It includes exercises on pointing out which set has more or less. In this study, it pertains to the level of achievement of the Grade One pupils in different learning skills in Mathematics as perceived by their Grade One teachers.

Construction and Visual-Motor Integration. These skills refer to the smooth coordination of the eyes and hands working together.

Sto. Nino Elementary School. A public elementary school situated in Brgy. Sto. Nino, San Pablo City, where the present study is being conducted.

Grade One Pupils. These refer to children entering the formal school in the primary grades as prescribed by the Department of Education, whose ages range from six (6) years old and above.

Chapter II
REVIEW OF RELATED LITERATURE AND STUDIES

This chapter presents literature and studies which are related to the problem. Materials found in local and foreign books, educational journals and magazines, documents, guidelines and reports by the Department of Education provided the references.

Related Literature

Philosophy and Goals of Elementary Education. The philosophy of pre-school education, as stated in DECS Memo No. 107, s. 1989, considers the child, the school and the teacher, with the support of the family, in maximizing the child's potential. Pre-school education is based on the knowledge that each child is a unique individual with his own biological make-up, interests, capacities and ways of viewing the world. He has a tremendous capacity for learning. He is active and understands the world differently from adults. His language has developed with the acquisition of a wide vocabulary, making him capable of communicating his ideas and feelings. A pre-school child is always in the process of becoming, and therefore, if properly developed, can become a critical thinker and a socially sensitive, directed, creative, responsible and caring individual. Pre-school education must aim to develop children in all aspects (physical, social, emotional and cognitive) so that they will be better prepared to adjust and cope with life situations and the demands of formal schooling. By doing so, learning gaps and dropouts may be reduced or avoided to the maximum.

Pre-school education is founded on the following objectives (DECS Memo No. 45, s. 1995):
To develop the child in all aspects (physical, social, emotional and cognitive) so that he may be better prepared to adjust and cope with life situations within the context of his experience;
To maximize the child's potential through a variety of carefully selected and meaningful experiences, considering his interests and capabilities; and
To develop the child in all aspects so that he becomes a self-propelling, thinking and contributing individual, able to make decisions which will prepare him for the more complex demands of future life.

DepEd Order No. 10, s. 2004 is the legal basis for the implementation of the Enhanced Eight-Week Early Experiences for Grade One. Its main thrust is the development of academic skills among learners, because most Grade One entrants have not gone through pre-school experiences. Hence, the Early Childhood Experience has been enriched and aligned with the BEC, making it an integral part of the Grade 1 Curriculum.
In 1995, Early Childhood Experiences for Grade One was institutionalized, at the same time as the official age for entry into primary school was lowered to six years. All Grade One teachers were requested to implement the Eight-Week Curriculum and gradually move to the regular Grade One curriculum. Pursuant to DepEd Order No. 15, s. 2005, which calls for the administration of the School Readiness Assessment for all Grade One entrants, all incoming Grade 1 pupils shall undergo a school readiness assessment using the revised tool. The School Readiness Assessment (SRA) will be administered by Grade 1 teachers, assisted by Grade II, Grade III and master teachers of their respective schools. This assessment shall be administered twice: the first is given in May, and the second after the children have undergone the 8-week curriculum, focusing on the competencies not manifested by the child during the first assessment. The SRA will determine the level of progress of Grade 1 entrants across the different developmental domains that are critical in tackling Grade 1 learning competencies. The result shall be the basis for grouping the Grade 1 entrants. It will also be used to guide Grade 1 teachers in providing appropriate instruction and assistance to address the specific needs of the pupils through the utilization of the 8-week curriculum. The assessment shall not be treated as an entrance test or examination, as children may be anxious about passing or failing. No child shall be refused entry to Grade 1 based on the results of this assessment.

Educating our children at an early stage will give young Filipinos a better chance in the future to compete for jobs and opportunities in the new world order, in which better educated and highly skilled persons have become the most valued resources. Giving access to free quality early childhood education will bridge the gap between the rich and the poor and give our less privileged countrymen a strong foundation for the challenges of the next millennium (Eduardo J. Angara, 1997).

The Early Childhood Care and Development (ECCD) Law, enacted in 2000, recognizes the importance of early childhood and its special needs, affirms parents as primary caregivers and the child's first teachers, and establishes parent effectiveness seminars and nutrition counselling for pregnant and lactating mothers. The law requires the establishment of a National Coordinating Council for the Welfare of Children which: (a) establishes guidelines, standards, and culturally relevant practices for ECCD programs; (b) develops a national system for the recruitment, training, and accrediting of caregivers; (c) monitors the delivery of ECCD services and their impact on beneficiaries; (d) provides additional resources to poor and disadvantaged communities in order to increase the supply of ECCD programs; and (e) encourages the development of private sector initiatives. Republic Act 6972, known as the Barangay (Village) Level Total Protection of Children Act, has a provision that requires all local government units to establish a day-care centre in every village; the law institutionalized the features of a day-care programme that provides for young children's learning needs aside from their health and psychosocial needs. The universalization of early childhood education and the standardization of preschool and day care centers were established through Executive Order No. 658 of 2008 (Expanding the Pre-School Coverage to Include Children Enrolled in Day Care Centers) (PTFE, 2008).
According to Clark (2002), in her article "First Grade Readiness," there are signs one can look for to know if a child is ready for first grade. In the physical realm, the first-grade child's limbs are now in proportion with the body and head. There is a loss of baby fat and greater definition in the face. In the emotional realm, the young child who once expressed strong emotions through sudden outbursts now has feelings that begin to deepen. A child will talk of hurt feelings and being sad. Socially, the first-grade-ready child begins to form friendships which go deeper than before. The child feels loyalty to friends and often expresses the desire to be with them. In the mental realm, there is the birth of free memory. This is different from the memory of a four-year-old: the younger child's memory must be triggered by a sight, smell, or rhythmic verse, while with free memory the child can summon the memory and recall it at will.

Kagan (2000) stated that the concept of school readiness has been defined and redefined over the years, resulting in differing viewpoints. Several theories of child development and learning have been used to explain the term. In fact, there appear to be two types of readiness: readiness to learn, which involves a level of development at which the child has the capacity to learn specific materials, and readiness for school, which involves a specific set of cognitive, linguistic, social and motor skills that enable a child to assimilate the school's curriculum.

According to Quinto (2001), the lowering of the entrance age to six years for grade one pupils in Philippine public elementary schools has created inner tensions, especially for those who had never gone to any kind of school before. So, in order to provide a smooth transition from home to school and to prepare them socially and psychologically, the curriculum on Early Childhood Experiences was recommended for adoption in all public elementary schools.

Studies show that a child's mind is almost fully developed before he reaches the age of five. This presents a need for organized early childhood education. Pre-elementary or preschool education is one of the latest trends in childhood education, giving equal opportunities to all children at the lowest step of the educational ladder. Preschool education holds a prominent place, being that level in the school system wherein children are trained to be better prepared for grade one. For the development of the child, the curriculum focuses on these areas of development: physical (gross and fine motor coordination through play and manipulative activities like games and simple work); cognitive (communication skills, sensory-perceptual concepts, numeracy skills); and personal-social (health habits and independence in dressing, eating, sleeping and toileting; relating with teachers, peers and other people through group play and interaction; following rules and routine).

Groark (2006) stresses that school and district administrators, as well as policymakers, are increasingly recognizing that early education and intervention services for young children have a direct and positive impact on later school performance and quality.
Soliven (1999), an authority on child development, underscores the significance of pre-primary education to the mental development of children, citing the results of research which showed that pre-primary education is important to the child. She pointed out that the intellectual capacity of the child is most susceptible to development, and reaches a substantially higher rate of intellectual development, in early childhood, especially in a favourable environment. It is apparent that intelligence is best developed in the first six years of life if the child is exposed to a favourable environment during this formative period.

Vittetow (1994), former Education Expert of the International Cooperation Administration (ICA), in his Educational Series Bulletin for the Bureau of Public Schools, gave growth characteristics of pre-school Filipino children which hold true for all children at this level of growth and development. Said growth and development includes: 1) physical characteristics, 2) mental characteristics, 3) social characteristics, 4) emotional characteristics, 5) spiritual and moral characteristics, and 6) aesthetic characteristics.

According to Kats (2001), what children learn, how they learn, and how much they learn depend on many factors. Among the most important factors are the child's physical well-being and his emotional and cognitive relationships with those who care for him. The school readiness goal reflects two concerns about the education of young children. The first is the increasing number of young children who live in poverty or in single-parent households, have limited proficiency in English, are affected by the drug abuse of their parents, have poor nutrition, and receive inadequate health care. The second area of concern involves such matters as the high rates of retention in kindergarten and the primary grades, delayed school entry in some districts, segregated transition classes in others, and the increasing use of standardized tests to determine children's readiness to enter school. Standardized tests used to deny children entrance to school or to place them in special classes are inappropriate for children younger than six. These trends are due largely to the fact that an academic curriculum and direct-instruction teaching practices that are appropriate for the upper grades have gradually been moved down to the kindergarten and first grade. These two areas of concern suggest that reaching the school readiness goal will require a twofold strategy: one part focused on supporting families in their efforts to help their children get ready for school, and the second on helping the schools to be responsive to the wide range of developmental levels, backgrounds, experiences, and needs of the children who come to school.

Watson (1985) pointed out that groups of children of higher economic status have higher levels of intelligence than those of less favored economic status: the higher the status, the higher the average IQs on the Stanford-Binet or similar verbal tests. The mismatch between the schools and children from low-income, working-class families has led to concerted attempts to involve parents from these families in the schools. When the school can involve low-income parents, their children's school attendance increases, the children are less disruptive in class and less aggressive on the playground, their classwork improves, and they are more likely to complete their homework. If they are raised in emotionally secure homes, they tend to be emotionally secure children.
If they are raised in homes which lack happiness and have little emotional security, they may in time tend to be unhappy and insecure. However, these differences between higher and lower socio-economic groups may be due to non-intellectual factors. Some of the factors serving to depress intelligence test scores among the lower socio-economic groups could be greater resistance to taking tests, the effect of nutritional deficiencies, different attitudes towards education, suspicion, lack of support and the like. Although any or all of these factors seem reasonable, there is no definitive research to establish the answer conclusively.

It has been observed that most elementary teachers do not have the necessary educational background to teach the visual arts. University of Hawaii professor Dr. Stephanie Feeny (1986) stresses the importance of the arts in the development of the thinking process in children.
This article on the problems of tax collection in Nigeria is an extract from the literature review of a larger project.

2.1 Conceptual Framework

Adebayo (2004) defined taxation as the legal demand made by the Federal Government or a state government for its citizens to pay money on income, goods and services. In a less complex society, in which the government has few duties and responsibilities, the financial needs of the government are minimal. However, as society becomes more complex, the needs of the people become greater; the government assumes greater responsibility, and its financial needs grow. Consequently, taxes increase and their effect on the economy becomes more important. In the past, government has utilized taxation as an instrument for regulating the general economy. Since income tax provides a large source of national revenue, its effect on inflation, unemployment, and social and economic objectives has become a prime consideration in enacting tax law in Nigeria. Taxation, in an aggregate definition, is a mandatory contribution from the people to generate revenue for the government, used in financing its capital projects and recurrent expenditure.

A renowned tax authority, Dr. H. Dalton, in his book "Principles of Public Finance," defined a tax as "a compulsory contribution imposed by a public authority, irrespective of the exact amount of service rendered to the taxpayer in return." According to Professor Seligman, a tax is a compulsory contribution from a person to the government to defray the expenses incurred in the common interest of all, without reference to special benefit conferred.

From the above concepts, the following are the global objectives of tax administration with respect to evasion and avoidance:
i. To prosecute breaches of the tax law very vigorously, thereby deterring tax evasion and avoidance;
ii. To recognize tax officials as important human assets in the achievement of the said objective;
iii. To collect tax according to the law by the fairest means possible, while actively encouraging voluntary compliance.

2.2 History of Taxation in Nigeria

Taxation is an age-long concept which dates back to the pre-colonial era in Nigeria. Taxes were paid through different kinds of manual labour for the benefit of the entire community. Some examples of such services are the clearing of bush and the digging of pit toilets and wells for the benefit of the community as a whole. Failure to render such services usually resulted in the seizure of property, which would be reclaimed only on payment of money. For example, the rest house at Isenyin, inherited by the Oyo State Government, was said to have been built between 1916 and 1932, after the Isenyin riot of 1916, under the supervision of Captain W. Rose, the resident district officer, and Mr. Yerokun, the caretaker.

In 1904, during colonial rule, the late Lord Lugard's government introduced income tax to Nigeria, and community tax was being paid in the Sokoto Caliphate in northern Nigeria. The Ordinances of 1917, 1918 and 1928 were later incorporated into the Direct Taxation Ordinance No. 4 of 1940, which replaced the Native Revenue Ordinance. During this period, a board was constituted, comprising the following:
1) The resident Governor;
2) A representative of elders in each district;
3) Any native authority recognized by the tax authority;
4) Any village council appointed by the government.

2.2.1 Characteristics of Taxation

There are three major characteristics of taxation.
They are as follows: Tax is a compulsory contribution imposed by the government on the people residing within the country; since it is compulsory, any person who comes under a tax jurisdiction and refuses to pay is liable to punishment. Tax is not levied in return for any specific service rendered by the government to the taxpayer; an individual cannot ask any special benefit from the state in return for the tax paid by him. Tax is a contribution to settle the costs incurred by the government of the state; the state uses the revenue collected from taxes to provide goods and social services such as hospitals, schools, public utility services and so on, which benefit all the people.

2.2.2 Forms of Taxes

Taxes may be differentiated by asking the question: who pays the tax? If the person assessed is the one who pays, the tax is direct; if one person is assessed and another pays, the tax becomes indirect.

Direct Tax: These are taxes that are levied on the income, gains or profit of individuals and business firms, and which are actually paid by the person or persons on whom they are legally imposed. This view was aptly expressed by John Stuart Mill, who defined a direct tax as one which is "demanded from the very person who it is intended or desired should pay it." In Nigeria, direct taxes include the following:
i. Personal Income Tax: a tax on the income of employees, sole traders, partnerships and individuals.
ii. Company Income Tax: this applies to the profit/income of companies, which are usually corporate entities.
iii. Capital Gains Tax: this affects companies, individuals and non-corporate bodies. It is a tax on the gains arising from the disposal of items of a capital nature.
iv. Petroleum Profit Tax: a tax payable by entities that engage in prospecting for, or the extraction and transportation of, petroleum oil or natural gas.

Indirect Tax: An indirect tax is a tax imposed on the consumption of goods and services by individuals as well as corporate persons. It is imposed on one person but paid partly or wholly by another, owing to "a consequential change in the terms of some contract or bargain between them." In Nigeria, examples of indirect taxes are as follows:
i. Import Duties/Tariffs
ii. Export Duties
iii. Customs Duties
iv. Excise Duties
v. Value Added Tax (VAT)

2.2.3 At State Government Level

The administration of the income tax law in each state of the federation was vested in the State Board of Internal Revenue prior to 1993. The composition of the Board could differ from state to state; however, with the 1993 amendment to ITMA, the composition is now uniform throughout the country. Subsection 2 gives the composition of the State Board as:
a) Three other persons nominated by the Commissioner of Finance of the State on merit;
b) The directors and heads of departments within the state service;
c) The director from the State Ministry of Finance;
d) The executive head of the state service as chairman, who shall be a person experienced in taxation, appointed from within the state service;
e) A legal adviser, who shall be appointed from the State Ministry of Justice.

The State Board shall be responsible for:
a) Ensuring the effectiveness and optimum collection of all taxes and penalties due to the government under the relevant law;
b) Appointment, promotion, transfer and discipline of employees of the state service;
c) General control of the management of the service on matters of policy, subject to the provisions of the law setting up the service;
d) Making recommendations, where appropriate, to the Joint Tax Board on tax policy, tax reform, tax legislation, tax treaties and exemptions, as may be required from time to time.

2.2.6 Technical Committee of the State Board

As an adjunct to the Board, the law (Section 33C of Decree 3 of 1993) also provided for a technical committee of the Board, which shall be made up of the following:
a) The chairman of the State Board, as chairman;
b) The directors within the state service;
c) The legal adviser to the Board;
d) The secretary.

The technical committee shall have power to do the following:
a) Co-opt additional staff from within the service in the discharge of its duties;
b) Advise the State Board on all its powers and duties as prescribed;
c) Attend to such other matters as may from time to time be referred to it by the Board;
d) Consider all matters that require professional and technical expertise and make recommendations to the State Board.

2.3 Sources of Tax Revenue

An interesting feature of the internal revenue sources of the states is that they are common: the same revenue sources are observed from one state to the other, with only little variation. The sources are tax-revenue sources and non-tax revenue sources. The difference between the two is that whereas tax revenues depend on taxes imposed by the state, non-tax revenue sources are independent of taxes and hence of the tax administrative machinery available to the state. An example of non-tax revenue is income earned from state enterprises like property development corporations, housing estates, government farms, etc. A good number of such public enterprises have not really generated substantial revenue for state coffers; they are therefore relatively smaller components of the internal revenue sources of the states. Other examples of non-tax revenue are grants, gifts or donations by state indigenes.

The larger component of the internal revenue sources of the states is the tax-revenue sources. There is a long list of these taxes and fees. A typical state list of tax-revenue sources includes:
i. Direct Assessment
ii. Pay As You Earn (PAYE)
iii. Driver Licence Fees
iv. Motor Vehicle Licences
v. Entertainment Tax
vi. Stamp Duties and Penalties

2.4 Importance of Taxation

Taxation, as one of the measures that assist the nation's economy, has the following importance:
a) Taxation is imposed to generate revenue for the government to meet its capital and recurrent expenditure.
b) It reduces inequalities of income: the more you earn, the more you pay.
c) It increases output and employment.
d) It awakens civic responsibility among citizens.
e) Tax is a fiscal policy measure for managing inflation, deflation and depression.
f) It encourages and protects new industries.
g) It discourages the consumption of harmful products such as beer, tobacco, etc.

2.4.1 Tax Exemption

In Nigeria, some types of income are completely exempted from tax. These are incomes from:
i. Social clubs;
ii. Cooperative societies;
iii. Mosques and churches;
iv. Profits of trade unions;
v. Funds raised by local government;
vi. Federal Government endowment funds;
vii. Contributions to approved institutions, e.g. pension and national provident funds.

2.5 Review of Taxation Problems in Nigeria

Challenges of Tax Collection and Administration in Nigeria Today. In a symposium by the Chartered Institute of Taxation of Nigeria, held as part of Nigeria's 50th Anniversary celebration, Naiyeju, J.K.
(2010) highlighted the various challenges of tax collection and administration in Nigeria today as follows:
(i) Administrative Challenges: Most tax authorities (especially the states and local governments) lack the desired institutional capacity to administer effectively the taxes under their purview (capacity in terms of staffing, skills, pay, other funding, computer and IT infrastructure, etc.).
(ii) Compliance Challenges: For PIT, non-compliance of employers in registering their employees and remitting such taxes to the relevant tax authorities. For VAT, a lot of the VAT collected is not remitted, while many evade the tax in the cities and rural areas. For CIT, many SMEs, informal-sector operators and even big companies carry out evasive practices.
(iii) Lack of Equity: The bulk of PIT today is paid only by employees. Politicians, the rich, professionals and the privileged few are not equitably taxed.
(iv) Challenge of Multiple Taxes: This is still a major problem besetting our tax collection and administration.
(v) Poor Taxation Drive by Tiers of Government: The political economy of revenue allocation discourages a proactive revenue drive, especially by the states and LGs, which rely heavily on their share of the oil revenue.
(vi) Challenge of Bad Governance: Taxpayers are not encouraged to pay more taxes because there is no visible evidence of good governance.
(vii) Challenge of Corruption: Tax collection and administration are often prone to corruption. The corruption risk erodes the tax yield and confidence in the system.
(viii) Challenges of Human Capacity Building and Training: At the state and local government levels, there is a dearth of capable hands to administer the relevant taxes efficiently.

2.5.1 Tax Refund

A tax refund or tax rebate is a refund of taxes when the tax liability is less than the tax paid. When a tax is overpaid, the law requires that it be refunded. Section 23 of the FIRS (Establishment) Act 2007 makes provision for tax refunds to eligible taxpayers. The Act gives the FIRS the power to make a tax refund after proper auditing. A dedicated account is to be set up by the AGF and funded from the Federation Account based on an approved budget, and refunds are to be paid from this account within 90 days. The prescribed 90 days commences from the time a claim is made. What if the claim is frivolous? It is presumed the claim will be registered to avoid controversies. The Service is to decide who is eligible for a refund; this may be unfair to a legitimate taxpayer if there is undue delay. There is no time-frame stipulated for the tax audit instigated by a claim for tax refund.
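As a rough illustration of the refund rule just described (a refund arises when tax paid exceeds the audited liability, and payment falls due within 90 days of the claim), consider the following sketch. The naira figures and function names are hypothetical, and the sketch simplifies the statutory procedure; the Act itself governs the actual mechanics.

    # Hypothetical sketch of the refund rule in section 2.5.1: the refundable
    # amount is the excess of tax paid over the audited liability, and payment
    # is due within 90 days of the claim. All figures are invented.

    from datetime import date, timedelta

    REFUND_WINDOW_DAYS = 90  # per the 90-day period described above

    def refund_due(tax_paid: float, tax_liability: float) -> float:
        """Refundable amount: the excess of tax paid over the liability, if any."""
        return max(0.0, tax_paid - tax_liability)

    def payment_deadline(claim_date: date) -> date:
        """Latest payment date, counting 90 days from the date the claim is made."""
        return claim_date + timedelta(days=REFUND_WINDOW_DAYS)

    # Example: a taxpayer paid N1,200,000 against an audited liability of N950,000.
    print(refund_due(1_200_000, 950_000))      # 250000.0
    print(payment_deadline(date(2010, 3, 1)))  # 2010-05-30

Note that the sketch leaves out exactly the points the text flags as open: eligibility screening by the Service, frivolous claims, and the unbounded duration of the pre-refund audit.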
The tax collection and administration machinery should be strengthened, especially at the state and local government levels, in a manner similar to the provisions of the FIRS (Establishment) Act 2007. This will give them more autonomy and allow them to build capacity for efficient tax collection and administration. Good governance should be enthroned to elicit voluntary tax compliance. Government should improve the current revenue allocation system so as to encourage the taxation drive of the states and local governments. The introduction of the present tax refund system is a significant step in the right direction. The FIRS should demonstrate good intent by making prompt refunds of taxes overpaid by genuine taxpayers. Companies, whether small, medium or big, are advised to keep proper and complete records of their business transactions to support their refund claims before the FIRS tax auditors. The tax authorities, too, should be more careful and objective in their assessments, to avoid taking excess tax from taxpayers and so warranting refunds at a later date. State and local government tax authorities should recruit, train and motivate highly skilled personnel to administer the taxes in their jurisdictions. Corruption risk-mitigating strategies must be developed for tax collection and administration:
- efficient service delivery;
- a code of conduct for staff;
- internal control;
- sanctions and incentives;
- whistleblower protection;
- regulation against corrupt practices; and
- corruption auditing.
In conclusion, as we move into the post-50 years of our independence, all economic citizens must be committed to meeting their tax obligations, while the government must be serious in discharging its responsibilities to the governed. Tax evasion and corruption must be treated as a social and economic leprosy on their perpetrators. Entry to our public offices and to the good things of life must be shut to tax evaders and corrupt citizens, while good taxpayers and honest citizens must be adequately rewarded (Naiyeju, 2010).

2.7 THEORETICAL FRAMEWORK
The local government system in Nigeria needs a moderate amount of financial autonomy to be able to discharge its responsibilities effectively. Public revenue arrangements in a federal system assume that there are benefits to be derived from decentralization. Public revenue decentralization occurs when lower tiers of government have statutory power to raise taxes and carry out spending activities within specified legal criteria; this is the Overlapping Authority Model of intergovernmental relations propounded by Deil S. Wright (1978). Administrative decentralization, by contrast, occurs when much of the money is raised centrally but part of it is allocated to lower levels of government through some revenue-sharing formula. The main case for decentralization rests on allocative or efficiency grounds, so it is possible to advance an argument for decentralization in Nigeria, where there are many ethnic groups.
Oates (2003:240) contends that "there are surely reasons, in principle, to believe that policies formulated for the provision of infrastructure and even human capital that are sensitive to regional or local conditions are likely to be more effective in encouraging economic development than centrally determined policies that ignore these geographical differences." Whether there is a strong relationship between decentralization and economic growth, and how economic fundamentals behave within decentralized jurisdictions, remains an empirical issue, and the discussion must be country-specific. Kim (1995), quoted in Oates (2006), found in a cross-country model of rates of economic growth that revenue decentralization had a positive and statistically significant effect. His results also show that, other things being equal, more public revenue decentralization was associated with more rapid growth in GDP per capita during the 1974-1989 period. Prud'homme (1995), on the other hand, argues that decentralization can increase disparities, jeopardize stability, undermine efficiency and encourage corruption. He maintains that local authorities, for example, have few incentives to undertake economic stabilization policies, and that the instruments of monetary and public revenue policy are better handled by the central government. Oates (2003) takes the contrary view that centralization is costly because it leads the government to provide public goods that diverge from the preferences of the citizens of particular areas (regions, provinces, states, local governments). He also argues that when these preferences vary among geographical areas, a uniform package chosen by a nation's government is likely to force some localities to consume more or less than they would like to consume. According to Tanzi (1995), the interpretations of both Oates and Prud'homme assume that subnational government levels already exist, so the crucial problem becomes which of the existing levels of government ought to be responsible for particular forms of spending. The functions of government can be divided into three: the allocation, distribution and stabilization functions (Musgrave, 1959). Using this stratification, the stabilization and distribution functions are expected to fall under the purview of the central government, while lower governments undertake allocative functions. Hence, any spending and taxing decisions that affect the rate of inflation, the level of unemployment and so on are better handled at the centre, while activities that affect social welfare are more efficiently undertaken by subnational governments. Theoretically, the scope of benefit is the basis for allocating responsibilities among levels of government. Public goods and services that are national in nature (foreign affairs, the environment, immigration and defense) should be provided by the central government, while those whose benefits are mainly localized should be assigned to the lower levels of government. Quasi-private or intermediate goods and services, such as administration, health and welfare services, should, on grounds of efficient delivery, be assigned to lower levels of government (Vincent, 2001). Studies of tax and public revenue mobilization in Nigeria have shown a high degree of centralization.
According to Emenuga (2003), the allocation of revenue to the tiers of government has not adhered strictly to the expenditure requirements of each tier; the federal government has thus become a surplus-spending unit relative to its functions. He proposes instead that a tier's share be determined through the aggregation of its basic expenditure needs. To reduce the gap between tax powers and responsibilities, two types of revenue sources are allocated to each tier: independent revenue sources, and direct allocations from the federation account, into which centrally collected revenues are paid. Local governments also receive allocations from states' internally generated revenue. An agreed formula for vertical revenue sharing is used in sharing funds from the federation account. Another key issue in the practice of public revenue mobilization in Nigeria is how to distribute the bloc share from the federation account among the constituent units of each tier, i.e. among the 36 states and the 774 local governments; this is called horizontal revenue sharing. In Nigeria there are four categories in the vertical allocation list: federal, state and local governments, and the special fund. The allocation to the Federal Capital Territory (FCT) is accounted for under the special fund, which is administered by the federal government.

2.8 Local Government Finances and Revenue Utilization
Public revenue mobilisation is one of the most keenly contested issues in Nigeria. A comprehensive review of the reports of the various commissions and government policies, from the 1946 Phillipson Commission to the activities of the National Revenue Mobilisation, Allocation and Fiscal Commission established in 1989, can be found in Kayode (1993), Emenuga (2003) and Ekpo (2004). Local governments in Nigeria receive statutory allocations from the two higher tiers of government (federal and state). Under the present revenue-sharing formula, local governments receive 20 per cent of the federation account. They are also statutorily entitled to 10 per cent of states' internally generated revenue. As regards Value Added Tax, local governments received 30 per cent of the proceeds in 1998, shared among them on the following basis: equality (50 per cent), population (30 per cent) and derivation (20 per cent); a short illustration of this sharing arithmetic appears at the end of this section. In 1999, local governments received 35 per cent of the VAT proceeds. The federal government controls all the major sources of revenue, such as import and excise duties, mining rents and royalties, the petroleum sales tax, the petroleum profit tax and the companies income tax, among other revenue sources (see Table 1). Local government taxes are minimal, and this limits their ability to raise independent revenue, so they depend largely on allocations from the federation account. Much of the revenue collected by the federal government and distributed among the different tiers of government through the vertical revenue allocation formula comes from the federation account, but the federal government seems to exercise too much control over its distribution: many deductions are made from the total revenue collected before the rest is distributed according to the sharing formula.
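To make the VAT-sharing arithmetic described above concrete, the following is a minimal sketch in Python. It applies only the stated horizontal weights (equality 50 per cent, population 30 per cent, derivation 20 per cent) to a local government's share of the VAT pool; the function name and all input figures are hypothetical, and real allocations involve further statutory rules and deductions not modeled here.

```python
def lg_vat_share(vat_pool, n_lgs, lg_population, total_population,
                 lg_vat_generated, total_vat_generated):
    """Illustrative horizontal sharing of the local-government VAT pool:
    equality 50%, population 30%, derivation 20% (weights from the text)."""
    equality = 0.50 * vat_pool / n_lgs
    population = 0.30 * vat_pool * lg_population / total_population
    derivation = 0.20 * vat_pool * lg_vat_generated / total_vat_generated
    return equality + population + derivation

# Hypothetical figures for one of the 774 local governments
share = lg_vat_share(vat_pool=1_000_000_000, n_lgs=774,
                     lg_population=250_000, total_population=140_000_000,
                     lg_vat_generated=2_000_000, total_vat_generated=900_000_000)
print(f"Illustrative LG share: {share:,.0f}")
```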
The Truth about Vegetarianism - Chapters 5 to 8

CHAPTER 5 - Vegetarian Nutrition - Getting Everything Your Body Needs

At this point you're probably starting to worry about how you're going to make sure you get the right balance of nutrients that your body needs, and thinking that you'll need a spreadsheet to keep track of everything you eat. But it's not as difficult as it may seem from the outset – you just need to bone up on a few nutritional basics to keep in mind when you plan your meals. Some people spend their entire lives studying the science of nutrition, but you don't have to make it your life's work. The truth is, despite what the meat industry repeatedly tells you, vegetarian diets aren't nutritionally inferior to meat-based diets. There's no need to worry that you'll lack the vitamins, minerals and protein that your body needs. Which isn't to say that it's not possible to eat badly as a vegetarian – many people have lousy diets, even vegetarians. But if you eat smart, your vegetarian diet can be the healthiest way you've ever eaten.

Protein – Am I Getting Enough?

Your first concern on starting a vegetarian way of life is that, without meat in your diet, you'll lack protein. So you'll be happy to discover that it's almost impossible to eat too little protein on a vegetarian diet. Protein is, of course, of the utmost importance to a healthful diet. Your bones, muscles and hormones all contain protein, and eating enough of it helps keep your body strong on the most fundamental level. Unfortunately, the importance of eating animal protein has long been overstated. Man once believed that eating the flesh of other animals would make him stronger and healthier – but now that we know what we do about cholesterol and the dangers of eating saturated fats, it's obvious that limiting animal proteins is the healthy choice. Vegetarians can, of course, be protein deficient – but that comes from undereating, or from relying too heavily on junk foods. In most cases, any diet adequate in calories from a variety of healthful sources provides enough protein. Grains, vegetables, beans, seeds and nuts are all protein-rich foods, easily providing what the body needs.

Contrary to what many vegetarians believed in the last couple of decades, they don't need to weigh and balance arcane combinations of foods to get adequate protein. This myth goes back to Frances Moore Lappe's 1971 book "Diet for a Small Planet," in which she wrote that vegetarians needed to balance foods based on which amino acids they were lacking, creating "complementing proteins." For some time, there were even nutritionists who created complex charts to help vegetarians pick foods that went together, and concerned meat-free eaters made sure to combine beans and rice, or rice and corn, or grains and cheese … and it was an awful lot to remember! But we now know that combining types of protein isn't nearly as important as simply eating enough calories to maintain a healthy weight – Lappe even revised later editions of her book, admitting that she was wrong about the importance of food combining. No, if you eat enough food from different sources, you'll probably be getting plenty of protein.

If you want to get technical about it, health professionals recommend that you eat 0.8 grams of protein each day for every kilogram of body weight. A kilogram is about 2.2 pounds – so to find your recommended amount of daily protein, multiply your ideal weight in pounds by 0.8, then divide that number by 2.2.
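For readers who like to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python. The 0.8 g/kg figure and the 2.2 lb/kg conversion come from the text above; the function name and the 150 lb example are illustrative choices, and the divide-by-three shortcut is the one discussed next.

```python
def daily_protein_grams(ideal_weight_lb):
    """Recommended protein: 0.8 g per kg of ideal body weight (1 kg ~ 2.2 lb)."""
    return ideal_weight_lb * 0.8 / 2.2

# Example: an ideal weight of 150 lb
print(round(daily_protein_grams(150)))  # ~55 g of protein per day
print(round(150 / 3))                   # the quicker rule of thumb: 50 g
```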
If you prefer a quicker method, just divide your ideal weight by 3. But even then, you don't need to eat that much protein to stay healthy – keep in mind that recommendations like these always err on the side of safety, so the number you get will actually be higher than what you realistically need. Still, as a vegetarian you should strive to meet the recommended daily requirement of protein, because plant proteins are, unfortunately, somewhat less efficient at providing nutrients. For one thing, they're somewhat more difficult to digest than animal proteins, and they also contain lower amounts of some amino acids than meat. This is especially true if you get most of your protein from beans and grains – lacto-ovo vegetarians consume a similar amount of protein to omnivores, and vegans who eat a lot of soy products also get plenty of protein. It will always be true, however, that as a vegetarian you're eating less protein than people who eat both plant and animal proteins. A 1984 study found that a typical omnivore diet consists of between 15 and 17 percent protein, while lacto-ovo vegetarians generally eat about 13 percent protein and vegans around 11 to 12 percent. Despite needing somewhat more protein and eating less, the vegans still had an adequate amount of protein in their diets. So don't worry about doing anything fancy to meet your protein requirements – just eat from a variety of sources, get enough calories, and you'll be fine.

You will, in fact, be better than fine – because meat-eaters generally eat too much protein! Studies have shown that replacing animal protein with plant protein in your diet can help lower your blood cholesterol levels, decreasing your risk of heart attack. Most people are by now aware of the danger of saturated fats in red meat and their effect on blood cholesterol – people recovering from heart attacks are prescribed diets that replace the beef with skinless chicken or fish. That is a good move, to be sure, but these people could lower their cholesterol even further by switching to a vegetarian diet and reducing the amount of fat they eat. Plant proteins are lower in saturated fat than animal proteins and dairy products, and free of cholesterol.

There are also studies showing that eating slightly less protein than is optimal is far better than eating too much – and in this era of supersizing, most meat-eaters eat far more than they need. When we eat too much protein, it's up to our kidneys to filter out the excess. In the process, calcium is lost, increasing the risk of osteoporosis. Since plant-based diets are lower in total protein, vegetarian diets are better for your bones! Excess protein is also, understandably, hard on the kidneys and unhealthy for people with kidney disease. Plant proteins contain all the same amino acids as animal proteins, in differing proportions, and eating enough of them gives you all the protein you need. Studies have shown that people can meet their protein needs just by eating rice, wheat or potatoes, so long as they meet their caloric needs. By eating a variety of plant foods throughout the day and consuming enough calories, you'll be getting enough healthy plant protein. You'll have a lower risk of heart and kidney disease, and you'll be eating protein that's more efficiently produced besides, using fewer valuable resources than animal protein. It's what people call "win-win!"

CHAPTER 6 - Parasites - The Guests Who Came to Dinner

The intestinal tract is like a luxury hotel for parasites, bacteria and fungus.
It's warm, it's moist, and oxygen is limited due to all the waste matter that's packed in there. The colon – one of the most important organs in the body – is a dumping ground for waste, the place where the body deposits toxins and excess nutrients that could be harmful to the system. It's also where your body absorbs the nutrients it needs in order to function and survive – so the health of your colon is mighty important. Parasites in particular can be dangerous to the health of the colon, leaching nutrients from the body and emitting harmful toxins that can further weaken the colon's integrity. This can lead to a number of problems, from the mildly annoying to the deadly, such as: blood sugar imbalances, bloating, sugar cravings, fatigue, insomnia, weight gain or loss, teeth grinding or TMJ, diarrhea, itching, irritability, malnutrition, anemia and immune deficiency.

It's estimated that over 90 percent of Americans contract a parasite of some kind at some point in their lives. Parasites enter our bodies from a variety of sources, including pets, food and unwashed hands. You can pick them up from contact with pets and other people, or just by walking barefoot. Children are easily infected, being less aware of hygiene and playing with dirt and other possibly contaminated substances. But meat consumption is probably the biggest contributor to parasites flourishing in our bodies. Eating meat can cause constipation, and constipation creates the perfect environment for parasites to thrive. These unwelcome guests multiply in the haustra, the pouches in the colon where debris is stored. At their least offensive, they cause intestinal gas – but if you experience any of the symptoms above, even something as seemingly minor as chronically itchy skin, you could very well be harboring parasites in your colon.

There are several families of parasites – roundworms, tapeworms, flukes and single-cell parasites – and each group has its own unique subset of parasites that do different things to your body. Let's look at some of the more common intestinal parasites and what you can do to avoid them.

Giardia lamblia are protozoan parasites that infect humans via consumption of contaminated food and water. Giardia is commonly found in untreated water supplies and is one of the most common causes of diarrhea in travelers, but people sometimes pick it up while swimming in ponds and lakes. It is responsible for the condition known as giardiasis, which causes diarrhea, bloating, flatulence, abdominal cramping, weight loss, greasy stools and dehydration.

Toxoplasma gondii is another protozoan organism commonly found in the colon. Cats and kittens often carry it, and it can be transmitted to humans who handle cats – especially their feces. You can also be infected by breathing in the eggs. Toxoplasma is responsible for the disease toxoplasmosis, which causes chills, fever, headaches and fatigue. If a pregnant woman contracts toxoplasmosis, it can lead to miscarriage, or to birth defects such as blindness and mental retardation.

Roundworms are the most common intestinal parasite in the world, affecting over one billion people. They're also one of the largest parasites, and can grow to up to 30 inches in length. Humans can contract a roundworm infection by eating improperly cooked meat, or by handling dogs or cats infested with roundworms. Symptoms include loss of appetite, allergic reactions, coughing, abdominal pain, edema, sleep disorders and weight loss.
Hookworms are able to penetrate the human skin, and often enter the body through the feet when people walk barefoot through contaminated areas. They are found all over the world in warm, moist tropical areas, and can live in the intestines for up to fifteen years. A hookworm infection may cause symptoms such as itchy skin, blisters, nausea, dizziness, anorexia and weight loss.

Trichinella parasites, contracted through the consumption of raw or undercooked pork, can mimic the symptoms of up to fifty different diseases. Possible symptoms of infection include muscle soreness, fever, diarrhea, nausea, vomiting, edema of the lips and face, difficulty breathing, difficulty speaking, enlarged lymph glands and extreme dehydration.

Tapeworms are the largest colon parasites known to infect humans. Different types of tapeworms infect different animals – there are beef tapeworms, pork tapeworms, fish tapeworms and dog tapeworms. They can grow to several feet in length and live in the intestines for up to 25 years. Symptoms of a tapeworm infection are diarrhea, abdominal cramping, nausea and change of appetite.

Flukes, or trematodes, are small flatworms that can penetrate the human skin when an individual is swimming or bathing in contaminated water. Flukes can travel throughout the body and settle in the liver, lungs or intestines. Symptoms of a fluke infection include diarrhea, nausea, vomiting, abdominal pain and swelling.

CHAPTER 7 - The Happy Vegetarian - How a Meatless Diet Will Improve Your Health and Well-Being

Let's talk digestion. No, really, this is fascinating – and important. If you still have any lingering suspicions that humans are supposed to eat meat as a primary source of protein, you might want to take a look at the digestive tract of true carnivores. Meat is hard to digest, and it takes time for it to break down so that its nutrients can be used by the body. We've already talked about the differences between the teeth of carnivores (sharp and pointy, for tearing flesh) and the teeth of plant-eaters (blunt and flat, like ours), and that's where digestion begins – in the mouth. While you're chewing your food, the enzymes in your saliva begin the digestive process, the first step in breaking it down to its most usable form. After you swallow, the food moves on to your stomach, where it's dunked in a bath of hydrochloric acid that breaks it down further into a substance called chyme. It travels from there to the digestive tract, where it's slowly pushed through by contractions of the intestines called peristalsis. As it goes, tiny hairlike fingers called villi absorb most of the nutrients from the chyme. Finally, the almost completely digested food makes it to the colon, where water is absorbed from the chyme, along with some more vitamins and nutrients, before it exits through the rectum.

Meat – the protein that overstays its welcome

Here's where it gets interesting. Looking at a true carnivore – like, say, that lion with his big sharp teeth – we can see enormous differences in the digestive tract. Specifically, the lion's small intestine, where most of the nutrients are absorbed, is only about three times the length of his body. This means that the meat he eats moves through his system quickly, while it's still fresh. Humans, however, have much, much longer intestines, with food taking from 12 to 19 hours to pass through the digestive system.
This is ideal for plant-based foods, allowing our intestinal tracts to absorb every little bit of nutrient available, but it also means that when we eat meat, it's decaying in a warm, moist environment for a very long time. As it slowly rots in our guts, the decaying meat releases free radicals into the body. Free radicals are unstable oxygen molecules that are present to some degree in every body. When you hear advertisements trumpeting the importance of foods and supplements containing cancer-fighting "antioxidants," it's these free radicals that they're battling. Scientists only know a little about free radicals at this time, but what they do know is this: free radicals are connected with the aging process, and may play a part in heart disease and cancer. They are, essentially, the tiny mechanisms that break down our bodies so that, eventually, we die. While they'll always be a part of you – free radicals are built in to cells as part of their normal activities – you can do things to minimize their damage. Too much sunlight, in the form of excessive tanning, encourages the production of free radicals, so moderation matters even though a little sunlight is important each day (remember our buddy, vitamin D?). Using a good sunblock will not only help you avoid skin cancers, it'll help keep you younger in general. But the biggest thing you can do to limit the free radicals in your body is to avoid eating meat. For the 12 hours or more that meat is rotting away in your system, those tiny free-radical time bombs are multiplying.

Along with that, as meat protein breaks down it creates an enormous amount of nitrogen-based by-products like urea and ammonia, which can cause a build-up of uric acid. Too much uric acid in your body leads to stiff, sore joints – and, when it crystallizes, can cause gout and increased pain from arthritis. Carnivorous animals, interestingly, produce a substance called uricase, which breaks down uric acid. Humans don't produce uricase, though – another clue that we're not meant to be meat-eaters.

The raw and the cooked

When you eat meat, how much of it do you eat raw? Well, Mr. Lion eats his raw, while it's still brimming with enzymes that aid in digestion. Humans, however, cook their meat. In fact, we cook our meat to temperatures over 130 degrees Fahrenheit. This has the benefit of killing most disease-causing bacteria, but it also kills the enzymes in the meat. Whenever you eat dead food – food lacking the natural enzymes that help you digest it – your pancreas has to work extra hard to provide more so the food will break down for digestion. This puts strain on the pancreas that it wasn't originally designed to handle. Which isn't to say that you should eat raw meat, like the lion. But it's another consideration when we look at whether humans are designed to eat meat – when true carnivores eat raw, fresh meat, all the enzymes are present to help them garner the nutrients they need as it passes quickly through their short digestive tracts, and the nutrient-depleted waste is eliminated soon after. When we eat cooked meat, though, our bodies have to work extra hard to digest it, using precious energy needed for other purposes, overtaxing the pancreas, and creating free radicals as the dead flesh decays in our intestinal tract.
But when we eat a plant-based diet, we're feeding ourselves food that's abundant with living enzymes, which breaks down efficiently in our systems and provides extra energy by not demanding that our organs work overtime to use it.

CHAPTER 8 - "But I'm Not a Freak!" or, How to Cope in a Carnivorous World

Being new to vegetarianism, it's more than likely that you're the only person in your household going meatless. Whether you live with a partner, your parents, your children or roommates, sticking to your guns when everyone else is chowing down on meatloaf or cheeseburgers can be difficult. Even if they're supportive of your decision, you'll have to deal with them not understanding all the ins and outs of your new lifestyle – and if they're not supportive, you may find them ridiculing your food choices or even actively trying to sabotage you. The first thing you need to accept is that it's not your job to make them change to suit your way of eating, any more than it's theirs to turn you back into a meat-eater. If they want to change, that's great – you can share this book with them and you can all work on menu-planning together! But the best way to influence others in your household to adopt healthier habits is to be a good example – and not to turn them off by lecturing them!

Meal time at an omnivorous dinner table

What's the best way to deal with vegetarian needs when the rest of the family expects meat and potatoes for dinner? Should you just partake of the same meal as the others, only skipping the meat? Or should you make it clear that you have special needs, and eat a separate meal from everyone else? If you're the primary cook in your family, you may not want to prepare multiple entrees every night – and you might not want to cook a meat-based dish for others when you've given it up yourself. And if you're not the family chef, is it fair to ask them to go to extra effort for you, night after night? Only you know the dynamic in your home, so only you can figure out the answers to these questions. One thing is certain, however – you need to sit down and talk to the people you live with about your dietary needs and figure out the most agreeable way to make it work for everyone. If you can't stand to have meat around you at all, this is a huge issue. You may have to ask the others in your home to cook meat outside on a grill, and dedicate a special section of the refrigerator to meat storage, asking that it be wrapped in such a way that you don't have to look at it. If your feelings aren't that strong, you may simply want to negotiate who cooks what, and when – perhaps you can arrange to cook completely vegetarian meals for everyone three nights a week, and prepare your own entrée on the other nights. It all comes down to what your needs are, and the compromises you and your family are willing to make.

What about the children?

A little patience and negotiation can overcome issues between a meat-eater and a vegetarian, but what if you have children? It's a little like a "mixed marriage," where you have to decide in which religion you'll raise your children! Few areas can lead to disharmony in a relationship faster than disagreements over how to bring up the kids, so sit down and negotiate this one with your partner before you go any further. Raising your child to be vegetarian is certainly a healthful option – kids benefit from going meatless just like adults – and we'll discuss the how-to of that in Chapter 16.
The most important thing right now is to figure out how you'll handle meals at home with your kids. Some families eat nothing but meatless meals at home, but allow the children to eat meat at school and at their friends' houses. Others create meals that offer options for everyone in the family, so that the omnivores and the vegetarians can choose whatever they like. On the other hand, you may feel so strongly about your children becoming vegetarians that there may be no room for compromise. You'll need to lay this out for your partner in a kind, non-confrontational way – and, even then, it may lead to conflict. It may seem like it's "just food," but it's an important issue – if you can't easily negotiate it, there's no shame in working it out with a family counselor. Remember, though, that no matter what their age, people like to eat good food – so if you put together tasty, attractive menus full of flavor, color and a variety of textures, you'll find that the kids and adults are more willing to try vegetarian meals.

Going your own way – and letting them go theirs

If you're the main cook in the family, cooking multiple entrees for family dinners can be a huge pain. It's a lot of extra work, but it's also the easiest solution to making sure you get something to eat while keeping everyone happy. And it's also, you'll be surprised to learn, the best way to sway others to your side. Look at it this way – your omnivorous tablemates can enjoy the meat-based portion of the meal while you eat your vegetarian option, and all of you share the (meatless) side dishes. Of course, your vegetarian food is going to look so good and smell so delicious that they'll want to try it, too. So the next time, you just make the vegetarian dish – and chances are they'll never miss the meat-based one! Pretty soon, you'll be making vegetarian meals almost every day of the week … mission accomplished. You can also make your meal out of all the non-meat dishes on the table, which, if you plan well, should be enough to fill up your plate and your belly. Steamed vegetables, roasted red potatoes, a salad and a whole wheat roll make a fine meal – let the others have the pork chops, because you've got plenty to eat. This is a good approach when you find yourself at a Thanksgiving dinner, office party or dinner at a friend's house and you can't dictate the menu – just eat what you can, without making a big deal out of your vegetarian lifestyle.
Aquinas was famously inspired by the works of Aristotle. Aristotle had a rather odd idea, to modern ears, of what politics is about. Aristotle tells us: “The end of politics is the best of ends; and the main concern of politics is to engender a certain character in the citizens and to make them good and disposed to perform noble actions.”-- Ethica Nicomachea, I.9, 1099b30. Aristotle believed that virtue is the highest good--that only virtue truly leads to happiness. In order to seek the good of its citizens, then, Aristotle believed that the state should guide us to virtue. Taken at face value, Aristotle's dictum can lead in some very dangerous, totalitarian directions. After the horrors of the twentieth century, we certainly don't want some fascist or communist dictatorship micromanaging our lives, trying to remold us through great misery and pain into the state's vision of virtue. And yet we Catholics do find Aquinas agreeing with Aristotle. (Before reading what Aquinas has to say, a word about how he writes. It can be hard on the modern reader, but I think his arguments need to be read in full if we are to grapple with them. The first thing to keep in mind is that Aquinas writes in outline form. He begins his consideration of each philosophical question by imagining objections to his own position. Only after listing these objections does he "answer" with his own viewpoint, and then explain why the objections are wrong.) Here is Aquinas agreeing (as usual) with Aristotle, whom Aquinas always deferentially refers to as "The Philosopher": -- Summa Theologica, I-II, Q. 92, art. 1. Article 1. Whether an effect of law is to make men good? Objection 1. It seems that it is not an effect of law to make men good. For men are good through virtue, since virtue, as stated in Ethic. ii, 6 is "that which makes its subject good." But virtue is in man from God alone, because He it is Who "works it in us without us," as we stated above (Question 55, Article 4) in giving the definition of virtue. Therefore the law does not make men good. Objection 2. Further, Law does not profit a man unless he obeys it. But the very fact that a man obeys a law is due to his being good. Therefore in man goodness is presupposed to the law. Therefore the law does not make men good. Objection 3. Further, Law is ordained to the common good, as stated above (Question 90, Article 2). But some behave well in things regarding the community, who behave ill in things regarding themselves. Therefore it is not the business of the law to make men good. Objection 4. Further, some laws are tyrannical, as the Philosopher says (Polit. iii, 6). But a tyrant does not intend the good of his subjects, but considers only his own profit. Therefore law does not make men good. On the contrary, The Philosopher says (Ethic. ii, 1) that the "intention of every lawgiver is to make good citizens." I answer that, as stated above (90, 1, ad 2; A3,4), a law is nothing else than a dictate of reason in the ruler by whom his subjects are governed. Now the virtue of any subordinate thing consists in its being well subordinated to that by which it is regulated: thus we see that the virtue of the irascible and concupiscible faculties consists in their being obedient to reason; and accordingly "the virtue of every subject consists in his being well subjected to his ruler," as the Philosopher says (Polit. i). But every law aims at being obeyed by those who are subject to it. 
Consequently it is evident that the proper effect of law is to lead its subjects to their proper virtue: and since virtue is "that which makes its subject good," it follows that the proper effect of law is to make those to whom it is given, good, either simply or in some particular respect. For if the intention of the lawgiver is fixed on true good, which is the common good regulated according to Divine justice, it follows that the effect of the law is to make men good simply. If, however, the intention of the lawgiver is fixed on that which is not simply good, but useful or pleasurable to himself, or in opposition to Divine justice; then the law does not make men good simply, but in respect to that particular government. In this way good is found even in things that are bad of themselves: thus a man is called a good robber, because he works in a way that is adapted to his end. Reply to Objection 1. Virtue is twofold, as explained above (Question 63, Article 2), viz. acquired and infused. Now the fact of being accustomed to an action contributes to both, but in different ways; for it causes the acquired virtue; while it disposes to infused virtue, and preserves and fosters it when it already exists. And since law is given for the purpose of directing human acts; as far as human acts conduce to virtue, so far does law make men good. Wherefore the Philosopher says in the second book of the Politics (Ethic. ii) that "lawgivers make men good by habituating them to good works." Reply to Objection 2. It is not always through perfect goodness of virtue that one obeys the law, but sometimes it is through fear of punishment, and sometimes from the mere dictates of reason, which is a beginning of virtue, as stated above (Question 63, Article 1). Reply to Objection 3. The goodness of any part is considered in comparison with the whole; hence Augustine says (Confess. iii) that "unseemly is the part that harmonizes not with the whole." Since then every man is a part of the state, it is impossible that a man be good, unless he be well proportionate to the common good: nor can the whole be well consistent unless its parts be proportionate to it. Consequently the common good of the state cannot flourish, unless the citizens be virtuous, at least those whose business it is to govern. But it is enough for the good of the community, that the other citizens be so far virtuous that they obey the commands of their rulers. Hence the Philosopher says (Polit. ii, 2) that "the virtue of a sovereign is the same as that of a good man, but the virtue of any common citizen is not the same as that of a good man." Reply to Objection 4. A tyrannical law, through not being according to reason, is not a law, absolutely speaking, but rather a perversion of law; and yet in so far as it is something in the nature of a law, it aims at the citizens' being good. For all it has in the nature of a law consists in its being an ordinance made by a superior to his subjects, and aims at being obeyed by them, which is to make them good, not simply, but with respect to that particular government. Aquinas endorses the idea that it is legitimate for the state to lead us to virtue. Further, Aquinas tells us that the state may legitimately compel us to virtue: Article 1. Whether it was useful for laws to be framed by men?-- Summa Theologica, I-II, Q. 95, art. 1. Objection 1. It would seem that it was not useful for laws to be framed by men. 
Because the purpose of every law is that man be made good thereby, as stated above (Question 92, Article 1). But men are more to be induced to be good willingly by means of admonitions, than against their will, by means of laws. Therefore there was no need to frame laws.... On the contrary, Isidore says (Etym. v, 20): "Laws were made that in fear thereof human audacity might be held in check, that innocence might be safeguarded in the midst of wickedness, and that the dread of punishment might prevent the wicked from doing harm." But these things are most necessary to mankind. Therefore it was necessary that human laws should be made. I answer that, As stated above (63, 1; 94, 3), man has a natural aptitude for virtue; but the perfection of virtue must be acquired by man by means of some kind of training. Thus we observe that man is helped by industry in his necessities, for instance, in food and clothing. Certain beginnings of these he has from nature, viz. his reason and his hands; but he has not the full complement, as other animals have, to whom nature has given sufficiency of clothing and food. Now it is difficult to see how man could suffice for himself in the matter of this training: since the perfection of virtue consists chiefly in withdrawing man from undue pleasures, to which above all man is inclined, and especially the young, who are more capable of being trained. Consequently a man needs to receive this training from another, whereby to arrive at the perfection of virtue. And as to those young people who are inclined to acts of virtue, by their good natural disposition, or by custom, or rather by the gift of God, paternal training suffices, which is by admonitions. But since some are found to be depraved, and prone to vice, and not easily amenable to words, it was necessary for such to be restrained from evil by force and fear, in order that, at least, they might desist from evil-doing, and leave others in peace, and that they themselves, by being habituated in this way, might be brought to do willingly what hitherto they did from fear, and thus become virtuous. Now this kind of training, which compels through fear of punishment, is the discipline of laws. Therefore in order that man might have peace and virtue, it was necessary for laws to be framed: for, as the Philosopher says (Polit. i, 2), "as man is the most noble of animals if he be perfect in virtue, so is he the lowest of all, if he be severed from law and righteousness"; because man can use his reason to devise means of satisfying his lusts and evil passions, which other animals are unable to do. Reply to Objection 1. Men who are well disposed are led willingly to virtue by being admonished better than by coercion: but men who are evilly disposed are not led to virtue unless they are compelled. So does this mean that human law should control every aspect of our lives? Should Catholics seek a totalitarian state that micromanages us into virtue? No. Article 2. Whether it belongs to the human law to repress all vices?-- Summa Theologica, I-II, Q. 96, art. 2. Objection 1. It would seem that it belongs to human law to repress all vices. For Isidore says (Etym. v, 20) that "laws were made in order that, in fear thereof, man's audacity might be held in check." But it would not be held in check sufficiently, unless all evils were repressed by law. Therefore human laws should repress all evils. Objection 2. Further, the intention of the lawgiver is to make the citizens virtuous. 
But a man cannot be virtuous unless he forbear from all kinds of vice. Therefore it belongs to human law to repress all vices. Objection 3. Further, human law is derived from the natural law, as stated above (Question 95, Article 2). But all vices are contrary to the law of nature. Therefore human law should repress all vices. On the contrary, We read in De Lib. Arb. i, 5: "It seems to me that the law which is written for the governing of the people rightly permits these things, and that Divine providence punishes them." But Divine providence punishes nothing but vices. Therefore human law rightly allows some vices, by not repressing them. I answer that, As stated above (90, A1,2), law is framed as a rule or measure of human acts. Now a measure should be homogeneous with that which it measures, as stated in Metaph. x, text. 3,4, since different things are measured by different measures. Wherefore laws imposed on men should also be in keeping with their condition, for, as Isidore says (Etym. v, 21), law should be "possible both according to nature, and according to the customs of the country." Now possibility or faculty of action is due to an interior habit or disposition: since the same thing is not possible to one who has not a virtuous habit, as is possible to one who has. Thus the same is not possible to a child as to a full-grown man: for which reason the law for children is not the same as for adults, since many things are permitted to children, which in an adult are punished by law or at any rate are open to blame. In like manner many things are permissible to men not perfect in virtue, which would be intolerable in a virtuous man. Now human law is framed for a number of human beings, the majority of whom are not perfect in virtue. Wherefore human laws do not forbid all vices, from which the virtuous abstain, but only the more grievous vices, from which it is possible for the majority to abstain; and chiefly those that are to the hurt of others, without the prohibition of which human society could not be maintained: thus human law prohibits murder, theft and such like. Reply to Objection 1. Audacity seems to refer to the assailing of others. Consequently it belongs to those sins chiefly whereby one's neighbor is injured: and these sins are forbidden by human law, as stated. Reply to Objection 2. The purpose of human law is to lead men to virtue, not suddenly, but gradually. Wherefore it does not lay upon the multitude of imperfect men the burdens of those who are already virtuous, viz. that they should abstain from all evil. Otherwise these imperfect ones, being unable to bear such precepts, would break out into yet greater evils: thus it is written (Proverbs 30:33): "He that violently bloweth his nose, bringeth out blood"; and (Matthew 9:17) that if "new wine," i.e. precepts of a perfect life, "is put into old bottles," i.e. into imperfect men, "the bottles break, and the wine runneth out," i.e. the precepts are despised, and those men, from contempt, break into evils worse still. Reply to Objection 3. The natural law is a participation in us of the eternal law: while human law falls short of the eternal law. Now Augustine says (De Lib. Arb. i, 5): "The law which is framed for the government of states, allows and leaves unpunished many things that are punished by Divine providence. Nor, if this law does not attempt to do everything, is this a reason why it should be blamed for what it does." Wherefore, too, human law does not prohibit everything that is forbidden by the natural law.
Aquinas' words above in the "Reply to Objection 2" are the core of this post, and of much of my own attempt at political thought. The state may legitimately lead us to virtue, but in prudence, it must lead us gradually. This will often mean allowing us to behave viciously (i.e., with vice), if we are unable to bear compulsion to greater virtue. The core insight of a Burkean conservatism is that human nature is not amenable to utopian schemes to refashion it with the brutal tools of the state. Modern liberals, conservatives, and libertarians, all Whigs in different ways, draw from this the lesson that we have a right to be let alone by the state. While conceding that it is often imprudent for the state to attempt to mold us to virtue, Aquinas does not concede, however, a right to act as viciously as we please without state interference: Article 3. Whether human law prescribes acts of all the virtues?-- Summa Theologica, I-II, Q. 96, art. 3. Objection 1. It would seem that human law does not prescribe acts of all the virtues. For vicious acts are contrary to acts of virtue. But human law does not prohibit all vices, as stated above (Article 2). Therefore neither does it prescribe all acts of virtue. Objection 2. Further, a virtuous act proceeds from a virtue. But virtue is the end of law; so that whatever is from a virtue, cannot come under a precept of law. Therefore human law does not prescribe all acts of virtue. Objection 3. Further, law is ordained to the common good, as stated above (Question 90, Article 2). But some acts of virtue are ordained, not to the common good, but to private good. Therefore the law does not prescribe all acts of virtue. On the contrary, The Philosopher says (Ethic. v, 1) that the law "prescribes the performance of the acts of a brave man . . . and the acts of the temperate man . . . and the acts of the meek man: and in like manner as regards the other virtues and vices, prescribing the former, forbidding the latter." I answer that, The species of virtues are distinguished by their objects, as explained above (54, 2; 60, 1; 62, 2). Now all the objects of virtues can be referred either to the private good of an individual, or to the common good of the multitude: thus matters of fortitude may be achieved either for the safety of the state, or for upholding the rights of a friend, and in like manner with the other virtues. But law, as stated above (Question 90, Article 2) is ordained to the common good. Wherefore there is no virtue whose acts cannot be prescribed by the law. Nevertheless human law does not prescribe concerning all the acts of every virtue: but only in regard to those that are ordainable to the common good--either immediately, as when certain things are done directly for the common good--or mediately, as when a lawgiver prescribes certain things pertaining to good order, whereby the citizens are directed in the upholding of the common good of justice and peace. Reply to Objection 1. Human law does not forbid all vicious acts, by the obligation of a precept, as neither does it prescribe all acts of virtue. But it forbids certain acts of each vice, just as it prescribes some acts of each virtue. Reply to Objection 2. An act is said to be an act of virtue in two ways. First, from the fact that a man does something virtuous; thus the act of justice is to do what is right, and an act of fortitude is to do brave things: and in this way law prescribes certain acts of virtue. 
Secondly an act of virtue is when a man does a virtuous thing in a way in which a virtuous man does it. Such an act always proceeds from virtue: and it does not come under a precept of law, but is the end at which every lawgiver aims. Reply to Objection 3. There is no virtue whose act is not ordainable to the common good, as stated above, either mediately or immediately. Aquinas' vision tempers Aristotle by giving us the important Burkean insight that people are not always reformable by the state, and the state will do more harm than good if it fails to recognize this. However, Aquinas tempers modern Whiggish liberal and libertarian philosophies because he does not speak in terms of the state's leaving us alone as a matter of our "rights," rather than of political prudence. (Which is not to say that people do not have rights vis-à-vis the state. For example, in the 1965 declaration Dignitatis Humanae, the council fathers of Vatican II endorse and defend the human right to the free exercise of religion.) In contrast to modern thinkers, Aquinas agrees with Aristotle that the state may and should legitimately lead us to virtue. Where Aquinas saves us from totalitarianism, however, is tempering this truth with the recognition that sometimes the best way to lead us to virtue is indeed to leave us alone. Totalitarian micromanagement of individual lives by the state does not lead to virtue. Instead, it "bringeth out blood."
Over the last decades, fluid therapy has been considered the first line of therapy during resuscitation of hemodynamically unstable patients. Although fluid therapy increases cardiac output (CO) and improves blood pressure, volume overload can lead to pulmonary and interstitial edema, which increase morbidity and mortality. It has been found that only about half of critically ill patients are responsive to fluid therapy; thus, fluid administration can be harmful when it is hemodynamically unnecessary. Accurate prediction of fluid responsiveness is crucial, as it has a great impact on patient outcome: it is important to be able to predict an increase in CO before fluid therapy is given. Clinical signs of volume depletion (e.g. heart rate, blood pressure, skin turgor and urine output) are routinely used but have limited sensitivity and specificity.

Numerous hemodynamic variables have been studied to predict fluid responsiveness. Static variables (e.g. central venous pressure, pulmonary capillary wedge pressure, inferior vena cava diameter and left ventricular end-diastolic volume) depend mainly on preload estimation. These static variables are not reliable because Starling curves differ between patients, so at a given preload there may be a positive response (preload dependency) or no response (preload independency) (Figure 1). In contrast, dynamic variables (e.g. PPV, stroke volume variation, the pleth variability index (PVI) and aortic blood flow measured by Doppler) are more reliable predictors of fluid responsiveness. These dynamic variables depend on heart-lung interaction during mechanical ventilation: mechanical ventilation produces cyclic changes in intrathoracic and transpulmonary pressures, which affect left ventricular preload and so produce cyclic changes in left ventricular stroke volume and systemic arterial pressure, but only in preload-dependent patients.

Figure 1. Prediction of patients' position on the Starling curve through arterial waveform analysis (PP: pulse pressure).

PPV is calculated as the ratio between the maximum difference in pulse pressure (PP) observed over three respiratory cycles and the average of these two PPs, as follows:

PPV (%) = 100 × (PPmax − PPmin) / [(PPmax + PPmin) / 2]

Recently, PPV and stroke volume variation (SVV) indices can be continuously calculated and monitored by bedside devices; the IntelliVue MP is a monitor that can continuously display PPV. Michard et al. concluded that a PPV of more than 12% - 13% predicted fluid responsiveness in patients with septic shock or acute respiratory distress syndrome (ARDS) (Figure 2).

2. Patients and methods

The study was conducted in the theatre of the general surgery department, Faculty of Medicine, Ain Shams University, during the period from September 2014 to February 2016. After approval by the local ethics committee and with informed written consent from the patients, 60 adult patients of both sexes, aged between 20 and 60 years, of ASA I/II physical status and undergoing open major abdominal surgery under general anesthesia, were enrolled in the study. Exclusion criteria were:
1) Age < 20 years or > 60 years.
2) Valvular heart disease or intra-cardiac shunts.
4) Pregnancy or increased intra-abdominal pressure.
5) Chronic obstructive pulmonary disease.
6) Evidence of right ventricular failure.

Upon arrival in the operating room, 1 mg midazolam was given intravenously to all patients as premedication. Routine intraoperative monitoring of vital data (i.e. 5-lead ECG, arterial blood pressure, heart rate, capnography and pulse oximetry) was done using the Philips MP 50 IntelliVue monitoring system (Philips Medical Systems, Böblingen, Germany). After local lidocaine infiltration, radial artery cannulation was done with a 20 G catheter.

Figure 2. Method of calculation of PPV.

The pressure transducer was leveled at the mid-axillary line and zeroed to atmospheric pressure. A series of hemodynamic variables were measured from the indwelling radial artery catheter, including heart rate (beats/minute), systolic and diastolic arterial pressure (mmHg), and pulse pressure variation (PPV). Anesthesia was induced with propofol (2 mg/kg), fentanyl (2 µg/kg) and atracurium (0.5 mg/kg), followed by endotracheal intubation, and was maintained with sevoflurane 2% (1 minimal alveolar concentration) in an oxygen/air mixture; bolus fentanyl 1 µg/kg was given if required. Mechanical ventilation was maintained in volume-control mode with a tidal volume of 8 ml/kg, a respiratory frequency of 12 breaths/min and positive end-expiratory pressure (PEEP) of 0 cm H2O, adjusted to maintain an end-tidal CO2 of 35 - 40 mmHg. Following induction of anesthesia, a triple-lumen central venous catheter (Certofix® Trio, B. Braun, Melsungen, Germany) was inserted in the right internal jugular vein for central venous pressure (CVP) monitoring.

Patients who met the inclusion criteria were randomly allocated by a computer-generated program to one of two groups of 30 patients each: a CVP group (control group) and a PPV group. Lactated Ringer's solution and normal saline (crystalloids) were used alternately to maintain CVP in the normal range (5 - 10 cm H2O) and to maintain PPV < 13%. If bleeding exceeded the allowable blood loss, colloid and blood transfusion were given to maintain a hemoglobin level of around 10 g%, depending on the patient's preoperative hemoglobin. Systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), CVP (normal range 5 - 10 cm H2O) and PPV (normal range 10% - 13%) were measured at baseline after intubation, then every 10 min during the first hour of surgery, then every 15 min until the end of surgery. The total volumes of fluid (crystalloid and colloid) and blood given during surgery were recorded. In case of persistent hypotension (i.e. MAP < 65 mmHg) despite normal CVP and PPV, norepinephrine 0.1 - 0.7 µg/kg/min was given.

3. Sample size calculation

Group sample sizes of 28 patients per group achieve 81% power to detect a difference of 500 ml in blood loss between the groups, assuming a mean blood loss of 2 liters in the control group with group standard deviations of 650 ml, at a significance level (alpha) of 0.05, using a two-sided two-sample t-test. Thirty patients per group were included to allow for dropouts. A sketch of this power calculation appears at the end of this section.

4. Statistical analysis

Data were analyzed using the Statistical Package for Social Sciences (SPSS 18.0 for Windows; SPSS, Chicago, IL, USA). Normally distributed numerical data are presented as mean ± SD and were compared between groups using the independent Student's t-test; data not normally distributed were compared using the Mann-Whitney test and are presented as median (IQR); categorical variables were analyzed using the χ2 test or Fisher's exact test and are presented as number (%). All p values are two-sided, and p < 0.05 is considered statistically significant (Figure 3).
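As promised above, here is a minimal sketch of the stated sample-size calculation in Python. It uses the normal approximation to the two-sided two-sample t-test, which reproduces the 28-per-group figure for the numbers given in the text; the function name is our own, and an exact t-based calculation (as in dedicated power software) iterates on the degrees of freedom and can yield a slightly different n.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(diff, sd, alpha=0.05, power=0.81):
    """Approximate n per group for a two-sided two-sample t-test:
    n = 2 * ((z_alpha + z_beta) * sd / diff) ** 2 (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # ~0.88 for 81% power
    return ceil(2 * ((z_alpha + z_beta) * sd / diff) ** 2)

# Figures from the text: a 500 ml difference in blood loss, SD = 650 ml
print(n_per_group(diff=500, sd=650))  # -> 28 patients per group
```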
4. Statistical Analysis
Data were analyzed using SPSS 18.0 for Windows (SPSS, Chicago, IL, USA). Normally distributed numerical data are presented as mean ± SD, and differences between groups were compared using the independent Student's t-test. Data that were not normally distributed are presented as median (IQR) and were compared using the Mann-Whitney test. Categorical variables are presented as number (%) and were analyzed using the chi-square test or Fisher's exact test. All p values are two-sided, and p < 0.05 was considered statistically significant.

5. Results
Figure 3 shows the number of participants at each stage of the study. Comparison of demographic data between the two groups showed no significant difference: the two groups were comparable in general characteristics, including age, sex, body weight, ASA status, and duration and type of surgery (p > 0.05) (Table 1).

Figure 3. Flowchart showing the number of participants in each stage.
Table 1. Demographic data comparison between the two groups. Values are expressed as mean ± SD or number of patients. p > 0.05 was considered statistically non-significant.

There was no difference between the two groups in preoperative hemoglobin level or the amount of intraoperative blood loss, and the amount of crystalloid given intraoperatively was comparable in both groups (p > 0.05) (Table 2). Group P patients required more intraoperative colloid (533.33 ± 319.84 ml) than patients in group C, who required only 239.73 ± 166.67 ml (p < 0.001) (Table 2). The amounts of intraoperative blood and fresh frozen plasma given were higher in group P than in group C: intraoperatively, group P patients received 1.97 ± 0.81 units of packed RBCs and 1.77 ± 0.73 units of FFP, compared with 1.1 ± 1.09 units of packed RBCs and 1.1 ± 1.03 units of FFP in group C (p < 0.05) (Table 2). Figures 4-6 show comparisons between the two groups for HR, SBP, and DBP. Patients in the CVP group were more tachycardic and hypotensive than patients in group P (p < 0.05), although no patient in either group required vasopressor support.

Table 2. Intraoperative fluid management in the two groups. Values are expressed as mean ± SD. *p < 0.05 was considered statistically significant.
Figure 4. Heart rate comparison between the two groups. Data are expressed as mean ± SD. p < 0.05 is considered statistically significant.
Figure 5. Systolic blood pressure (SBP) comparison between the two groups. Data are expressed as mean ± SD. p < 0.05 is considered statistically significant.
Figure 6. Diastolic blood pressure (DBP) comparison between the two groups. Data are expressed as mean ± SD. p < 0.05 is considered statistically significant.

6. Discussion
In this study, patients in the PPV group received more intraoperative crystalloid, but the difference between the two groups was not significant. Colloid, blood, and FFP transfusion were significantly greater in the PPV group, while intraoperative blood loss was comparable between the two groups. Vital signs (HR, SBP, DBP) were monitored and were more stable in the PPV group, and no patient in either group required a norepinephrine infusion.

Assessment of volume requirements during major operations (e.g., the Whipple operation) may be difficult because of the use of irrigation fluids, blood loss beneath the drapes, and the inaccuracy of the usual index of cardiac preload (i.e., CVP). Fluid management should therefore depend on hemodynamics, with volume adjustment guided by PPV. Pulse pressure variation measured by the IntelliVue MP system can predict fluid responsiveness in mechanically ventilated patients undergoing major operations, providing a useful guide for fluid management in these patients. Several clinical studies over the past years have demonstrated the effectiveness of PPV monitoring for intraoperative and postoperative fluid guidance and its ability to reduce the incidence of perioperative complications and the length of hospital stay. The IntelliVue MP monitoring system can provide continuous automatic monitoring of PPV through analysis of the arterial pressure waveform without the need for special devices. Other methods of PPV monitoring, such as the PiCCO and LiDCO systems, can be used, but they require dedicated devices.
Kim et al. investigated the use of PPV in patients undergoing cardiac surgery and found good clinical applicability. Fischer et al. demonstrated that the accuracy of PPV in predicting fluid requirements may be compromised in open-chest operations, owing to inaccurate reflection of phasic changes in preload and stroke volume, decreased cyclic changes in intrathoracic pressure, and increased aortic impedance. Despite the strong predictive value of PPV, Cannesson and colleagues demonstrated that PPV may be inaccurate and inconclusive in predicting fluid responsiveness in 25% of patients under general anesthesia. Biais and colleagues likewise used the term "gray zone" in investigating the clinical use of PPV. In practice, this concept defines three zones: a zone with a positive response, a zone with a negative response, and a third zone of uncertainty (the gray zone). The gray zone arises in patients receiving small tidal volumes or those with a low heart rate/respiratory rate ratio. Gouvea et al. investigated the use of PPV during orthotopic liver transplantation and found that PPV was unable to predict fluid responsiveness during liver transplantation because of decreased systemic vascular resistance (SVR), decreased cardiac function, and surgical stimulation. A recent study by Sundaram et al. investigated the use of PPV for intraoperative fluid management in adult patients undergoing craniotomy in the supine and lateral positions; they concluded that PPV monitoring resulted in better postoperative hemodynamic stability, avoided central-line-associated complications, and reduced additional cost.

There are several limitations to the use of PPV as a reliable predictor of fluid responsiveness. Tidal volume must be at least 8 ml/kg with controlled ventilation at a fixed rate, because low-tidal-volume ventilation cannot produce cyclic changes in intrathoracic pressure large enough to induce preload variations. The cardiac rhythm must be sinus, without bradycardia or arrhythmia, and PPV measurement is also influenced by the presence of spontaneous breathing. PEEP should be avoided, as it enhances cyclic changes in pleural pressure and hence increases PPV. In addition, drugs such as beta-blockers, norepinephrine, and vasodilators interfere with PPV accuracy: norepinephrine decreases PPV, while vasodilators increase it.

Fluid resuscitation guided by CVP may lead to inaccurate volume replacement. CVP-guided fluid therapy is effective only when patients are on the ascending portion of the Frank-Starling curve; once the left ventricle reaches the flat portion of the curve, further fluid intake will increase tissue edema and tissue dysoxia.

There were some limitations to our study. First, the sample size of the studied groups was small. Second, we did not measure cardiac output, which is the best method to differentiate fluid responders from non-responders. Third, we did not continue monitoring the patients during the postoperative period. Finally, colloid (hydroxyethyl starch) was used during fluid resuscitation despite evidence suggesting an increased risk of nephropathy and coagulopathy associated with starch products.

7. Conclusion
PPV, when combined with CVP, can be a good guide for monitoring fluid therapy in patients undergoing major abdominal operations.

References
Brandt, S., Regueira, T., Bracht, H., Porta, F., Djafarzadeh, S., Takala, J., et al. (2009) Effect of Fluid Resuscitation on Mortality and Organ Function in Experimental Sepsis Models. Critical Care, 13, R186.
Verdant, C. and De Backer, D. (2005) How Monitoring of the Microcirculation May Help Us at the Bedside. Current Opinion in Critical Care, 11, 240-244.
Sakka, S.G., Bredle, D.L., Reinhart, K. and Meier-Hellmann, A. (1999) Comparison between Intrathoracic Blood Volume and Cardiac Filling Pressures in the Early Phase of Hemodynamic Instability of Patients with Sepsis or Septic Shock. Journal of Critical Care, 14, 78-83.
Pinsky, M.R. (2004) Using Ventilation-Induced Aortic Pressure and Flow Variation to Diagnose Preload Responsiveness. Intensive Care Medicine, 30, 1008-1010.
Vieillard-Baron, A., Chergui, K., Augarde, R., Prin, S., Page, B., Beauchet, A., et al. (2003) Cyclic Changes in Arterial Pulse during Respiratory Support Revisited by Doppler Echocardiography. American Journal of Respiratory and Critical Care Medicine, 168, 671-676.
Aboy, M., McNames, J., Thong, T., Phillips, C.R., Ellenby, M.S. and Goldstein, B. (2004) A Novel Algorithm to Estimate the Pulse Pressure Variation Index Delta PP. IEEE Transactions on Biomedical Engineering, 51, 2198-2203.
Cannesson, M., Slieker, J., Desebbe, O., Bauer, C., Chiari, P., Henaine, R., et al. (2008) The Ability of a Novel Algorithm for Automatic Estimation of the Respiratory Variations in Arterial Pulse Pressure to Monitor Fluid Responsiveness in the Operating Room. Anesthesia & Analgesia, 106, 1195-1200.
Derichard, A., Robin, E., Tavernier, B., Costecalde, M., Fleyfel, M., Onimus, J., et al. (2009) Automated Pulse Pressure and Stroke Volume Variations from Radial Artery: Evaluation during Major Abdominal Surgery. British Journal of Anaesthesia, 103, 678-684.
Michard, F., Boussat, S., Chemla, D., Anguel, N., Mercat, A., Lecarpentier, Y., et al. (2000) Relation between Respiratory Changes in Arterial Pulse Pressure and Fluid Responsiveness in Septic Patients with Acute Circulatory Failure. American Journal of Respiratory and Critical Care Medicine, 162, 134-138.
Cannesson, M., Musard, H., Desebbe, O., Boucau, C., Simon, R., Henaine, R. and Lehot, J.J. (2009) The Ability of Stroke Volume Variations Obtained with Vigileo/FloTrac System to Monitor Fluid Responsiveness in Mechanically Ventilated Patients. Anesthesia & Analgesia, 108, 513-517.
Cannesson, M., de Backer, D. and Hofer, C.K. (2011) Using Arterial Pressure Waveform Analysis for the Assessment of Fluid Responsiveness. Expert Review of Medical Devices, 8, 635-646.
Kim, I.B., Bellomo, R., Fealy, N. and Baldwin, I. (2011) A Pilot Study of the Epidemiology and Associations of Pulse Pressure Variation in Cardiac Surgery Patients. Critical Care and Resuscitation, 13, 17-23.
Fischer, M.O., Coucoravas, J., Truong, J., Zhu, L., Gérard, J.L., Hanouz, J.L. and Fellahi, J.L. (2013) Assessment of Changes in Cardiac Index and Fluid Responsiveness: A Comparison of Nexfin and Transpulmonary Thermodilution. Acta Anaesthesiologica Scandinavica, 57, 704-712.
Cannesson, M., Le Manach, Y., Hofer, C., Goarin, J.P., Lehot, J.J., Vallet, B., et al. (2011) Assessing the Diagnostic Accuracy of Pulse Pressure Variations for the Prediction of Fluid Responsiveness: A "Gray Zone" Approach. Anesthesiology, 115, 231-241.
Biais, M., Ehrmann, S., Mari, A., Conte, B., Mahjoub, Y., Desebbe, O., et al. (2014) Clinical Relevance of Pulse Pressure Variations for Predicting Fluid Responsiveness in Mechanically Ventilated Intensive Care Unit Patients: The Grey Zone Approach. Critical Care, 18, 587.
Gouvea, G., Diaz, R., Auler, L., Toledo, R. and Martinho, J.M. (2009) Evaluation of the Pulse Pressure Variation Index as a Predictor of Fluid Responsiveness during Orthotopic Liver Transplantation. British Journal of Anaesthesia, 103, 238-243.
Sundaram, S.C., Salins, S.R., Kumar, A.N. and Korula, G. (2016) Intra-Operative Fluid Management in Adult Neurosurgical Patients Undergoing Intracranial Tumour Surgery: Randomized Control Trial Comparing Pulse Pressure Variance (PPV) and Central Venous Pressure (CVP). Journal of Clinical and Diagnostic Research, 10, UC01-UC05.
Oliveira-Costa, C.D., Friedman, G., Vieira, S.R. and Fialkow, L. (2012) Pulse Pressure Variation and Prediction of Fluid Responsiveness in Patients Ventilated with Low Tidal Volumes. Clinics (Sao Paulo), 67, 773-778.
Freitas, F.G., Bafi, A.T., Nascente, A.P., Assuncao, M., Mazza, B., Azevedo, L.C., et al. (2013) Predictive Value of Pulse Pressure Variation for Fluid Responsiveness in Septic Patients Using Lung-Protective Ventilation Strategies. British Journal of Anaesthesia, 110, 402-408.
Marik, P.E., Cavallazzi, R., Vasu, T. and Hirani, A. (2009) Dynamic Changes in Arterial Waveform Derived Variables and Fluid Responsiveness in Mechanically Ventilated Patients: A Systematic Review of the Literature. Critical Care Medicine, 37, 2642-2647.
Myburgh, J.A., Finfer, S., Bellomo, R., Billot, L., Cass, A., Gattas, D., et al. (2012) Hydroxyethyl Starch or Saline for Fluid Resuscitation in Intensive Care. New England Journal of Medicine, 367, 1901-1911.
A Look at the U.S. Electorate in 2020

The U.S. electorate is the most diverse it has ever been, and there are several demographic groups whose support is essential for Democrats and Republicans if they want to win the general election. The non-white and foreign-born share of the electorate is growing, and as white deaths now outnumber white births, nonwhite citizens account for a third of eligible voters, their largest share being Latinx. The overall age of the electorate is also becoming younger. This is evident in what is now the largest voting bloc: Millennials (born between 1981 and 1996) and Gen Z (under 25 years of age). Only roughly half of them are white, and a majority are female. This demographic shift has significant political consequences. Both major political parties have recognized these trends and have launched massive outreach campaigns to all of these groups in the past several election cycles. Democrats and Republicans alike need to capture at least a decent share of each of these demographic groups' votes to carry a win. To do so, they must acknowledge and work on the issues these groups care most about.

Women – The Political Shapers

Gender realignment of American politics is the biggest change in party affiliation and has gained momentum since 2016. Women make up the largest share of eligible voters, and as such are a key demographic target group for both Democrats and Republicans. Among white women, party identification differs widely between college-educated and non-college-educated women: women without a college education tend to lean more conservative than women with degrees. The same applies to women living in rural versus urban areas. Women of color are the exception; the overwhelming majority of them lean liberal, regardless of age, education level, or location. According to a recent Harvard online survey, over 64% of women lean Democratic.

Issues women care about have become key strategic campaign issues for both parties in 2020. For example, during the coronavirus pandemic, women have been disproportionately affected by unemployment and childcare issues compared to men. An analysis by the Institute for Women's Policy Research suggests the woman most impacted by COVID-19 is a single, urban, working-class mother of color. This woman would also be most affected by issues of racial, ethnic, economic, and health inequities, inner-city violence, and gun control. Although the female electorate is also becoming younger and more diverse, the average female voter is still white, middle-aged, and suburban, and U.S. politics predominantly addresses her policy objectives.

Women are also challenging the dominance of men in our political elections and institutions. More women than ever are running for office in 2020, especially young women, and there will be more all-woman congressional contests on the ballot in November 2020 than ever before. Women have already secured both Democratic and Republican major-party nominations in 51 contests for the U.S. House and Senate, and that number will rise before Election Day.

People of Color – The Political Clients

Numbers from the U.S. Census Bureau indicate that about one in ten people eligible to vote in this year's U.S. presidential election are naturalized citizens, and most (61%) of these 23 million eligible voters live in just five states: California, New York, Florida, Texas, and New Jersey.
According to Pew Research, most of these eligible voters are either Hispanic or Asian, though they hail from varied countries across the globe. Immigrants from Mexico make up the single largest group, at 16% of foreign-born voters, and two-thirds have lived in the U.S. for more than 20 years. This group in itself is an important voting bloc. What gives it more momentum, however, is the number of U.S.-born descendants who, in second, third, or subsequent generations, are fully American yet identify with their ethnic heritage. Another emerging voting bloc are Asian Americans, specifically Desi Americans (South Asian Americans born or raised in the United States). Still much smaller than other communities, what sets them apart is that they are the highest-educated racial or ethnic group in the U.S. and have the highest median household income of these groups. As such, their influence and voting power have been noted by both Democrats and Republicans, who are actively wooing this electorate.

For U.S. voters overall, issues surrounding immigration policy have steadily gained in importance, and immigration has become a priority the public feels Congress and the President should address. This is especially true for Latinx voters under the current administration. Numerous policy changes proposed by the administration, such as the U.S.-Mexico border wall expansion and limitations on legal immigration, have engendered strongly polarized public reactions. Undeniably, these affect how immigrants see their place in the U.S.

Studies and recent polls have found that the coronavirus pandemic has dealt a disproportionately strong blow to Latinx communities. Next to Black Americans, Latinx workers have shouldered the crisis as essential workers who have kept the economy afloat, yet they have experienced distressingly higher infection and mortality rates, along with higher unemployment and more evictions, than white people. The issues most important to them in this election are healthcare, the economy, and social inequity. According to 2018 U.S. Census data, Latinx people have the third-highest poverty rate by race, after Native Americans (25.4%) and Black voters (20.8%). However, one cannot assume Latinx votes will go to Democrats across the board. Some polls suggest a sizable share of Latinx voters (nationally between a quarter and a third) back Trump, similar to 2016. And as in 2016, a sizable bloc of Latinx voters in Arizona, Florida, and Texas, three important swing states with large Latinx electorates, are likely to support the conservative party.

A report by the Poor People's Campaign found that poverty affects more than 38 million people in America, and new research suggests they represent a vast reservoir of votes. Its findings concluded that, between the Democratic nominee Joe Biden and incumbent Republican Donald Trump, the candidate who addresses issues of poverty could likely take advantage of untapped votes in key swing and battleground states. Furthermore, the study suggests that attracting the votes of low-income individuals and mobilizing them to go to the polls could prove decisive in the 2020 election. Over half of the country's 63 million registered low-income voters did not cast a ballot in the 2016 presidential election. This fact presents untapped potential as well as a major danger for both parties and has long been a contentious topic of debate.
Low-income individuals have also been strongly affected by the COVID-19 crisis and will likely vote for the candidate promising job growth.

Partisan loyalty is strongest among the second-largest ethnic group in the U.S.: African Americans, who make up 12.5% of eligible voters. Black support for the GOP has been in the single digits (hovering around 8%) since conservatives opposed the Civil Rights Act in the 1960s. Democrats hope to capitalize on this during this election cycle by addressing and campaigning on issues of race and justice, which many conservatives have failed to incorporate in their platform. Racial issues will likely define the 2020 presidential election. The Black community, although perhaps not inherently enamored with Joe Biden (many Black voters have felt let down by Democrats since the Obama administration), has taken note of this and will predictably align with Democrats as it has in the past: nearly 89% intend to vote for Democrats in this election, according to a Pew Research study.

In the midst of a pandemic, many Black voters have been disillusioned with the president's response to racial inequality, and people nationwide have taken to protesting after the deaths of Elijah McClain, Breonna Taylor, and George Floyd, and most recently the near-fatal shooting of Jacob Blake in Kenosha, WI. Surveys suggest that Black voters in America do not feel heard or understood by President Trump and feel he is further polarizing society with statements that some regard as disparaging and racially divisive. This adds to the disappointment Black voters already felt with the president's handling of the coronavirus pandemic. Systemic health and social inequities have put people of color at increased risk of contracting COVID-19, and racial disparities have played an apparent role in infection and mortality rates, which are three times higher than those of their white counterparts. Next to Latinx communities, African Americans have been hit hardest by the pandemic in a widespread manner that spans the country, throughout hundreds of counties in urban, suburban, and rural areas, and across all age groups. Several circumstances have made Black (and Latinx) people more likely than white people to be exposed to the virus: they hold front-line jobs that keep them from being able to work from home; they rely heavily on public transportation to reach those jobs; and they often live in multigenerational homes and inner cities where social distancing or isolating often isn't possible. Although many policy areas influence the African American community in the U.S., such as healthcare and the economy, how its members cast their votes in this general election will be determined by COVID-19 and race.

There is a widespread effort, including campaigns led by prominent personalities such as Michelle Obama, to mobilize the Black vote in November; however, Black voters in many places face greater barriers to registering to vote, staying on the voter rolls, and having their mail-in ballots counted. These mostly Democrat-led campaigns point to factors impeding Black Americans from voting, which they believe are a direct result of decisions made by Republican legislators in the past decade. A new study from the nonpartisan Center for Election Innovation & Research found that voter registration rates are currently not at the level they were during the same period in 2016. This will likely concern both major political parties, albeit perhaps the Democrats more.
Young Voters – The Political Consumers

Minority identities are represented in the largest voting bloc in 2020: young voters (Millennials and Gen Z). Generation Z is said to be the most diverse generation and has the smallest share of white members of any generation. Together, Millennials and Gen Z outnumber the other two significant age groups: Baby Boomers (born between 1946 and 1964) and Generation X (born between 1965 and 1980). The Millennial, and by extension the Gen Z voter, is "the American voter" in 2020 and the single most important target voter for any political party or candidate. Essentially, the young voter is both the median and the average voter in the 2020 general election, and they overwhelmingly lean Democratic (70%).

Sir Winston Churchill is famously credited with saying that "If you're not a liberal when you're 25, you have no heart. If you're not a conservative by the time you're 35, you have no brain." This seems to no longer hold true: Millennials appear to be holding on to their liberal leanings beyond their thirties. U.S. voters under 40 are more ethnically diverse, live in urban areas, have higher education, are well read, are less religious, and remain skeptical of institutions their parents supported. So far, indications are that their political attitudes and voting behavior no longer align with those of their parents. This should give political parties cause to reassess their platforms, especially as, according to Pew Research, roughly 44% of them identify as independents.

When asked, nearly 54% of young adults claimed that they support Joe Biden for president for the mere reason that he is not Donald Trump. Only 19% agree with his leadership, and even fewer subscribe to his policy positions; furthermore, they do not believe him to be the face of American values. This should give the Democratic Party serious pause and push it toward reorganization and rejuvenation if it intends to hang on to these voters in a post-Trump America. To do so, it must revamp and modernize its platform, continue to become more inclusive, diversify its candidates across the entire party spectrum, and actually run on issues young voters care about. That being said, Republicans will have to do the same. They still have a solid base (roughly 30% of Millennials and Gen Z) of young conservative supporters, who are just as passionate in their activism and view many issues similarly to their more liberal counterparts.

Millennials have a plethora of choices because of how accessible information is throughout the world. This access, enabled through technology (i.e., the internet), allows them to form multiple opinions that change with a constant stream of new information. Accustomed to choice, it is no surprise that they expect and demand the same from their country's political framework, parties, and leadership. As a result, young voters do not simply support one political party; they support leaders of movements within the big-tent political parties whose platforms align with the issues they care most about. Millennial voters tend to have more liberal political attitudes than their older counterparts, and they place a higher emphasis on challenging existing paradigms and on moral behavior rather than voting along party lines. Doing "the right thing" is important to them! It is because of the vast variety of options Millennials enjoy that they take a more aggressive approach to politics.
Millennials and minorities, compared to their older counterparts, tend to consume politics, a market-based alternative form of political engagement, rather than subscribe to politics, the belief in and support of basic established political principles. Kristen Soltis Anderson makes the point in her book The Selfie Vote: Where Millennials Are Leading America (And How Republicans Can Keep Up) that because Millennials are more "self-absorbed" and are brilliant self-promoters, they have become more demanding political consumers. Furthermore, Millennials and Gen Z on both ends of the political spectrum have turned their activism into careers, founding record numbers of NGOs and political action organizations with growing political influence.

Brookings's William Frey finds that "the COVID-19 pandemic will most negatively impact the economic prospects of younger generations, who are bearing the brunt of outsized job losses, evictions, and—among Gen Z—disruptions in education. For older millennials, this is the second stage of a double economic whammy, as many of them never fully recovered from the 2007 to 2009 Great Recession. As millennials and younger generations find themselves at the center of the pandemic's economic storm, they are poised to fight for a bigger say in how the nation recovers." He adds that another reason for the power of young voters is their astute awareness of and activism against systemic racism; even in the face of a pandemic, they rallied to protest the racial and social injustice that is prohibiting people of color "from achieving the education, jobs, housing, and wealth that whites have long enjoyed." These issues are deeply personal for young people, as nearly 40% of Millennials and Gen Z are themselves Black and Brown people of color. They have formed a coalition of all races, including whites, into a socio-political movement, using the leverage they know they have to bring about fundamental changes in racial and social justice. Frey concludes: "It is likely that the pandemic and recent activism will further galvanize this generation to promote an array of progressive causes."

Conscious of the change they can bring about, young voters are emboldened to make greater demands of their elected officials and expect factual and clearly defined positions, as well as a platform a political party can deliver on. Thus, traditional political principles and values, such as fiscal conservatism or individual liberty, are too narrow and too simple for them. When surveyed, registered voters overall ranked their issue priorities from high to low as follows: the economy, health care, Supreme Court appointments, the coronavirus outbreak, violent crime, foreign policy, gun policy, race and ethnic inequality, immigration, economic inequality, climate change, and abortion. Oddly, young voters have almost the reverse priority lineup, with the addition of LGBTQ rights. Having grown up in the absence of common global enemies (except Islamic terrorism), foreign policy is not a top priority for this post-Cold War generation, which no longer has a personal connection to the alliances or their meaning. Their activist nature makes them more empathetic toward global humanitarian and human rights causes; however, they lack the appreciation of transatlantic partnerships of yesteryear.

Strong Generational Divides in Partisanship

Social, economic, and political fissures between Millennials and older, white generations are well known on both the Left and the Right.
The main conceptual frameworks have largely shifted in focus from isolated values to group identities. As Amy Chua puts it in Political Tribes: "The Left believes that right-wing tribalism—bigotry, racism—is tearing the country apart. The Right believes that left-wing tribalism—identity politics, political correctness—is tearing the country apart. They are both right." Young voters find themselves in the middle of this partisanship, in addition to conflicts with their elders over differences in ideology. One of these differences is the growing secularization among young people, who are increasingly religiously unaffiliated, alongside the declining share of Americans who are Christians and shrinking confidence in organized religion. However, young voters are polarized among themselves too. Today's "under 40" voters are less willing than their parents to compromise on the truth they hold, and thus draw a line in the sand that further divides the generations. Clashes are to be expected between parties and generations; however, if the Millennial and Gen Z generations want to stay in the political arena, some compromises will be necessary.

In the long run, racial and ethnic diversity will likely increase. For now, however, it contributes to a decline in social trust, not only in institutions and leaders but in one another. According to The American Interest, young people are subject to a "growing influence of certain ways of thinking about each other," which is reinforced by factors such as geographical and political-party sorting. For many members of this population, these views are polarized into good and bad (i.e., demonizing one political figure and praising another); they are informed by groupthink and are evident within the current political climate. In an effort to overcome this division, organizations founded by and catering to Millennials and Gen Z (e.g., the Millennial Action Project) are working to identify and bring together young leaders from both sides of the aisle to bridge the partisan divide in American politics. Why is this so important? Millennials and Gen Z constitute the next generation of leadership and will drive the future course of the United States, as well as its foreign relations. The way Americans vote in 2020 and the issues they endorse will foreshadow the next several election cycles.

The Target Voter in 2020

Will any individual demographic group be able to tip the balance of the election in November? The group most capable of doing so are young voters, if their activism translates into political support. Data from Brookings show that the combined Millennial, Gen Z, and younger generations numbered 166 million as of July 2019, or 50.7% of the nation's population. Based on U.S. Census data, the average voter in 2020 is moderately liberal, under the age of 35, college educated, and a woman of color who lives in an urban setting. She cares about race and justice, women's rights, health, and climate change. She is a consumer of politics and will vote for the party she feels best embodies her values. She will likely cast her ballot in opposition to the presidential candidate she fears will be most divisive and will not usher in the changes she expects of her political leadership. She is highly motivated to vote by mail-in ballot because of COVID-19 health concerns; however, some experts predict she could face challenges in doing so.
The already underfunded U.S. Postal Service fears extensive delays in processing mail-in votes before voting deadlines, in light of additional pandemic-induced shortages. Both Democrats and Republicans have long misunderstood this emerging voter and have held a much too simplistic view of the values she holds dear; they have underestimated how diverse and complicated these voters are. According to The Atlantic, signs are growing that voter turnout in 2020 could reach the highest levels in decades, if not the "highest in the past century," due to an influx of new young voters. It is projected that about 156 million people could vote in 2020, an enormous increase from the 139 million who cast ballots in 2016.
Ecology and Religion: Ecology and Confucianism

Within the Confucian tradition, there are rich resources for understanding how Chinese culture has viewed nature and the role of humans in nature. These are evident from the dynamic interactions of nature as expressed in the early classic Yi jing (Book of Changes), to the Han period integration of the human into the triad with heaven and Earth, to the later neo-Confucian metaphysical discussions of the relationship of principle (li) and material force (qi). This does not imply, however, that there is no gap between such theories of nature and practices toward nature in both premodern and contemporary East Asian societies. China, like many countries in Asia, has been faced with various environmental challenges, such as deforestation, for centuries. Thus, this is not to suggest an idealization of Confucian China as a model of environmental ideas or practices. Rather, this is an exploration of how Confucian thought contributed to the Chinese understanding of the relationship of humans to nature. China's complex environmental history would need to be examined for a fuller picture of the social and political reality of these relations. In addition, the spread of Confucianism would have to be traced across East Asia to Korea and Japan.

Confucianism, along with Daoism and Buddhism, has helped to shape attitudes toward nature in the Chinese context. These attitudes have changed over time as the three primary religious traditions have interacted with each other in a dynamic and mutually influencing manner. While distinctions have been made between various schools in these traditions, there has also been coexistence and syncretism among the traditions. Indeed, it is fair to say that Confucianism and Daoism, in particular, share various terms and attitudes toward nature, although they differ on the role of humans in relation to nature. Confucians are more actively engaged in working with nature, especially in agricultural processes, while Daoists are more passive toward nature, wanting to experience its beauty and mystery without interfering in its rhythms.

Confucianism has conventionally been described as a humanistic tradition focusing on the roles and responsibilities of humans to family, society, and government. Thus, Confucianism is often identified primarily as an ethical or political system of thought with an anthropocentric focus. However, upon further examination, and as more translations become available in Western languages, this narrow perspective needs to be reexamined. The work of many contemporary Confucian scholars in both Asia and the West has been crucial for expanding the understanding of Confucianism. Some of the most important results of this reexamination are the insights that have emerged in seeing Confucianism as not simply an ethical, political, or ideological system. Rather, Confucianism is now being appreciated as a complex religious tradition in ways that are different from Western traditions. This is because Confucianism is being recognized for its affirmation of relationality, not only between and among humans but also between humans and the natural world. Confucians regard humans not simply as individualistic entities but as communitarian beings. It is this emerging understanding of the religious, relational, and communitarian dynamics of Confucianism that has particular relevance to the examination of Confucian attitudes toward nature.
Some of these attitudes may be characterized as:
- Embracing an anthropocosmic worldview.
- Affirming nature as having inherent moral value.
- Protecting nature as the basis of a stable agricultural society.
- Encouraging human self-realization to be achieved in harmony with nature.

Embracing an Anthropocosmic Worldview

The contemporary Confucian scholar Tu Weiming has spoken of the Confucian tradition as one based on an anthropocosmic vision of the dynamic interaction of heaven, Earth, and human. He describes this as a continuity of being with no radical split between a transcendent divine person or principle and the world of humans. Tu emphasizes that the continuity and wholeness of Chinese cosmological thinking is also accompanied by a vitality and dynamism. This view is centered on the cosmos, not on the human; the implication is that the human is seen as embedded in nature, not dominant over nature. The Confucian worldview might be described as a series of concentric circles in which the human resides at the center, not as an isolated individual, but embedded in ever-expanding rings of family, society, government, and nature. The moral cultivation of the individual influences the larger circles of society and politics, as is evident in the text of the Great Learning, and that influence extends to nature, as is clear in the Doctrine of the Mean. All of these interacting circles are contained within the vast cosmos itself. Thus, the ultimate context for human flourishing is the 10,000 things, nature in all its remarkable variety and abundance.

Indeed, in Confucianism there is recognition that the rhythms of nature sustain life in both its biological needs and its socio-cultural expressions. For Confucians, the biological dimensions of life are dependent on nature as a holistic, organic continuum. Everything in nature is interdependent and interrelated. Most importantly, for Confucians nature is seen as dynamic and transformational. These ideas are present as early as the classical texts of the Book of Changes and the Book of Poetry, are expressed in the Four Books, especially in Mencius, the Doctrine of the Mean, and the Great Learning, and come to full flowering in the neo-Confucian tradition of the Song (960–1279) and Ming (1368–1644) periods, especially in the thought of Zhu Xi, Zhang Zai, Zhou Dunyi, and Wang Yangming. Nature in this context has an inherent unity, resulting from a primary ontological source (Taiji). It has patterned processes of transformation (yin/yang) and is interrelated in the interaction of the five elements (wuxing) and the 10,000 things. Nature's dynamic vitalism is seen through the movements of material force (qi).

Within this Confucian worldview, human culture is created and expressed in harmony with the transformations of nature. Thus, the leading Confucian of the Han period (202 BCE–220 CE), Dong Zhongshu, developed a comprehensive synthesis of all the elements, directions, colors, seasons, and virtues. This codified an ancient Chinese tendency to connect the patterns of nature with the rhythms of humans and society. This theory of correspondences is foundational to the anthropocosmic worldview in which humans are seen as working together with heaven and Earth in correlative relationships to create harmonious societies. The mutually related resonances between self, society, and nature are constantly being described in the Confucian texts.
This early Han correlative synthesis, along with the institution of the civil service examination system, provided the basis for enduring political rule in subsequent Chinese dynasties. This is not to suggest that there were no abuses of political power or manipulations of the examination system, but simply to describe the anthropocosmic foundations of Confucian political and social thought. These Confucian ideas spread across East Asia to Korea and Japan and today are present in Taiwan, Hong Kong, and Singapore as well.

Nature Has Inherent Moral Value

For Confucians, nature is not only inherently valuable, it is morally good. Nature thus embodies the normative standard for all things. There is no fact/value division in the Confucian worldview, for nature is seen as the source of all value. In particular, value lies in the ongoing transformation and productivity of nature. A term repeated frequently in neo-Confucian sources is "life-life" or "production and reproduction" (sheng sheng), reflecting the ever-renewing fecundity of life itself. In this sense, the dynamic transformations of life are seen as emerging in recurring cycles of growth, fruition, harvesting, and abundance. This reflects the natural processes of growth and decay in nature, human life, and human society. Change is thus seen as a dynamic force with which humans should harmonize and interact rather than withdraw from.

In this context, where nature has inherent moral value, there is nonetheless a sense of distinctions. Value rests in each thing in nature, but not in each thing equally. Differentiation is recognized as critical; everything has its appropriate role and place and should be treated accordingly. The use of nature for human ends must recognize the intrinsic value of each element of nature, but also its particular value in relation to the larger context of the environment. Each entity is considered not simply equal to every other; rather, each interrelated part of nature has a unique value according to its nature and function. Thus, there is a differentiated sense of appropriate roles for humans and for all other species. For Confucians, hierarchy is seen as a necessary way for each being to fulfill its function. In this context, then, no individual being has exclusive privileged status. The processes of nature and its ongoing logic of transformation (yin/yang) are the norms that take priority. Within this context, however, humans have particular responsibilities to care for nature.

Protecting Nature as the Basis of a Stable Agricultural Society

With regard to protecting nature, the Confucians taught that what fosters nature is valuable and what destroys nature is problematic, especially for a flourishing agricultural society. Confucians would subscribe to this in principle if not consistently in practice. Confucians were mindful that nature was the basis of a stable society and that without careful tending imbalances could result. There are numerous passages in Mencius advocating humane government based on appropriate management and distribution of natural resources. Moreover, there are various passages in Confucian texts urging humans not to wantonly cut down trees or kill animals needlessly. Thus, Confucians would wish, at least in principle, to nurture and protect the great variety and abundance of life forms. Again, it may be noted that this did not always occur in practice, especially in periods of population growth, military expansion, economic development, and political aggrandizement.
However, the goal of Confucian theory to establish humane society, government, and culture inevitably resulted in the use of nature for creating housing, growing food, and establishing the means of production. In this sense, Confucianism can be seen as a more pragmatic social ecology that recognized the necessity of forming human institutions and means of governance to work with nature. Nonetheless, it is clear that for Confucians, in principle, human cultural values and practices are grounded in nature, are part of its structure, and depend on its beneficence. In addition, the agricultural base of Confucian societies across East Asia has always been recognized as essential to the political and social well-being of the country. Confucians realized that humans prosper by living within nature's boundaries: they are refreshed by its beauty, restored by its seasons, and fulfilled by its rhythms. Human flourishing is thus dependent on fostering nature in its variety and abundance; going against nature's processes is destructive of self and society.

Self-Realization in Harmony with Nature

For Confucians, harmony with nature is essential; societal well-being and human self-realization are both achieved in relation to and in harmony with nature. The great intersecting triad of Confucianism, namely heaven, Earth, and humans, signifies this understanding that humans can only attain their full humanity in relationship to both heaven and Earth. This became a foundation for a cosmological ethical system of relationality applicable to the spheres of family, society, politics, and nature. The individual was always seen in relationship to others. In particular, the person was grounded in a reciprocal relationship with nature. Nature functions in the Confucian worldview as great parents to humans, providing sustenance, nurturing, intelligibility, and guidance. In return, nature requires respect and care from humans. Human self-realization is achieved by fulfilling this role of filiality toward heaven and Earth (nature) as beneficent parents who have sustained life for humans. This idea of heaven and Earth as parents is first depicted in the early classic of the Book of History and is later developed by thinkers such as Kaibara Ekken in seventeenth-century Japan.

Humans participate in the vast processes of nature by cultivating themselves in relation to nature, by caring for the land appropriately, by creating benevolent government, and by developing human culture and society in relation to nature's seasons and transformations. Human self-realization implies understanding the continuities of nature in its daily rhythms and seasonal cycles. Yet humans also recognize that these orderly patterns contain within them the dynamic transformations engendering creativity, spontaneity, and openness. This is the challenge for humans within a Confucian context: how to live within nature's continuities and yet be open to its spontaneities. Thus, while nature has intelligible structures and patterns, it also operates in ways that produce and encourage novelty. With regard to establishing human culture and maintaining institutions, the same dynamic tensions are evident within the Confucian tradition: how to be faithful to the past, the continuity of the tradition, and yet be open to the change and innovation necessary for the ongoing life of the tradition.
Achieving self-realization for the Confucians required a creative balancing of these two elements of tradition and innovation against the background of nature's continuities and changes. In the Confucian tradition there exist underlying patterns of cosmological orientation and of connectedness of self to the universe and self to society. Indeed, one might say that Confucianism as a religious tradition is distinguished by a concern for both personal groundedness and cosmological relatedness amidst the myriad changes in the universe. The desire for appropriate orientation toward nature and connection to other humans is an enduring impetus in Confucianism. Indeed, this need to recognize and cultivate such relatedness is the primary task of the Confucian practitioner in attaining authentic personhood. This relatedness takes many forms, and variations of it constitute one of the means of identifying different periods and thinkers in the tradition. In China, from the classical period of the Book of Changes to the Han system of correspondences and the neo-Confucian metaphysics of the Diagram of the Great Ultimate, concerns for cosmology and cultivation have been dominant in Confucian thought. In Korea, one of the most enduring expressions of this was the four-seven debates, which linked the metaphysics of principle (li) and material force (qi) to issues of cultivating virtue and controlling the emotions. These debates continued in Japan, although without the same intensity and political consequences. Instead, in Japan the effort to link particular virtues to the cosmos became important, as did the expression of cultivation in the arts, in literature, and in practical learning. In this manner, one's cultivation was shared for the benefit of society in both aesthetic and practical matters. Thus, in varied forms throughout East Asian Confucianism, the human is viewed as a microcosm in relation to the macrocosm of the universe.

Naturalistic Imagery of Confucian Religiosity

Self-cultivation in this context is seen as essential to develop or to recover one's innate authenticity and one's connection to the cosmos. It is a process filled with naturalistic imagery of planting, nurturing, growth, and harvesting. It is in this sense that one might describe the religious ethos of Confucianism as a dynamic naturalism aimed at personal and societal transformation. This means that the imagery used to describe Confucian religious practice is frequently drawn from nature, especially in its botanical, agricultural, and seasonal modes. Thus, to become fully human one must nurture (yang) and preserve (cun), that is, cultivate, the heavenly principle of one's mind and heart. These key terms may refer to such activities as nurturing the seeds of goodness that Mencius identifies and preserving the emotional harmony mentioned in the Doctrine of the Mean (Zhongyong).

In Mencius there is a recognition of the fundamental sensitivity of humans to the suffering of others (IIA:6). This is demonstrated through the example of an observer's response on seeing a child who is about to fall into a well. Mencius suggests that the child would be rescued through the activation of the observer's instinctive compassion, not by promising the rescuer any extraneous rewards. Indeed, to be human for Mencius means to have a heart with the seeds (or germs) of compassion, shame, courtesy and modesty, and right and wrong. When cultivated, these will become the virtues of humaneness, righteousness, propriety, and wisdom.
When these seeds are developed in a person, they will flourish "like a fire starting up or a spring coming through" (IIA:6). Thus, the incipient tendencies in the human are like sprouts or seeds that, as they grow, lean toward becoming fully cultivated virtues. The goal of Mencian cultivation, then, is to encourage these natural spontaneities before calculating or self-serving motives arise. This begins the art of discerning between the Way mind (daoxin) and the human mind (renxin). In a similar manner, the Doctrine of the Mean speaks of differentiating between the state of centrality or equilibrium before the emotions (pleasure, anger, sorrow, joy) are aroused and the state of harmony after the emotions are aroused. This balancing between the ground of existence (centrality) and its unfolding process of self-expression (harmony) is part of achieving an authentic mode of human existence. To attain this authenticity (cheng) means not only that one has come into harmony with oneself but also that one has achieved a unity with heaven and Earth. Thus, the identification of the moral order and the cosmic order is realized in the process of human cultivation. Self-authenticity is realized against the backdrop of the sincerity of the universe, and this results in participation in the transforming and nourishing processes of heaven and Earth. In Mencius, self-cultivation is seen as analogous to the natural task of tending seeds and is thus enriched by agricultural and botanical imagery. Moreover, in the Doctrine of the Mean this cultivation is understood within the context of a cosmological order that is pervasive, structured, and meaningful. The human is charged to cultivate oneself and, in this process, to bring the transformations of the cosmos to their fulfillment. It is thus possible to speak of early Confucianism as having religious dimensions characterized by naturalistic analogies of cultivation within a context of cosmological processes of transformation. All of this, then, involves a religiosity of analogies between the human and the natural world.

The Book of Changes was also a major source of inspiration for spiritual practice and cosmological orientation for the neo-Confucians, amidst the transformations of the universe celebrated as production and reproduction (sheng sheng). For the neo-Confucians it was clear that many of the virtues a person cultivated had a cosmological component. For example, humaneness (ren) in humans was seen as analogous to origination (yuan) in nature; the growth of this virtue in humans thus had its counterpart in the fecundity of nature itself. To cultivate (hanyang), one needs to practice both inner awareness and outer attention, abiding in reverence within and investigating principle without. This requires quiet sitting (jingzuo) and extending knowledge through the investigation of things (gewu zhizhi). To be reverent has been compared to the notion of recollection (shoulian), which means literally to collect together or to gather a harvest. Thus, from the early classical Confucian texts to the later neo-Confucian writings there is a strong sense of nature as a relational whole in which human life and society flourish. This had implications for politics and society that were evident throughout Chinese history, even if the ideals of the tradition were not always realized in practice.

Bibliography
Black, Alison Harley. Man and Nature in the Philosophical Thought of Wang Fu-chih. Seattle, 1989.
Forke, Alfred. The World Conception of the Chinese. London, 1925.
Henderson, John B. The Development and Decline of Chinese Cosmology. New York, 1984.
Hou, Wenhui. "Reflections on Chinese Traditional Ideas of Nature." Environmental History 2 (1997): 482–493.
Huddle, Norie. Island of Dreams: Environmental Crisis in Japan. New York, 1975.
Needham, Joseph. Science and Civilisation in China. 8 vols. Cambridge, U.K., 1954.
Smil, Vaclav. The Bad Earth: Environmental Degradation in China. Armonk, N.Y., 1984.
Smil, Vaclav. China's Environmental Crisis. Armonk, N.Y., 1993.
Taylor, Rodney L. The Confucian Way of Contemplation: Okada Takehiko and the Tradition of Quiet-Sitting. Columbia, S.C., 1988.
Totman, Conrad. The Green Archipelago: Forestry in Preindustrial Japan. Berkeley, Calif., 1989.
Tu Weiming. Confucian Thought: Selfhood as Creative Transformation. Albany, N.Y., 1985.
Tu Weiming. Centrality and Commonality: An Essay on Confucian Religiousness. Albany, N.Y., 1989.
Tu Weiming. Way, Learning, and Politics: Essays on the Confucian Intellectual. Albany, N.Y., 1993.
Tu Weiming, and Mary Evelyn Tucker, eds. Confucian Spirituality. 2 vols. New York, 2003–2004.
Tucker, Mary Evelyn, and John Berthrong, eds. Confucianism and Ecology: The Interrelation of Heaven, Earth, and Humans. Cambridge, Mass., 1998.
Tucker, Mary Evelyn. "The Relevance of Chinese Neo-Confucianism for the Reverence of Nature." Environmental History Review 15, no. 2 (Summer 1991).
Tucker, Mary Evelyn. "Ecological Themes in Taoism and Confucianism." In Worldviews and Ecology. Lewisburg, Pa., 1993.
Tucker, Mary Evelyn. "An Ecological Cosmology: The Confucian Philosophy of Material Force." In Ecological Prospects: Scientific, Religious and Aesthetic Perspectives. Albany, N.Y., 1994.
Tucker, Mary Evelyn, with John Berthrong. "Introduction." In Confucianism and Ecology: The Interrelation of Heaven, Earth, and Humans, edited by Mary Evelyn Tucker, with John Berthrong. Cambridge, Mass., 1998.
Tucker, Mary Evelyn. "The Philosophy of Ch'i as an Ecological Cosmology." In Confucianism and Ecology: The Interrelation of Heaven, Earth, and Humans, edited by Mary Evelyn Tucker, with John Berthrong. Cambridge, Mass., 1998.
Tucker, Mary Evelyn. "A View of Philanthropy in Japan: Confucian Ethics and Education." In Philanthropy and Culture in Comparative Perspective, edited by Warren Illchman, Stanley Katz, and Edward Queen. Bloomington, Ind., 1998.
Tucker, Mary Evelyn. "Religious Dimensions of Confucianism: Cosmology and Cultivation." Philosophy East and West 48, no. 1 (January 1998).
Tucker, Mary Evelyn. "Cosmology, Science, and Ethics in Japanese Neo-Confucianism." In Science and Religion in Search of Cosmic Purpose, edited by John F. Haught. Washington, D.C., 2000.
Tucker, Mary Evelyn. "Confucian Cosmology and Ecological Ethics: Qi, Li and the Role of the Human." In Ethics in the World Religions, edited by Joseph Runzo and Nancy Martin. Oxford, 2001.
Tucker, Mary Evelyn. "Working Toward a Shared Global Ethic: Confucian Perspectives." In Toward a Global Civilization? The Contribution of Religions, edited by Melissa Merkling and Pat Mische. New York, 2001.
Tucker, Mary Evelyn. "Confucian Ethics and Cosmology for a Sustainable Future." In When Worlds Converge: What Science and Religion Tell Us about the Story of the Universe and Our Place in It, edited by Mary Evelyn Tucker, with Cliff Matthews and Philip Hefner. Chicago, 2002.
Tucker, Mary Evelyn. "Kaibara Ekken's Precepts on the Family." In An Anthology of Asian Religions in Practice, edited by Donald S. Lopez, Jr. Princeton, N.J., 2002.
Mary Evelyn Tucker (2005)
The world is becoming increasingly connected electronically, expanding markets and reducing the inefficiencies of doing business across borders. Services can be hosted anywhere and customers can be served from anywhere as the Third World catches up to the First World's broadband penetration. Emerging market territories often lack proper client control, however, and malware infection rates are high. When these malware clients are directed by centralized command-and-control servers, they become "botnets." The sheer number of client machines involved in botnets provides enormous load-generation capacity that can be rented cheaply by any party with an interest in disrupting the service of a competitor or political target. Today's global botnets are using distributed denial-of-service (DDoS) attacks to target firewalls, web services, and applications, often all at the same time.

Early DDoS attacks used a limited group of computers (often a single network) to attack a single host or other small target. When commercial interests gained entry to the Internet in the 1990s, they presented a target-rich environment for any group with an axe to grind against a competitor or perceived commercial monopoly; Microsoft and the Recording Industry Association of America (RIAA) were frequent targets. Thus, DDoS attacks were perceived as being a problem primarily for "big players" or, in fact, for the Internet itself. In 2002 and 2007, coordinated DDoS attacks were launched against the 13 DNS root servers in an attempt to attack the Internet at its most vulnerable infrastructure. The 2002 attack was largely successful, but the 2007 attack failed (11 of 13 root servers stayed online), thanks to lessons learned from the 2002 attack. Commercial DDoS defense services were developed for deployment at the service provider level.

Simple network attacks still work against undefended hosts. For example, a single Linux host running the world's most popular web server software, the Apache 2 server, fails under these simple attacks at very low packet rates.

| Attack | Terminal metric | Result |
| --- | --- | --- |
| SYN flood | 1,500 SYNs per second | Denial-of-service |
| Conn flood | 800 connections | Denial-of-service |

Figure 1: Terminal metrics for a single Linux host with Apache 2 server

Early DDoS attack types were strictly low-level protocol attacks against Layers 3 and 4. Today, DDoS attacks come in three major categories, climbing the network stack from Layer 3 to Layer 7.

The most basic attacks in the DDoS threat spectrum are simple network attacks against the weakest link in the network chain. These attacks, called floods, harness a multitude of clients to send an overwhelming amount of network traffic at the desired target. Sometimes the target succumbs and sometimes a device in front of the target (such as a firewall) succumbs, but the effect is the same—legitimate traffic is denied service. By using multiple clients, the attacker can amplify the volume of the attack and also make it much more difficult to block, since client traffic can appear to come from all over the globe. The SYN flood and connection flood (conn flood) typify these simplest distributed attacks, which are designed either to tie up stateful connection mechanisms of devices, such as hosts, that terminate Layer 4, or to fill up flow tables for stateful devices that monitor connections, such as stateful firewalls or intrusion prevention systems (IPS).
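To make the SYN flood mechanics concrete, here is a minimal, hypothetical sketch (not F5's implementation, and not any product's algorithm) of one common defensive heuristic: counting half-open handshakes per source over a sliding window. The `WINDOW` and `THRESHOLD` values are illustrative assumptions, not measured limits.

```python
from collections import defaultdict, deque

# Assumed illustrative parameters -- tune for real traffic.
WINDOW = 5.0        # seconds of history per source
THRESHOLD = 500     # half-open SYNs tolerated per source per window

half_open = defaultdict(deque)   # src_ip -> timestamps of unanswered SYNs

def note_syn(src_ip, now):
    """Record a SYN; return True if this source looks like it is flooding."""
    q = half_open[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # expire SYNs outside the window
        q.popleft()
    return len(q) > THRESHOLD

def note_ack(src_ip):
    """A completing ACK arrived: the handshake was real, forget one SYN."""
    q = half_open[src_ip]
    if q:
        q.popleft()
```

The point of the sketch is that a SYN flood is visible as state growth (SYNs without matching ACKs), not as raw bandwidth, which matches the observation that stateful devices fail long before pipes fill.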
Modern network attacks rarely fill or exceed the throughput capacity of the ingress pipes of the targets because they don't need to; stateful devices within the target data center typically fail long before the throughput limit is exceeded.[1]

| Attack | Resource targeted | Description |
| --- | --- | --- |
| SYN flood | Stateful flow tables | Fake TCP connection setup overflows tables in stateful devices |
| Conn flood | Stateful flow tables | Real, but empty, connection setup overflows tables in stateful devices |
| UDP flood | CPU, bandwidth | Floods the server with UDP packets; can consume bandwidth and CPU, and can also target DNS and VoIP servers |
| Ping flood | CPU | Floods of these control messages can overwhelm stateful devices |
| ICMP fragments | CPU, memory | Hosts allocate memory to hold fragments for reassembly and then run out of memory |
| Smurf attack | Bandwidth | Exploits misconfigured routers to amplify an ICMP flood by getting every device in the network to respond to an ICMP broadcast |
| Christmas tree | CPU | Packets with all flags set except SYN (to avoid SYN flood mitigation) consume more CPU than normal packets |
| SYN/ACK, ACK, and ACK/PUSH floods | CPU | SYN/ACK, ACK, or ACK/PUSH packets without a first SYN cause host CPUs to spin, checking the flow tables for connections that aren't there |
| LAND | CPU | Identical source and target IP addresses consume host CPU as the host processes these invalid addresses |
| Fake TCP | Stateful flow tables | TCP sessions that look real but are only recordings of previous TCP sessions; enough of them can consume flow tables while avoiding SYN flood detection |
| Teardrop | CPU | Sends a stream of IP fragments, meant to exploit an overlapping-fragment bug present in some systems |

Figure 2: Simple network attacks that can nonetheless be very effective

These simple network attacks are still in use today, often in concert with the more advanced techniques of DNS attacks and HTTP floods.

The Domain Name System (DNS) translates name queries (e.g., www.example.com) into numerical addresses (e.g., 192.168.204.201). Nearly all clients rely on DNS queries to reach their intended services, making DNS the most critical—and public—of all services. When DNS is disrupted, all external data center services (not just a single application) are affected. This single point of total failure, along with the historically under-provisioned DNS infrastructure, especially within Internet and enterprise data centers, makes DNS a very tempting target for attackers. Even when attackers are not specifically targeting DNS, they often inadvertently do; if the attack clients are all querying for the IP address of the target host before launching their floods, the result is an indirect attack against the DNS server that can often bring it down.

Because DNS is built on the relatively simple, UDP-based DNS protocol, attacks against it take a few main forms:

UDP floods: The DNS packet protocol is based on UDP, and UDP floods are extremely easy for attackers to generate. When under attack from a UDP flood, the DNS server must spend CPU cycles to validate each UDP packet until it runs out of connection contexts or CPU, at which point the service either reboots or drops packets. The most common response is to reboot (often causing a reboot cycle until the attack ends). The second option, dropping packets, is little better, as many legitimate queries will be dropped.
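Because spoofed UDP queries are so cheap to generate, one widely deployed class of countermeasure is per-client response-rate limiting. The sketch below is a hedged, illustrative Python version of that idea only; the `WINDOW` and `LIMIT` values are assumptions for demonstration, not recommended settings, and real DNS servers implement far more nuanced variants.

```python
import time
from collections import defaultdict

WINDOW = 1.0   # seconds (assumed)
LIMIT = 20     # responses allowed per client per window (assumed)

# client_ip -> [window_start_time, responses_sent_in_window]
buckets = defaultdict(lambda: [0.0, 0])

def allow_response(client_ip, now=None):
    """Return True if we should answer this client, False to drop/truncate."""
    now = time.monotonic() if now is None else now
    start, count = buckets[client_ip]
    if now - start >= WINDOW:
        buckets[client_ip] = [now, 1]   # new window for this client
        return True
    if count < LIMIT:
        buckets[client_ip][1] = count + 1
        return True
    return False   # over budget: drop rather than amplify the flood
```

The trade-off is the one the paper describes for packet dropping: a spoofing attacker can burn a victim's budget, so rate limiting blunts floods at the cost of some legitimate queries.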
Legitimate queries (NSQUERY): The hierarchical nature of the Domain Name System can require a DNS server to contact multiple other DNS servers to fully resolve a name; thus a single request from a client can result in four or five additional requests by the target server. This asymmetry between the client and the server is exploited during an NSQUERY DDoS attack, whereby clients can overload servers with these requests. A variation on this attack is a reflection/amplification DNS attack, whereby a series of misconfigured servers can be fooled into amplifying a flood of queries by sending their responses to a target victim's IP address. Such exploitation accounted for one of the largest attacks in recorded history (over 100 Gb/s) in 2010.[1]

Legitimate queries against non-existent hosts (NXDOMAIN): This is the most advanced form of attack against DNS services. Distributed attack clients send apparently legitimate queries to a DNS service, but each query is for a different host that does not exist anywhere, for example, urxeifl93829.com. The DNS service must then spend critical resources looking in its cache and zone database for the nonexistent host. Not finding a record, some DNS services will pass the attack request onward to the next service and wait for a response, tying up further resources. Even if the service survives the attack, it will ultimately replace all its valid cache entries with these invalid entries, further impacting performance for legitimate queries. These NXDOMAIN attacks are extremely difficult to defend against. In November 2011, a similar attack disabled many DNS servers around the world. The vulnerability remained until the Internet Systems Consortium could release a patch for BIND, the software on which many DNS servers are based.

The historic under-provisioning of DNS machinery is being corrected, but only slowly, and often in response to a DDoS attack. Until the system as a whole is strengthened, DNS attacks on this vulnerable target will continue to be a tempting method for attackers.

Over 80 percent of modern DDoS attacks are HTTP floods.[1] Unlike most simple network attacks, which overwhelm computing resources with invalid packets, HTTP flood attacks look like real HTTP web requests. To conventional firewall technology, these requests are indistinguishable from normal traffic, so they are simply passed through to the web servers inside the data center. The thousands or millions of attacking clients overwhelm the web servers with a massive number of requests.

The two main variations of the HTTP flood attack differ in the requested content. The most common, basic attack merely repeats the same request over and over again. Clients that use this attack often do not bother to parse the response; they simply consume it and then resend the same request. Because they are the equivalent of a finger pressing a doorbell, these "dumb" attack clients are easy to program, but also easy to detect and filter. The more advanced version of the HTTP flood attack is a recursive-get denial-of-service. Clients using this attack request the main application page, parse the response, and then recursively request every object at the site. These attacks are very difficult to detect and filter on a per-connection basis because they are requesting different, yet legitimate, objects. Recursive-get attack clients are quite intelligent and getting more sophisticated over time, resulting in an arms race between security services and recursive-get attackers.
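One way defenders tell "dumb" repeating flood clients from recursive-get clients is to fingerprint each request per source and measure repetition: a client whose fingerprints are all identical is likely a replaying bot, while recursive-get traffic produces varied fingerprints and needs behavioral analysis instead. The following is a minimal, assumption-laden sketch (all names and thresholds are hypothetical, not from the white paper):

```python
import hashlib
from collections import Counter

def fingerprint(method, path, headers):
    """Stable digest of the parts of a request a replaying bot repeats."""
    material = method + " " + path + " " + \
        ";".join(f"{k}:{v}" for k, v in sorted(headers.items()))
    return hashlib.sha256(material.encode()).hexdigest()

def repetition_ratio(fingerprints):
    """Fraction of a client's requests that share the single most common
    fingerprint. Near 1.0 suggests a doorbell-pressing flood client."""
    if not fingerprints:
        return 0.0
    most_common_count = Counter(fingerprints).most_common(1)[0][1]
    return most_common_count / len(fingerprints)

# Example: a bot replaying one request vs. a crawler fetching many objects.
bot = [fingerprint("GET", "/", {"User-Agent": "x"})] * 100
crawler = [fingerprint("GET", f"/obj/{i}", {"User-Agent": "x"}) for i in range(100)]
print(repetition_ratio(bot), repetition_ratio(crawler))   # 1.0 vs 0.01
```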
An undefended modern web server is a surprisingly vulnerable target for very simple HTTP attacks such as the Slowloris script. Slowloris works by opening connections to a web server and then sending just enough data in an HTTP header (typically 5 bytes or so) every 299 seconds to keep the connections open, eventually filling up the web server's connection table. Because of its slow approach, it can be a devious attack, remaining under the radar of many traffic-spike attack detection mechanisms. Against a single, typical web server running Apache 2, Slowloris achieves denial-of-service with just 394 open connections.[2]

Like Slowloris, the Slowpost attack client uses a slow, low-bandwidth approach. Instead of sending an HTTP header, it begins an HTTP POST command and then feeds in the payload of the POST data very, very slowly. Because the attack is so simple, it could infect an online Java-based game, for instance, with millions of users then becoming unwitting participants in an effective, difficult-to-trace, low-bandwidth DDoS attack.

A third low-bandwidth attack is the HashDos attack. In 2011, this extremely powerful DoS technique was shown to be effective against all major web server platforms, including ASP.NET, Apache, Ruby, PHP, Java, and Python.[3] The attack works by computing form variable names that will hash to the same value and then posting a request containing thousands of the colliding names. The web server's hash table becomes overwhelmed, and its CPU spends all its time managing the collisions. The security professionals exploring this attack demonstrated that a single client with a 30 Kbps connection (which literally could be a handset) could tie up an Intel i7 core for an hour. They extrapolated that a group of attackers with only a 1 Gbps connection could tie up 10,000 i7 cores indefinitely.

If a web server is terminating SSL connections, it can be vulnerable to the SSL renegotiation attack invented and popularized by the group "The Hacker's Choice." This attack capitalizes on the SSL protocol's asymmetry between the client and server. Since the server must do an order of magnitude more cryptographic computation than the client to establish the session, a single SSL client can attack and overwhelm a web server with a CPU of the same class.

| Attack | Resource targeted | Description |
| --- | --- | --- |
| Slowloris | Connection table | Slowly feeds HTTP headers to keep connections open |
| Slowpost | Connection table | Slowly POSTs data to keep connections open |
| HashDos | CPU | Overwhelms hash tables in back-end platforms |
| SSL renegotiation | CPU | Exploits the asymmetry of cryptographic operations |

Figure 3: Low-bandwidth HTTP attacks

Rounding out the category of low-bandwidth attacks are simple HTTP requests that retrieve expensive URLs. For example, an attacker can use automated reconnaissance to retrieve metrics on download times and determine which URLs take the most time to fetch. These URLs can then be distributed to a small number of attacking clients. Such attacks are very difficult to detect and mitigate, turning any weak points in an application into a new attack vector.

Attacks are so ubiquitous today that many sites are constantly under some form of traffic attack, 24 hours a day, 365 days a year. These large sites defend and provision their resources accordingly. Smaller companies, which may not have the resources to routinely over-provision, must defend against attacks on a case-by-case basis. The reasons for DDoS attacks vary, but currently the primary motivation is either political or financial.
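The HashDos principle is easy to demonstrate safely on your own machine without computing real colliding form-variable names: force every key into the same hash bucket and watch insertion degrade from roughly constant time per key to linear. The sketch below simulates the collision effect in Python with a deliberately constant `__hash__`; it is an illustration of the mechanism, not the published exploit.

```python
import time

class Colliding:
    """A key whose hash always collides, so every insert must walk and
    compare the whole bucket -- the quadratic blowup behind HashDos."""
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return 42   # every key lands in the same bucket
    def __eq__(self, other):
        return isinstance(other, Colliding) and self.name == other.name

def time_inserts(keys):
    start = time.perf_counter()
    table = {}
    for k in keys:
        table[k] = True
    return time.perf_counter() - start

n = 2000
normal = time_inserts([str(i) for i in range(n)])
colliding = time_inserts([Colliding(str(i)) for i in range(n)])
print(f"{n} normal keys:    {normal:.4f}s")
print(f"{n} colliding keys: {colliding:.4f}s")  # orders of magnitude slower
```

Real servers mitigated this by randomizing hash seeds and capping the number of request parameters, which is why the attack required a coordinated round of platform patches.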
Since the mid-2000s, a counterculture movement has arisen to use DDoS attacks as a form of online protest. The most infamous of these groups calls itself Anonymous. Anonymous sprang from the counterculture image board 4chan, and its main rallying point is freedom of speech. Early Anonymous targets were the Church of Scientology (for its presumed efforts at stifling information about its practices), YouTube (for censorship), and the Australian government (again for censorship). In recent years, Anonymous has made headlines by attacking governments and firms that it perceived were against the WikiLeaks document-leaking service. Subsequently, they were involved in more high-profile attacks during the Arab Spring of 2011 and the Occupy Wall Street movement, with which they became closely associated. Occupy Wall Street participants and Anonymous share the same mask: the Warner Brothers version of Guy Fawkes, the seditionist who attempted to blow up the English Parliament in 1605.

Attacks against commercial properties are motivated by financial gain for the attackers and/or loss for the targets. Gaming sites and auction sites have time-specific windows where DDoS can prevent game actions or final bids, and this occurs frequently enough that some auction sites automatically cancel any auction where a DDoS has occurred.

The average botnet size peaked in 2004 at over 100,000 client machines. Now, however, there are many more botnets in a wide range of sizes. The average botnet size in 2011 shrank to 20,000, in part because more efficient, smaller botnets are still effective at DDoS but better at evading detection, analysis, and mitigation. Nonetheless, enormous botnets with 10 to 15 million client machines still exist. The massive Rustock and Cutwail botnets, for instance, were taken down in a simultaneous, globally coordinated campaign conducted by Interpol, Microsoft, and the University of Washington, with the help of other key sovereign enforcement agencies.

Though there have been some truly enormous DDoS attacks in recent years (including a documented 40 Gbps attack in 2010 and a 127 Gbps attack in 2009[4]), most DDoS attacks are much smaller. Approximately 80 percent involve less than 1 Gbps[1] and short durations (typically hours, not days).

Modern botnet clients can be easily constructed by purchasing a software development kit (SDK), available on the Internet for $1,500 to $3,000, and then tuning the software to perform the malicious activity desired, whether spam, DDoS attacks, click fraud, adware, holding data hostage, bank authentication credential theft, and so on. These clients can attack other, nearby computers, infect them, and increase the size of the botnet. The individual who put together the botnet is known as the bot-herder. Once the botnet has achieved the desired size, the bot-herder can either use it to threaten or attack targets or simply rent it to interested parties wishing to threaten or launch an attack.

| Aspect | Cost per Client or Email ($US) | Cost per 10,000 Clients ($US) |
| --- | --- | --- |
| Per client acquisition[5] | $0.04 to $0.10 | $400 to $1,000 |
| DDoS attack (per hour) | $0.01 to $0.02 | $100 to $200 |
| DDoS extortion | $10,000 (common) | N/A |
| Spam emails | $0.005 to $0.015 | $0.50 to $1.50 |
| Click fraud[6] | $0.15 per client | $1,500 |
| Adware[7] | $0.30 to $1.50 per install | $3,000 to $15,000 |

Figure 4: The economics of botnets

It can be difficult to judge the exact size of a botnet from its advertising, because bot-herders have financial motives to make their botnets appear to be as large as possible.
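A rough back-of-envelope reading of the Figure 4 ranges shows why renting out a botnet is attractive: the acquisition cost is recovered after only a few hours of DDoS rental. The midpoint values below are assumptions taken from the table's ranges, used purely for arithmetic.

```python
# Assumed midpoints of the Figure 4 ranges.
clients = 10_000
acquisition_cost = clients * 0.07      # $0.04-$0.10 per client acquired
ddos_income_per_hour = clients * 0.015 # $0.01-$0.02 per client per hour

hours_to_break_even = acquisition_cost / ddos_income_per_hour
print(f"acquisition ~${acquisition_cost:,.0f}, "
      f"rental ~${ddos_income_per_hour:,.0f}/hr, "
      f"break-even after ~{hours_to_break_even:.1f} rented hours")
# -> acquisition ~$700, rental ~$150/hr, break-even after ~4.7 rented hours
```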
Still, botnets of 10,000 clients or more can be found for rent on underground software markets for a rate of $200 (US) per hour. Such a botnet can create a SYN flood exceeding 4 million packets per second or a sustained conn flood attack exceeding 4 million concurrent connections. To prove their capacity, these medium-sized botnets can sometimes be used for free for 3 minutes in a "try-before-you-buy" model. A botnet of this size, when launched against a competitor on a busy holiday shopping day, could cost that competitor $100,000 per hour. Larger botnets may rent for several thousand dollars per hour and are capable of attacking larger targets and causing larger losses.

For the small- to medium-sized botnets, there is a more compelling way to make money—by threatening to launch an attack and then extorting blackmail from the intended target. According to researchers at CloudFlare, many of these DDoS extortion acts originate in Russia and China. A typical threat may begin like this:

Dear [target], we are a security firm in Shanxi province and we have received word that your company's website may be attacked one week from today. We have some influence with the attackers, and we believe that we may be able to convince them not to attack for $10,000 US. Please advise if you would like us to proceed. Instructions on how to wire funds are following.

The letter is sent by the attacker, and if the monies are not paid, the attack is executed. If the blackmail is paid, the target may receive a thank-you letter:

Dear [target], congratulations. The payment we received was used to cancel the attack. The individuals who were planning the attack enjoy doing business with you and would like to offer you a discount of 20 percent should you choose to hire them to attack someone on your behalf…

| Botnet | Estimated Size | DDoS Attack Types |
| --- | --- | --- |
| Rustock | 2.4 million | Conn flood |
| Cutwail | 2.0 million | Fake SSL flood |
| akbo | 1.3 million | DDoS (unknown type) |
| TFN2K | Unknown | SYN flood, UDP flood, ICMP flood, Smurf attack |
| LOIC | 15,000 | HTTP flood, SYN flood, UDP flood |
| RefRef | Unknown | DoS via SQL server vulnerability |

Figure 5: Some of the world's high-profile botnets

Historically, the majority of botnet clients have been relatively high-performance utilities written in the low-level C/C++ language. This may be changing, as several new technologies will open opportunities to create different attack clients. Attackers will use network and HTTP floods to activate DDoS mitigation defenses and then use CPU-vector attacks to bring down those defenses, causing denial-of-service via mitigation failure. Similarly, an intriguing set of attacks known as ReDoS targets the use of regular expression engines in IPS and firewall defenses. A two-phase attack like this was already used against a major online payment processing system[8] in 2010.

DDoS on the Horizon:
- HTML 5 Threads
- Mobile Handset Bots

The DDoS threat spectrum has evolved from simple network attacks to DNS amplification attacks and finally application layer attacks. DDoS attacks are on the rise in 2012 as global participants increasingly engage in "hacktivism" over digital rights and the changing landscape of intellectual property. While HTTP floods currently account for over 80 percent of today's attacks, expect simple network attacks to make a resurgence as they are combined with HTTP floods into sophisticated multi-stage attacks that achieve denial-of-service.
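The ReDoS class is simple to reproduce safely on your own machine: a regular expression with nested quantifiers backtracks exponentially on a near-matching input, so a tiny request can pin a CPU. This self-contained Python demonstration illustrates the mechanism; it is not taken from the white paper, and the pattern is a textbook pathological case rather than one found in a real IPS.

```python
import re
import time

# Classic catastrophic-backtracking pattern: nested quantifiers.
pattern = re.compile(r'^(a+)+$')

for n in (20, 22, 24, 26):
    s = 'a' * n + 'b'                  # trailing 'b' forces full backtracking
    t0 = time.perf_counter()
    pattern.match(s)
    elapsed = time.perf_counter() - t0
    print(f"n={n}: {elapsed:.3f}s")    # time roughly quadruples per step
```

Each two-character increase in the input multiplies the work several times over, which is why a defense device that applies such a pattern to attacker-controlled bytes can be knocked over by a handful of small packets.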
As new technological frontiers open, expect to see more distributed, low-bandwidth DDoS attacks. The DDoS threat spectrum will continue to evolve as attackers bend those new technologies to the political and commercial conflicts that will always be part of the human condition.

Footnotes:
1. Network Infrastructure Security Report VI, Arbor Networks
2. F5 testing of slowloris.py vs. HP ProLiant DL165 G6 with Apache 2.2.3
8. What We Learned from Anonymous/AntiSec, SC Magazine
The hero of our story, Bob, has the personal principle that Honesty is the Best Policy. At work one day he is suddenly and desperately in need of a postage stamp. He doesn't have one of his own. He does have some company stamps. The letter is really important and must make the next mail pickup. Now Bob is honest in big and little ways: he pays a legitimate tax, and he never fudges a report at work. He keeps his promises to his wife and kids. And he'll even alert the salesperson if she undercharges him for an item. But he really needs this stamp. A lot is riding on this letter being postmarked that very day! He opens his desk drawer, and there they are, a whole sheet of stamps. And he only needs one. He's done so much for the company and no one will notice. So, he looks side to side and then reaches in quick and takes what he needs.

Bob has the same internal indicator that we do: personal peace. Personal peace is felt when we align our actions to our beliefs, when we "walk the talk" or, as I like to say, when our practices are aligned with our principles. A lack of peace signals we are not doing as we believe we should.

Personal peace: let's take a moment and reflect on a few of the many times when we are more likely to feel peace, and remember what that internal indicator of peace feels like:
- solving a problem
- living with purpose
- feeling happy for someone else
- cleaning out a junk drawer
- experiencing physical rest
- feeling connected to someone

For our hero's peace, let's use a healthy temperature as a metaphor. Imagine an old-fashioned thermometer, the glass tube with a bulb for mercury at the bottom. Peace is like a reading of 98.6, and for our purposes, it is the highest temperature that can be reached. Each practice that does not keep him at the principle gradient is the mercury sliding lower and lower, and at some point, it will kill his peace. The day he stole company postage, he dropped 1 degree. It is such a small thing that he can't feel the difference. However, he chooses to speak a little lie later that day. It doesn't take much to reach the dangerous gradient of 94 degrees. At this point, he begins to feel cold. He's uncomfortable when honesty comes up as a topic of discussion at the dinner table. He stammers with confusion when his kids ask him a question about integrity. He has difficulty looking his biggest supporter, his wife, in the eye. He is suffering from low body temperature, or 'hypothermia of the soul.'

Now Bob can process the conflict between his principles and practices in many ways, a few that I can think of:
1. Bob can take a Sharpie and make a new line, labeling it 98.6 degrees, basically lowering his standards and keeping his practices where they are.
2. He can give up and call himself a liar, but keep the personal principle that honesty is the best policy where it is, so he lives with the cold reality of failure.
3. Bob can start looking around at others who say honesty is the best policy while measuring their deficiencies, people who are more dishonest than he is, and use that to try to keep warm. But this is like using a potholder where a down comforter is needed.
4. He could even hold the thermometer up to a light bulb and actually lie to himself that he is an honest man.

But none of these are authentic. They do not reach the true mark; they do not bring the personal peace of a healthy alignment between practice and principle. Luckily Bob's principle of "Honesty is the Best Policy" causes him to look at his practices and principles honestly.
He begins to administer adequate treatment to reach a healthy temperature. He may slip 2 degrees once in a while (he is human after all), but he doesn't like the cold, and he is now very aware of when his temperature drops. He enjoys living at a healthy temperature. He actively seeks warm peace.

I will end with an excerpt from an article by Dr. John Forsyth, author of Anxiety Happens: "True peace of mind may depend less on our psychological and emotional weather and may have everything to do with whether we are living our lives in alignment with our core, our deepest desires, and values. After all, whether we do that or not, and how consciously we do choose to do what matters (even when faced with the inevitable pain of life) will add up to a life lived well or not. And, maybe that's what really matters in the end."

Do you know the song "Hush, Little Baby"? It starts with a soft plea for a baby to stop crying and the lovely thought of getting a mockingbird. One interpretation could be that the dad is buying the mockingbird so the kid can hear how ugly a cry sounds, and if the kid stops crying, the bird won't sing, and in the subsequent stillness, the child will be rewarded with a piece of fine jewelry. However, from the rest of the song, we can see that the bawling is assumed to be an intense dissatisfaction with life, hence the quest to make things right, as if hope for the next best thing should be the substantial satisfaction needed to stop sniveling. And if all else fails, there is another promise to the previously need-to-be-appeased howling baby: he or she will still be the sweetest in town. It makes you wonder what all the other babies are like.

A clever rhyme and a catchy tune are not enough to make the message harmless. A songless mockingbird needs no replacement. Is it not still a wonder? Does the lack of skill somehow diminish its beauty? Can the baby not be introduced to the amazement of the texture of feathers and their varied colors and design? Oh, and to stop a moment while baby and papa share the amazement of the bird's ability to fly! Oh, the power of holding his baby and making up a story of what it would be like to fly. To say 'thank you' to whatever source you believe gave us the mockingbird brings peace.

I rewrote the song.

Hush, little baby, don't say a word,
Papa's gonna buy you a mockingbird.
And if that mockingbird don't sing,
Papa's gonna show you its wing.
And when you see its feathers
you'll understand they're treasures.
And we'll take a peek
at its powerful pointy beak.
And pause to see the bird's wonder
even though sad times we are under.
Oh, and how a bird can take flight
and we can see what's good and right.
And you'll always have more
if you learn to find that which to adore.
Because things don't always work out right
and life sometimes has strife,
But be assured, my little one,
we need not be undone.
Cry if you need to,
I'll hold you and see you through.
But know we can triumph and be happy
even though this song is sappy.

Is not gratitude the most excellent distraction from sorrow? Practicing gratitude brings peace. And finally, what is so wrong with NOT being the sweetest little baby in town? Just like a bird is still of great worth even though silent, we are still of great worth even when we are not so sweet. Let's march to our own upbeat drummers, change our tune to something cheerful, and write our own lullabies of gratitude.
As published on Military OneSource's Blog Brigade.

"All over" was a vague term in my pre-mother days, most often used as hyperbole. Now, "all over" was literal and I was looking at it. I was stunned when I saw the kitchen table. I stood still, scatterbrained, and staring. My wonderful children, who should have had their church shoes on and been standing at the front door, were running shoeless all over the house. One or all four of them had tipped a box of cereal all over the table. The table was full of tan flakes in various states of matter and white sugar. I had 3 minutes before I had to be driving away from my house and about 30 minutes of work to do before I could even get out the door. My husband was away, so no dividing-and-conquering strategies. And though my senses were currently divided, I would not be conquered. I retreated to the garage for the broom and dustpan. Yep, I swept my kitchen table; not my finest homemaking moment. I corralled the kids to the door, somehow they got shoes on, and we made it to church. And if it was like any other Sunday service, my kids were all over the pew and the youngest crawling all over me.

And what do frosted flakes, brooms, and pews have to do with summertime fun in the age of quarantine? As it turns out, at least two lessons lie in my experience. They have to do with this time when we are not all over the town going here and there doing this and that. First: like using a broom to sweep a table, look for creative solutions. Second: I was frazzled because we had a tight schedule, but we can make the most of the time we save by not taking kids all over town to various activities.

1. Summer camp canceled? Be your own camp director! We've done this and the memories are a treasure. I set up a different theme for each day for a week. We had stickers to celebrate completing projects. My husband was away that week, so it also kept me distracted from missing him. There are ideas all over the internet for nature crafts, homemade games, outdoor DIY obstacle courses, etc. Don't forget the s'mores! Even microwave ones count. If you have a fire pit, there you go, your own campfire program!

2. The local library had to go digital or postpone summer reading programs? Put on your own summer reading event. Have 'read the book, see the movie' activities. Make paper puppets of the characters in a book and act out the story. Read out loud to your kids outside. We used to put pillows all over a corner of our deck and I'd read aloud, and we'd eat popcorn. Simple. But it was just different enough to keep things fresh. Make your own reading bingo with prizes. (I suggest keeping the prizes simple and inexpensive, or maybe not a thing at all but a 'stay up late' card or 'get out of a chore' card.) But add things besides reading to the squares. Literary things like writing a thank-you note to a favorite author, drawing a picture of a character in a chapter book, or looking up more information about a place or thing from a storybook. You can even require non-fiction books or certain books. After all, you're the librarian now.

Make the Most of the Time

As time moves past the cereal incident, there will come a time when the quarantine is all over. Use those 10 minutes saved by not having to get shoes on small squirmy feet toward a complex project best broken into small pieces. 10 minutes a day for 3 months of summer equals 15 hours!

1. Stop-action films. There are free apps for your phone that let you make stop-action films. Write up a story. Gather the characters from all over the house.
Create backgrounds. Check lighting. Set it up. Practice. Take pictures. Edit. Add music, etc. Too much to do at one time, but broken up into 10 minutes a day, you would have a mighty fine product at the end of the summer.

2. Family history. Gather photos and start writing the stories surrounding each image. Pick an ancestor, but remember, family history isn't all about those who have died. Current events are history. Think of your parents when they were your kids' ages. Don't cover just the basics of birth date and place, etc. Let your mind go all over the place. Describe them: write about eye color, favorite sayings, or jokes they told over and over. Want to tell the story of a family member that has lived a long time? Make index cards for every year. Fill in each card with all that you know happened that year and file them chronologically. Research and fill the cards. Think of places they lived, how old their family members were that year, events that happened in their community. Find out little things, like the price of bread and gasoline. Was there a pandemic? What did they hope to be when they grew up? What music was on the radio? What were the major scientific developments? What were the fashion trends? Then when you get to put it all together, you'll already have it in order and lots of details. Have the kids interview a family member, hopefully the oldest family member you're in contact with. Record the interview somehow. Think of the joy a 10-minute call once a day for a week would bring to an elderly relative. Plus, imagine the treasure that recording will be to your kids' grandchildren. They'll be all over it.

Check out this great video! I explain the law God set forth to take care of His children so that there are "no poor among them."

The composite above is of my two daughters, my mother, my grandmothers, and me. All 6 of us have enjoyed being born members of the Church of Jesus Christ of Latter-day Saints. Five of us served full-time missions for the church. I estimate that our church service from age 20 to death/present is a combined 217 years. I don't know all that the others know, but I have listened to and watched my mother as she implemented teaching strategies and heard her quote her mother; I've certainly paid attention to my daughters' fine examples, and I've heard stories about my father's mother. Thirty of those combined years are mine, teaching in the church. I say all that merely to establish a bit of credibility.

Here are some tips for the nitty-gritty of teaching a church class, and when I say nitty-gritty I am not talking about knowing and loving the gospel or knowing and loving the students... just the nitty-gritty running of a classroom so there is an increased likelihood your students will hear you, understand, and feel the Spirit.

1. If you give homework or a challenge, follow up! Send a reminder of the challenge, be encouraging, and have some sort of reporting in place. Not reporting in a 'gotcha' way or even publicly. Instead, give them a sticker to put on the challenge, something totally private that marks that they've completed it.

2. When it is time to have a prayer, don't ask, "Do I have any volunteers to pray?" or any form thereof. If you teach a class, you'll have regulars. At the first class, pass out index cards and ask them to put their name on it (or have their names already on it) and have them write whether they are willing to pray or read out loud. Respect this.
When you are in class, ask someone to pray like this: "Brother So-and-so, will you say the prayer?" EZ PZ. Direct. Saves SOOOO much time and awkwardness.

3. If you teach children or youth, be sure you have the parents' permission to contact the child out of class. Find out their preferred method of being contacted. If you have children enrolled in your class whose parents do not attend, be sensitive to their situation and respectful.

4. If you are going to have class members read, display the book, chapter, and verse. This will increase your chances of not having to repeat the reference, and it makes for a smoother class.

5. If you want class members to be silent participants by thinking about the lesson, post the questions you want them to ponder so they can see them. Tell them at the beginning you want to hear what they have to say about what you are learning together. All will be more comfortable answering questions they know in advance.

6. Don't ask questions before you teach the lesson. UGH! A big pet peeve of mine. Starting the lesson off with a question that people will get wrong and then saying something dorky like, "That's not what I'm looking for..." totally shuts off learning and participation.

7. You are gonna have bad lessons. Things will get out of hand. You are teaching a classroom of individuals at all sorts of spiritual developmental stages. They are coming from their lives into the church. They could have had a very rough morning or be going through a tough time. It is OK. Just express love and be patient. Don't get bogged down in a "look how much effort I put into this lesson, they'd better appreciate it" attitude; you'll fail for sure!

8. If it is a particularly spiritual lesson, say about the last week of Christ's life, or you'll be sharing a personal tender story, tell the class at the beginning.

9. State the objective of the lesson at the beginning. It should not be hidden or a big reveal at the end. Keep them with you; you are learning together. You are not teaching because you are the expert, you are a leading learner.

10. Be enthusiastic.

11. Try to tell the scripture story/block you are studying at least two ways; the best is three. Read it together, have them read silently, show a video, have a student (ask them beforehand) summarize it, have them teach each other, do a radio play of it, have them draw a storyboard... you can think of ways. But if you want them to get the story so they can see the doctrine, repeat, repeat, repeat.

12. One principle that is nitty-gritty and high end: you've got to love them. Relax your face when you look at them. Smile. Don't correct, redirect. These are God's children and He has entrusted them to you... teach in a way that pleases Him.

Imagine you live in 2000 BC. Your name is Hagar. You are a servant of an older woman, Sarai. She and her husband are Hebrews. You are an Egyptian; in fact, some people later would say your name Hagar means stranger. The older woman can't have children, and you are given to her husband, Abram, so he can have a son. You get pregnant. One day the older woman thinks you are giving her the evil eye and starts treating you harshly. So harshly that you, alone, pregnant, run away. An angel tells you to go back. He also tells you to name your son Ishmael and that he will grow into a wild man (which could mean nomad or free man) and will live among his people. You go back. Then 14 years later, the older woman, now called Sarah, is pregnant.
She sees your son in what the scriptures call "mocking," and she tells the father of your son, now called Abraham, that her son Isaac will not share any birthright inheritance with your son. She demands you and your son be thrown out! Abraham is sad about it, but he is told by God to let you go. Early the next morning, he gives you some bread and water, and off you and your son go into the desert. The water doesn't last very long. You throw the empty container under a bush. You can't bear to see your son die, so you have him lie next to a tree and you go off, just as far as an arrow will fly (a bowshot), so you can still see your son. You start to cry. Your son prays. An angel shows up and says your son isn't going to die and he'll yet live to have a family. He says to go and lift him up and hold his hand. He then shows you a well; you get your bottle, fill it up, and give your son a drink. You don't return to your home, but continue onward. And what the angels told you becomes true: your son marries an Egyptian (you are an Egyptian), and he lives among his people! He does not have to answer to his father, so he is also a free man. And he has children; 12 sons' names are recorded. And ironically, as you were a bowshot (as far as an arrow travels) away from him when the angel visited, Ishmael became an archer.

Hagar thought she was at a dead end, but it was really the middle of a journey.

Let's examine the scriptural account a little more closely. The first thing is about Sarah. Both times, she didn't ask for clarification on her perceptions of Hagar and Ishmael. She just acted on what she thought was hatred and mockery. And such drastic action! The narrative doesn't say whether she even talked to Hagar about it, to be sure or to offer a chance for the teenage boy to do better. Or even to double-check with Ishmael; maybe she needed to hear his side of things before she took action.

The second is how Hagar was allowed to leave. She was helped on her way by someone whom she should have been able to rely on. And he barely supplied her. I've read Abraham was a rich man, and yet he sent her out with one container of water and just enough bread to carry. No mule loaded with supplies? No helper?

And the third is that Hagar didn't see the well until she had divine help to open her eyes. The account doesn't say, "suddenly a well appeared"; it says Hagar's eyes were opened and so she saw the well. She saw another way: that God was more powerful than Abraham, and she could depend on God.

When we look closer at Hagar we see promises, divine intervention, and mercy. (My sources for my research were the King James Version of the Old Testament, churchofjesuschrist.org, and myjewishlearning.com.)

The fire of testimony's witness
Does not incinerate the weaknesses waiting,
Inflammable, sitting as useless rubble,
Polluting our inheritance.
Optimistically examined one by one,
Hardened chunks pulverized with joyful power
To infinitesimal particulates,
Distilled to usefulness by perfect Living Water.

While missionaries are called as missionaries first and their area of service is secondary, the area is still a divine assignment. I'd like to share what I learned about mission calls and prayer. I earnestly prayed I would be sent to the land of my ancestors and hoped the answer would be "Yes." I had been singled out so many times, and not in a positive way, for being fair-skinned and blonde. I thought it would be comforting to be around more people who looked like me. I imagined Denmark or Sweden.
When I received my call, I was sent to the Illinois Peoria Mission. I accepted that the answer to my prayers was "No." And that was OK; I was thrilled to be considered worthy to serve and humbled by the chance to witness for Christ. However, through reading family history materials, by the time I reported to the MTC, I knew that in many ways Illinois is a land of my ancestors. So, the answer to my sincere plea was "Yes." Since my mission I have learned many more details of my Illinois family and other ancestors.

Abraham Hunsaker, born in 1812 in Jonesboro, Illinois, moved to Quincy, Illinois at age 14. Their son, my great-great-grandfather, Allen, was born in Quincy in 1840. While the Hunsakers were living in Quincy, the Saints, escaping from the political and social persecution that hounded them in Missouri, crossed the Mississippi and found refuge there. The displaced McBride family was taken into the Hunsaker home and, while living there, took the initiative to introduce the gospel to Abraham and Eliza. They accepted the Gospel, were baptized in November 1840, and eventually joined with the Saints in their relocation to the area of Nauvoo. Quincy was my first area.

While the William and Sally Stacy Murdock family were living in New York, a missionary named Jonathan Dunham ended up at their front door. The family was baptized in 1836. When they were able to, they sold their fine property and gathered to Nauvoo. William died and was buried in Nauvoo.

When Simeon Adams Dunn was 37, he was taught the gospel by his brother, Elder James Dunn, whom he had not seen since childhood. After joining the church in January 1840, Simeon walked the 500 miles from Michigan to Nauvoo, Illinois. While living in Nauvoo he worked on the Nauvoo Temple, saw one of his daughters married in the temple, buried his wife, remarried, and had a daughter born there. His house still stands on the corner of Parley and Hyde Streets. Simeon's third wife, Harriet Atwood Silver, joined the church in Massachusetts in 1843. She was 25 and travelled alone to meet the Saints in Nauvoo, and she lived there until the exodus to the West.

Of course, the most important event in my family that took place in Illinois was when Andrew and I lived there, some 20 years after my mission ended, and a daughter was born!

Simeon A. Dunn's home in Nauvoo, Illinois.

On this sacred day, hearing from others who miss going to church services, I wanted to explain how members of the church I belong to, the Church of Jesus Christ of Latter-day Saints, are able to have church meetings in their homes during this time of social distancing, including taking the sacrament. We do not have paid clergy. Just like Christ's apostles during His mortal ministry, our church leaders are called from their professions to serve, unpaid, in the church. The modern prophet was a heart surgeon, as the ancient prophet Peter was a fisherman. Our church does not have a hierarchy, in the sense that serving in one calling does not mean you'll be given more responsibility in the next, nor do you have to meet certain criteria to be selected by a council to lead. There is no 'moving up.'

NOTE: The priesthood is the Lord's power to give to whom He pleases. So arguments and protests to any body of people to demand the power of the priesthood be given to women are a waste of time. Church leaders represent Christ to the people, not the people to Christ. The young men and men in our church are ordained as holders of the priesthood, as young as the January of the year in which they will turn 12.
There are different responsibilities for each office in the priesthood. My son, Bryant, is a Priest. He can prepare, bless, and pass the sacrament (bread and water). He can also perform baptisms. My husband, as a High Priest, can do all that my son can, be called to preside over church units, and, by the laying on of hands (exactly like Jacob blessing Ephraim and Manasseh, and Peter and John giving the gift of the Holy Ghost), give blessings to others.

I have been enjoying church meetings at home. We get dressed in church clothes. We sing hymns. We partake of the sacrament. And we pray. It is personal, peaceful, and focused. When individuals and families do not have someone who is ordained to the priesthood to provide the sacred ordinance of the sacrament, there are those who can be assigned to bring it to them. It is a beautiful thing. I am happy to be a member of the Church of Jesus Christ of Latter-day Saints. The Gospel of Jesus Christ brings me great happiness.

When I was in the all-girls church group for 10- to 12-year-olds, the "Merry Misses," one of my two lovely leaders was Lydia Dean. Sister Dean had been blind for a few years. I could tell she loved me. I certainly loved her. One Sunday she sat by me in church. We were sitting on the pew closest to a side door, near the podium. The door leads to the outside, and it was a sunny day. Light entered the chapel. I always loved how that looked and felt. That Sunday Sister Dean sat by me. Sometime during the service, she leaned over and whispered to me, "You're beautiful." My immediate thought was, "Ya, only a blind person would think I was beautiful." But as I sat there I wondered, well, what is there about a person you can't see that can be beautiful? And I realized there are many parts to a person you do not see. And so I put forth my feeble efforts to focus on what I thought those were, such as following the Savior's example to "go about doing good." Her blind observation brought me great insight. The insight instigated an internal shift, forever changing my life. So much gratitude do I have for her that I named one of my children in her honor.

A few years later, I came upon something Spencer W. Kimball, the Lord's prophet at the time, said: "Life gives to all the choice. You can satisfy yourself with mediocrity if you wish. You can be common, ordinary, dull, colorless, or you can channel your life so that it will be clean, vibrant, useful, progressive, colorful, and rich." And none of those beauties in life have anything to do with teeth, clothing, or haircut (see photo).

And I find comfort in the scriptural account in Samuel, where the Lord tells Samuel that the reason he has not chosen the oldest son of Jesse has nothing to do with his appearance. Or maybe Samuel rejected Eliab after he was not chosen, finding a reason and seeing him as unattractive. Whatever the cause of his commentary, the lesson is in the Lord's reply: "Look not on his countenance, or on the height of his stature; because I have refused him: for the Lord seeth not as man seeth; for man looketh on the outward appearance, but the Lord looketh on the heart."

Oh, and by the way, my appearance got way worse. I got glasses (I bought the cheapest pair and they were so unattractive, and designed for a man), and though I was happy to get braces, I hated the headgear!

Find out more about Lydia Dean: an article about her was published in the Church of Jesus Christ of Latter-day Saints' monthly publication, the Ensign Magazine.
Click on the link below to read. Me in 1978, 10 years old, about the time Lydia Dean's whisper rang loud in my heart.

My great-grandfather, Lewis Hunsaker, grew up in Northern Utah. He was a child and grandchild of polygamists who were also Mormon Pioneers, went to church all his life, served a mission for the church, had 13 children with his one wife, dedicated many hours in the temple, supported the Boy Scouts of America for decades, and was a hardworking, dependable, righteous man. I am not sure at what age he started working, but I would assume he was a young boy. He shared the following experience he had when he was 20 years old:

"While herding sheep I read the Bible through twice, the Doctrine and Covenants twice, and the Book of Mormon five times. I have read it through several times since. As I finished reading it the fifth time and came to these words, And when ye shall receive these things, I would exhort you that ye would ask God, the Eternal Father, in the name of Christ, if these things are not true; and if ye shall ask with a sincere heart, with real intent, he will manifest the truth of it unto you by the Power of the Holy Ghost, and by the Power of the Holy Ghost ye shall know the truth of all things.

"I quit reading and walked a little closer to where the sheep were feeding, thinking of the words I had just read. I looked to see if there was anyone in sight, then I knelt and prayed earnestly to my Heavenly Father. After praying, oh what joy filled my soul; it is beyond the power of mortal man to describe it. I shall never forget that occasion. It was in October 1891, two miles south of Washakie and one mile west."

I also know the Book of Mormon is true scripture. People say they know something is true "with every fiber of my being," or "as sure as I'm standing here," or even "as I know the sun will rise tomorrow." But my center of knowing the Book of Mormon is true isn't in my physical self. It is not created by my environment, confirmed with my senses, nor experienced in a tangible manner. I know the Book of Mormon is true scripture through the simple personal spiritual communication of the Holy Ghost: joy. I wanted to know, I read it, and I asked my Heavenly Father if what I read was real. And in reality, He answered my prayer with a powerful soul-filling answer: the Book of Mormon is true scripture.

While Lewis was about his work he read the scriptures. Now, my work is quiet, but I work with my hands as I fold laundry, do dishes, make meals, occasionally mop the floors, and dust every once in a while. I usually listen to podcasts or music to keep me going through boring tasks. Can I choose to be like Lewis? I can't read the scriptures while I work, but I can certainly listen to them! According to Google searches, it would take 115 hours to listen to the entire Holy Bible, the Book of Mormon, the Doctrine and Covenants, and the Pearl of Great Price. That works out to about 19 minutes a day for a year! However, I've never been that careful with time and record keeping. So, I could just listen every time I am doing any of the chores listed above... I cannot think of an excuse... so, here I go!

UPDATE: I did start this. In five days of listening while working around the home (and not even during all of the time I've been doing housework) I have gotten to Exodus 26. Five days. Just saying, I'm glad I've chosen to be like Lewis.
In case you are wondering, the hours it takes per book of scripture: 72 for the Holy Bible, 25 for the Book of Mormon, and 18 for the Doctrine and Covenants and Pearl of Great Price.

God was like a neighbor who lived a few houses down the street. I'd go to him when something was broken, come back to collect the repaired problem, wave my hand in unspoken thanks, and speed back home. I didn't know more about him and never invited him to my home. I was relieved to know he was there but thought of him as someone who lived close by and had a special set of skills. But in an instant I knew God loved me.

I was thirteen years old. I'd been spit on, slapped, called names, called ugly, mocked, threatened, groped, and accused of gross sexual activity. I felt like the beauty of the world was not for me to enjoy. I did not deserve the air I breathed. I couldn't sleep at night. I was convinced the devil would take me away because the world was too good for me.

There I stood in the middle of a pack of girls. Always in a pack, like animals. They surrounded me, kicking, laughing, and enjoying it. I withdrew into survival mode. One of the pack leaders laughed while remarking she could see my heart slamming against my rib cage. My sense of self was used up. Fear filled me. I wanted to melt into the ground and not be seen. I imagined ways to escape. I thought of Scotty beaming me up to the Enterprise, of a hole opening beneath my feet and swooshing me through a tunnel across town to my bedroom. Fighting them even came to mind, but I'd seen too many Westerns. If I beat one, they'd take revenge. If I lost, I would be an easy target whenever one of them wanted to feel big. Though it crossed my mind then, I hadn't run away yet. If I ran, I knew I could never recover.

I looked toward the helpful neighbor, God. I prayed. He spoke. "Stand firm." I doubted. My legs and resolve were quickly shrinking. I was going to collapse and give in. "Stand firm," was repeated. I believed. A physical power of strength entered my legs and moved up, like an aperture inside a soft clay sculpture suddenly filled with steel. He made it possible to do as he told me to do. With that tangible power, I could and did stand firm. In that instant I knew what I thought was a neighbor was an all-powerful God. He knew what I needed, he communicated it to me, and he empowered me. He loved me and I was worth loving. I wisely invited him into my home.

Previously published on mormonwomen.com.
DNA replication

DNA replication is the process of copying a double-stranded deoxyribonucleic acid (DNA) molecule, a process essential in all known life forms. The general mechanisms of DNA replication are different in prokaryotic and eukaryotic organisms. As each DNA strand holds the same genetic information, both strands can serve as templates for the reproduction of the opposite strand. The template strand is preserved in its entirety and the new strand is assembled from nucleotides. This process is called semiconservative replication. The resulting double-stranded DNA molecules are identical; proofreading and error-checking mechanisms exist to ensure extremely high fidelity.

In a cell, DNA replication must happen before cell division. Prokaryotes replicate their DNA throughout the interval between cell divisions. In eukaryotes, timing is highly regulated: replication occurs during the S phase of the cell cycle, preceding mitosis or meiosis I.

A DNA strand is a long polymer built from nucleotides; two complementary DNA strands form a double helix, each strand possessing a 5' phosphate end and a 3' hydroxyl end. The numbers followed by the prime indicate the position on the deoxyribose sugar backbone to which the phosphate or hydroxyl group is attached (numbers without primes are reserved for the bases). The two strands of the double helix run antiparallel to each other: one strand runs in the 5' → 3' direction while its complement runs 3' → 5'. Each nucleotide consists of a phosphate and a deoxyribose sugar, which together form the backbone of the DNA double helix, plus a base. The bonding angles of the backbone ensure that DNA will tend to twist along the length of the molecule, giving rise to a double helix shape instead of a straight ladder. Base pairs form the steps of the helix ladder while the sugars and phosphate molecules form the handrails.

Each of the four bases has a partner to which it makes the strongest hydrogen bonds. When a nucleotide base forms hydrogen bonds with its complementary base on the other strand, they form a base pair: adenine pairs with thymine and cytosine pairs with guanine. These pairings can be expressed as C•G and A•T, or C:::G and A::T, where the number of colons indicates the number of hydrogen bonds in each base pair. For example, a 10-base strand running in the 5' → 3' direction that has adenine as its 3rd base will pair with thymine as the 8th base of the complementary 10-base strand running in the opposite direction.

The replication fork

The replication fork is a structure that forms when DNA is being replicated. It is created through the action of helicase, which breaks the hydrogen bonds holding the two DNA strands together. The resulting structure has two branching "prongs," each one made up of a single strand of DNA.

Lagging strand synthesis

In DNA replication, the lagging strand is the DNA strand at the replication fork opposite the leading strand. It is also oriented in the opposite direction compared to the leading strand, with its 5' end near the replication fork instead of the 3' end, as is the case with the leading strand.
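The base-pairing and antiparallel-orientation rules above can be captured in a few lines of code. The following is an illustrative sketch (not part of the original article): given one strand written 5' → 3', it returns the complementary strand, also reported in its own 5' → 3' orientation.

```python
# Watson-Crick pairing: A<->T (two hydrogen bonds), C<->G (three).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand_5to3: str) -> str:
    """Complement each base, then reverse, because the two strands
    are antiparallel: the new strand's 5' end faces the old 3' end."""
    return "".join(COMPLEMENT[b] for b in reversed(strand_5to3))

print(reverse_complement("ATGCCGTA"))  # -> TACGGCAT
```

Note how the position arithmetic in the text falls out of this: base i (1-indexed) of a strand of length n pairs with base n + 1 - i of the complement, so the 3rd base of a 10-base strand pairs with the 8th base of its partner.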
When the enzyme helicase unwinds DNA, two single-stranded regions of DNA (the "replication fork") form. DNA polymerase cannot build a strand in the 3' → 5' direction. This poses no problem for the leading strand, which can be synthesized continuously in a processive manner, but it creates a problem for the lagging strand, which cannot be synthesized in the 3' → 5' direction. Thus, the lagging strand is synthesized in short segments known as Okazaki fragments. On the lagging strand, primase builds an RNA primer in short bursts. DNA polymerase is then able to use the free 3' hydroxyl group on the RNA primer to synthesize DNA in the 5' → 3' direction. The RNA fragments are then removed (different mechanisms are used in eukaryotes and prokaryotes) and new deoxyribonucleotides are added to fill the gaps where the RNA was present. DNA ligase then joins the deoxyribonucleotides together, completing the synthesis of the lagging strand.

Leading strand synthesis
The leading strand is the DNA strand that is read in the 3' → 5' direction (relative to the template) but synthesized in the 5' → 3' direction, in a continuous manner. On this strand, DNA polymerase III is able to synthesize DNA using the free 3'-OH group donated by a single RNA primer (multiple RNA primers are not needed), and synthesis proceeds continuously in the direction in which the replication fork is moving.

Dynamics at the replication fork
Sliding clamps in all domains of life share a similar structure and are able to interact with the various processive and non-processive DNA polymerases found in cells. In addition, the sliding clamp serves as a processivity factor. The C-terminal end of the clamp forms loops that interact with other proteins involved in DNA replication (such as DNA polymerase and the clamp loader). The inner face of the clamp allows DNA to be threaded through it; the sliding clamp forms no specific interactions with DNA. There is a large 35 Å hole in the middle of the clamp, which allows DNA to fit through it, with water taking up the rest of the space and allowing the clamp to slide along the DNA. Once the polymerase reaches the end of the template or detects double-stranded DNA (see below), the sliding clamp undergoes a conformational change that releases the DNA polymerase.

The clamp loader, a multisubunit protein, is able to bind to the sliding clamp and DNA polymerase. When ATP is hydrolyzed, the clamp loader loses affinity for the sliding clamp, allowing DNA polymerase to bind to it. Furthermore, the sliding clamp can only be bound to a polymerase as long as single-stranded DNA is being synthesized. Once the single-stranded DNA runs out, the polymerase is able to bind to a subunit of the clamp loader and move to a new position on the lagging strand. On the leading strand, DNA polymerase III associates with the clamp loader and is bound to the sliding clamp. Recent evidence suggests that the enzymes and proteins involved in DNA replication remain stationary at the replication forks while DNA is looped out to maintain the bidirectional synthesis observed in replication; this is a result of an interaction between DNA polymerase, the sliding clamp, and the clamp loader.

DNA replication differs somewhat between eukaryotic and prokaryotic cells. Much of our knowledge of the process of DNA replication was derived from the study of E. coli, while yeast has been used as a model organism for understanding eukaryotic DNA replication.
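The base-pairing rules and antiparallel geometry described above are easy to check computationally. The following minimal Python sketch (an illustration added here, not part of the original text; the example sequence is invented) builds the complementary strand of a sequence and verifies the worked example given earlier: in a 10-base strand read 5' → 3', the 3rd base pairs with the 8th base of the complement when the complement is numbered from its own 5' end.

```python
# Minimal sketch: complementary-strand construction for a DNA sequence.
# Base-pairing rules from the text: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the complementary strand, written 5' -> 3'.

    The complement of each base is taken and the result is reversed,
    because the two strands of the double helix run antiparallel.
    """
    return "".join(PAIR[base] for base in reversed(seq))

if __name__ == "__main__":
    strand = "GGATCGTAGC"  # invented 10-base strand, 5' -> 3'; 3rd base is A
    comp = reverse_complement(strand)
    print(strand, "->", comp)
    # Base 3 of the original (index 2) pairs with base 8 of the complement
    # (index 7) when both strands are numbered from their own 5' end:
    # position 11 - 3 = 8 for a 10-base strand.
    assert PAIR[strand[2]] == comp[10 - 3]
    print("base 3 (%s) pairs with complement base 8 (%s)" % (strand[2], comp[7]))
```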
Mechanism of replication
Once priming of DNA is complete, DNA polymerase is loaded onto the DNA and replication begins. The catalytic mechanism of DNA polymerase involves the use of two metal ions in the active site, together with a region of the active site that can discriminate between deoxynucleotides and ribonucleotides. The metal ions are generally divalent cations that help the 3'-OH initiate a nucleophilic attack onto the alpha-phosphate of the incoming deoxyribonucleotide and that orient and stabilize its negatively charged triphosphate. Nucleophilic attack by the 3'-OH on the alpha-phosphate releases pyrophosphate, which is subsequently hydrolyzed by inorganic pyrophosphatase into two phosphates. This hydrolysis drives DNA synthesis forward.

Furthermore, DNA polymerase must be able to distinguish between correctly paired and incorrectly paired bases. This is accomplished by recognizing Watson-Crick base pairs through an active-site pocket that is complementary in shape to the structure of correctly paired nucleotides. This pocket has a tyrosine residue that forms van der Waals interactions with a correctly paired nucleotide. In addition, double-stranded DNA in the active site has a wider and shallower minor groove that permits the formation of hydrogen bonds with the third nitrogen of purine bases and the second oxygen of pyrimidine bases. Finally, the active site makes extensive hydrogen bonds with the DNA backbone. These interactions result in DNA polymerase III closing around a correctly paired base. If a base is inserted and incorrectly paired, these interactions cannot occur, due to disruptions in hydrogen bonding and van der Waals interactions.

The mechanism of replication is similar in eukaryotes and prokaryotes. DNA is read in the 3' → 5' direction relative to the parent strand; therefore, nucleotides are synthesized (attached to the template strand) in the 5' → 3' direction relative to the daughter strand. However, one of the parent strands of DNA is 3' → 5' and the other is 5' → 3'. To solve this, synthesis must proceed in opposite directions on the two strands. The leading strand runs towards the replication fork and is thus synthesized in a continuous fashion, requiring only one primer. The lagging strand, on the other hand, runs in the opposite direction, heading away from the replication fork, and is synthesized in a series of short fragments known as Okazaki fragments, consequently requiring many primers. The RNA primers of Okazaki fragments are subsequently degraded by RNase H and DNA polymerase I (acting as an exonuclease), and the gaps (or nicks) are filled with deoxyribonucleotides and sealed by the enzyme ligase.

DNA replication in bacteria (E. coli)
Initiation of replication and the bacterial origin
DNA replication in E. coli is bidirectional and originates at a single origin of replication (OriC). The initiation of replication is mediated by DnaA, a protein that binds to a region of the origin known as the DnaA box. In E. coli, there are five DnaA boxes, each of which contains a highly conserved 9-base pair consensus sequence, 5'-TTATCCACA-3'. Binding of DnaA to this region causes it to become negatively supercoiled. Following this, a region of OriC upstream of the DnaA boxes (known as DnaB boxes) melts. There are three of these regions; each is 13 base pairs long and rich in A-T base pairs. This facilitates melting because less energy is required to break the two hydrogen bonds that form between A and T nucleotides (compared with the three between G and C).
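The initiation signals just described (a conserved 9-base DnaA box and AT-rich 13-mers whose A:T pairs hold only two hydrogen bonds each) lend themselves to simple sequence scanning. The Python sketch below is illustrative only: the oriC fragment shown is invented, not the real E. coli sequence, but the two consensus motifs are the ones quoted in the text.

```python
import re

# Hydrogen bonds per base pair, as stated in the text: A:T = 2, G:C = 3.
H_BONDS = {"A": 2, "T": 2, "G": 3, "C": 3}

DNAA_BOX = re.compile("TTATCCACA")                     # 9-bp DnaA box consensus
THIRTEEN_MER = re.compile("GATCT[ACGT]TT[ACGT]TTTT")   # 13-mer consensus, N = any base

def melting_score(window: str) -> int:
    """Total hydrogen bonds in a window; a lower total means easier melting."""
    return sum(H_BONDS[b] for b in window)

if __name__ == "__main__":
    # Invented toy fragment, NOT the real oriC sequence.
    oric = "GCGCTTATCCACAGGATCTATTATTTTGCGC"
    print("DnaA box at:", [m.start() for m in DNAA_BOX.finditer(oric)])
    print("13-mer at:  ", [m.start() for m in THIRTEEN_MER.finditer(oric)])
    # Slide a 13-bp window and report the easiest-to-melt (most AT-rich) region.
    score, pos = min((melting_score(oric[i:i + 13]), i)
                     for i in range(len(oric) - 12))
    print("Easiest-to-melt 13-mer starts at", pos, "with", score, "hydrogen bonds")
```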
Each of these 13-base pair regions has the consensus sequence 5'-GATCTNTTNTTTT-3' (where N is any base). Melting of the DnaB boxes requires ATP, which is hydrolyzed by DnaA. Following melting, DnaA recruits a hexameric helicase (six DnaB proteins) to opposite ends of the melted DNA; this is where the replication fork will form. Recruitment of helicase requires six DnaC proteins, each of which is attached to one subunit of helicase. Once this complex is formed, an additional five DnaA proteins bind to the original five DnaA proteins to form five DnaA dimers. DnaC is then released, and the prepriming complex is complete. In order for DNA replication to continue, single-strand binding proteins (SSBs) are needed to prevent the single strands of DNA from forming secondary structures and from reannealing, and DNA gyrase is needed to relieve the torsional stress created by the action of DnaB helicase (which it does by introducing negative supercoils). The unwinding of DNA by DnaB helicase allows primase (DnaG), an RNA polymerase, to prime each DNA template so that DNA synthesis can begin.

Termination of replication
Termination of DNA replication in E. coli is accomplished through the use of termination sequences and the Tus protein. These sequences allow the two replication forks to pass through in only one direction, but not the other. In order to slow down and stop the movement of the replication fork in the termination region of the E. coli chromosome, the Tus protein is required. This protein binds to the termination sites and prevents DnaB from displacing DNA strands. However, these sequences are not strictly required for termination of replication.

Regulation of replication
Regulation of DNA replication is achieved through several mechanisms, involving the ratio of ATP to ADP, the ratio of DnaA protein to DnaA boxes, and the hemimethylation and sequestering of OriC. The ratio of ATP to ADP indicates that the cell has reached a specific size and is ready to divide. This "signal" arises because, in a rich medium, the cell grows quickly and accumulates excess ATP. DnaA binds equally well to ATP or ADP, but only the DnaA-ATP complex is able to initiate replication; thus, in a fast-growing cell, there will be more DnaA-ATP than DnaA-ADP. Another mode of regulation involves the levels of DnaA in the cell. Five DnaA-DnaA dimers are needed to initiate replication, so the ratio of DnaA to the number of DnaA boxes in the cell is important. After DNA replication is complete, this ratio is halved, and replication cannot occur again until the levels of DnaA protein increase. Finally, upon completion of DNA replication, OriC is sequestered by a membrane-binding protein called SeqA. This protein binds to hemimethylated GATC DNA sequences; this 4-base pair sequence occurs 11 times in OriC. Only the parent strand is methylated upon completion of DNA synthesis. DAM methyltransferase methylates the adenine residues in the newly synthesized strand of DNA only if the strand is not bound by SeqA. The importance of this form of regulation is twofold: (1) OriC becomes inaccessible to DnaA, and (2) DnaA binds better to fully methylated DNA than to hemimethylated DNA.

Rolling circle replication
Rolling circle replication is initiated by an initiator protein encoded by the plasmid or bacteriophage DNA. This protein nicks one strand of the double-stranded, circular DNA molecule at a site called the double-strand origin (DSO) and remains bound to the 5'-PO4 end of the nicked strand.
The free 3'-OH end is released and can serve as a primer for DNA synthesis by DNA polymerase III. Using the unnicked strand as a template, replication proceeds around the circular DNA molecule, displacing the nicked strand as single-stranded DNA. Continued DNA synthesis can produce multiple single-stranded linear copies of the original DNA in a continuous head-to-tail series. These linear copies can be converted to double-stranded circular molecules through the following process: first, the initiator protein makes another nick to terminate synthesis of the first (leading) strand. RNA polymerase and DNA polymerase III then replicate the single-stranded origin (SSO) DNA to make another double-stranded circle. DNA polymerase I removes the primer, replacing it with DNA, and DNA ligase joins the ends to make another molecule of double-stranded circular DNA.

A striking feature of rolling circle replication is the uncoupling of the replication of the two strands of the DNA molecule. In contrast to common modes of DNA replication, in which both parental DNA strands are replicated simultaneously, in rolling circle replication one strand is replicated first (protruding after being displaced, which gives the process its characteristic appearance) and the second strand is replicated after completion of the first. Rolling circle replication has found wide use in academic research and biotechnology, and has been successfully used for amplification of DNA from very small amounts of starting material.

Plasmid replication: origin and regulation
The regulation of plasmid replication differs considerably from that of chromosomal replication; however, the machinery involved is similar. The plasmid origin is commonly termed OriV, and DNA replication is initiated at this site. The ori region of plasmids, unlike that found on the host chromosome, contains the genes required for its replication. In addition, the ori region determines the host range. Plasmids carrying the ColE1 origin have a narrow host range and are restricted to the relatives of E. coli. Plasmids utilizing the RK2 ori, and those that replicate by rolling circle replication, have a broad host range and are compatible with both gram-positive and gram-negative bacteria. Another important characteristic of the ori region is the regulation of plasmid copy number. Generally, high copy number plasmids have mechanisms that inhibit the initiation of replication. Regulation of plasmids based on the ColE1 origin, a high copy number origin, requires an antisense RNA. A gene close to the origin, RNAII, is transcribed, and the 3'-OH of the transcript primes the origin only if the transcript is cleaved by RNase H. Transcription of RNAI, the antisense RNA, prevents RNAII from priming the DNA because it prevents the formation of the RNA-DNA hybrid recognized by RNase H.

Eukaryotic DNA replication
Although the mechanisms of DNA synthesis in eukaryotes and prokaryotes are similar, DNA replication in eukaryotes is much more complicated. Whereas prokaryotes such as E. coli can initiate a new round of replication before the previous cell division is complete, eukaryotic cells can only initiate DNA replication at a specific point in the cell cycle, known as the S phase, with pre-initiation occurring in G1. Due to the sheer size of eukaryotic chromosomes, they contain multiple origins of replication.
Some origins are well characterized, such as the autonomously replicating sequences (ARS) of yeast, while other eukaryotic origins, particularly those in metazoa, span thousands of base pairs. However, the assembly and initiation of replication are similar in both protozoa and metazoa. Detailed information on yeast ARS elements is available at http://www.oridb.org/index.php.

Initiation of replication
The first step in eukaryotic DNA replication is the formation of the pre-replication complex (pre-RC). The formation of this complex occurs in two stages. The first stage requires that there is no cyclin-dependent kinase (CDK) activity, which can only occur in early G1. The formation of the pre-RC is known as licensing, but a licensed pre-RC cannot initiate replication. Initiation of replication can only occur during the S phase. Thus, the separation of licensing and activation ensures that each origin can fire only once per cell cycle.

DNA replication in eukaryotes is not very well characterized. However, researchers believe that it begins with the binding of the origin recognition complex (ORC) to the origin. This complex is a hexamer of related proteins and remains bound to the origin even after DNA replication occurs; ORC is the functional analogue of DnaA. Following the binding of ORC to the origin, Cdc6/Cdc18 and Cdt1 coordinate the loading of the minichromosome maintenance (MCM) complex onto the origin, by first binding to ORC and then binding to the MCM complex. The MCM complex, a hexamer (Mcm2-7), is thought to be the major DNA helicase in eukaryotic organisms. Once binding of MCM occurs, a fully licensed pre-RC exists. Activation of the complex occurs in S phase and requires Cdk2-Cyclin E and Ddk. The activation process begins with the addition of Mcm10 to the pre-RC, which displaces Cdt1. Following this, Ddk phosphorylates Mcm3-7, which activates the helicase. It is believed that ORC and Cdc6/18 are phosphorylated by Cdk2-Cyclin E. Ddk and the Cdk complex then recruit another protein called Cdc45, which in turn recruits all of the DNA replication proteins to the replication fork. At this stage the origin fires and DNA synthesis begins.

Regulation of replication
Activation of a new round of replication is prevented through the actions of the cyclin-dependent kinases and a protein known as geminin. Geminin binds to Cdt1 and sequesters it. Geminin is a periodic protein that first appears in S phase and is degraded in late M phase, possibly through the action of the anaphase-promoting complex (APC). In addition, phosphorylation of Cdc6/18 prevents it from binding to the ORC (thus inhibiting loading of the MCM complex), while the role of ORC phosphorylation remains unclear. Cells in the G0 stage of the cell cycle are prevented from initiating a round of replication because the MCM proteins are not expressed. Researchers believe that termination of DNA replication in eukaryotes occurs when two replication forks encounter each other.

Numerous polymerases can replicate DNA in eukaryotic cells. Currently, six families of polymerases (A, B, C, D, X, Y) have been discovered. At least four different types of DNA polymerase are involved in the replication of DNA in animal cells: POLA, POLG, POLD1, and POLE. POLA functions by extending the primer in the 5' → 3' direction; however, it lacks the ability to proofread DNA.
POLD1 has proofreading ability and is able to replicate the entire length of a template only when associated with PCNA. POLE is able to replicate the entire length of a template in the absence of PCNA and can also proofread DNA, while POLG replicates mitochondrial DNA via the D-loop mechanism of DNA replication. All primers are removed by RNase H1 and flap endonuclease I. The general mechanisms of DNA replication on the leading and lagging strands, however, are the same as those found in prokaryotic cells. Eukaryotic DNA replication takes place at discrete sites in the nucleus; these replication foci contain the replication machinery (the proteins involved in DNA replication).

A unique problem that arises during the replication of linear chromosomes is chromosome shortening. Chromosome shortening occurs when the primer at the 5' end of the lagging strand is degraded. Because DNA polymerase cannot add new nucleotides to the 5' end of DNA (there is no place for a new primer), the ends would shorten after each round of replication. However, in most replicating cells a small amount of telomerase is present, and this enzyme extends the ends of the chromosomes so that this problem does not occur. This extension occurs when the telomerase enzyme binds to a section of DNA at the 3' end and extends it using the normal replication machinery. This then allows a primer to bind so that the complementary strand can be extended by normal lagging strand synthesis. Finally, telomeres must be capped by a protein to prevent chromosomal instability.

Replication of mitochondrial DNA and chloroplast DNA
D-loop replication is the process by which chloroplasts and mitochondria replicate their genetic material. An important point for understanding D-loop replication is that chloroplasts and mitochondria have a single circular chromosome, like bacteria, rather than the linear chromosomes found in the eukaryotic nucleus. Replication begins at the leading strand origin. The leading strand is replicated in one direction, and after about two-thirds of the chromosome's leading strand has been replicated, the lagging strand origin is exposed. Replication of the lagging strand is one-third complete when the replication of the leading strand is finished. The resulting structure looks like the letter D; this occurs because the synthesis of the leading strand displaces the lagging strand.

The D-loop region is important for phylogeographic studies. Because the region does not code for any genes, it is free to vary, with only a few selective limitations on size and heavy/light strand factors. Its mutation rate is among the fastest anywhere in either the nuclear or mitochondrial genomes of animals. Mutations in the D-loop can therefore effectively track recent and rapid evolutionary changes, such as those within species and among very closely related species.

DNA replication in archaea
Understanding of DNA replication in the archaea is just beginning, and the goal of this section is to provide a basic understanding of how DNA replication occurs in these unique prokaryotes, as well as a comparison among the three domains.

Origin of replication
Archaeal origins are AT-rich and generally have one or more AT stretches. In addition, long inverted repeats flank both ends of the origin; these are thought to be important in the initiation process and may serve a function similar to the DnaA boxes in the eubacteria.
The genes that code for Cdc6/Orc1 are also located near the origin region, and this arrangement may allow these proteins to associate with the origin as soon as they are translated. Initiation of replication begins with the binding of Cdc6/Orc1 to the origin in an ATP-independent manner. This complex is constitutively expressed and most likely forms the origin binding proteins (OBP). Given their similarity to proteins involved in eukaryotic initiation, Cdc6/Orc1 may be involved in helicase loading in archaea. However, other evidence suggests that this complex may function as an initiator and create a replication bubble large enough to allow the helicase (Mcm) to load without the presence of a loader. Once loading of this complex is complete, the DNA melts and the helicase can be loaded.

In archaea, a hexameric protein known as the Mcm complex may function as the primary helicase. This protein is homologous to the eukaryotic Mcm complex. In archaea there is no Cdt1 homologue, and the helicase may be able to self-assemble at an archaeal origin without the need for a helicase loader. These proteins possess 3' → 5' helicase capability. Single-stranded binding protein (SSB) prevents exposed single-stranded DNA from forming secondary structures or reannealing. This complex is able to recruit primase, DNA polymerase, and other replication machinery; the mechanisms of this process are similar to those in eukaryotes.

Similarities to eukaryotic and eubacterial replication
- ORC is homologous to Cdc6/Orc1 in archaea and may represent the ancestral state of the eukaryotic pre-RC.
- A homologous Mcm protein exists in eukarya and archaea.
- The structure of Cdc6/Orc1 resembles the tertiary structure of DnaA in eubacteria.
- Both eukaryotic and archaeal helicases possess 3' → 5' helicase capability.
- Archaeal SSB is similar to RPA.
Greenhouse gases affect Earth's energy balance and climate. The Sun serves as the primary energy source for Earth's climate. Some of the incoming sunlight is reflected directly back into space, especially by bright surfaces such as ice and clouds, and the rest is absorbed by the surface and the atmosphere. Much of this absorbed solar energy is re-emitted as heat (longwave or infrared radiation). The atmosphere in turn absorbs and re-radiates heat, some of which escapes to space. Any disturbance to this balance of incoming and outgoing energy will affect the climate. For example, small changes in the output of energy from the Sun will affect this balance directly.

If all heat energy emitted from the surface passed through the atmosphere directly into space, Earth's average surface temperature would be tens of degrees colder than today. Greenhouse gases in the atmosphere, including water vapour, carbon dioxide, methane, and nitrous oxide, act to make the surface much warmer than this, because they absorb and emit heat energy in all directions (including downwards), keeping Earth's surface and lower atmosphere warm [FIGURE B1]. Without this greenhouse effect, life as we know it could not have evolved on our planet. Adding more greenhouse gases to the atmosphere makes it even more effective at preventing heat from escaping into space. When the energy leaving is less than the energy entering, Earth warms until a new balance is established.

Greenhouse gases emitted by human activities alter Earth's energy balance and thus its climate. Humans also affect climate by changing the nature of the land surfaces (for example by clearing forests for farming) and through the emission of pollutants that affect the amount and type of particles in the atmosphere. Scientists have determined that, when all human and natural factors are considered, Earth's climate balance has been altered towards warming, with the biggest contributor being increases in CO2.

Human activities have added greenhouse gases to the atmosphere. The atmospheric concentrations of carbon dioxide, methane, and nitrous oxide have increased significantly since the Industrial Revolution began. In the case of carbon dioxide, the average concentration measured at the Mauna Loa Observatory in Hawaii has risen from 316 parts per million (ppm; that is, 316 CO2 molecules for every million molecules in the air) in 1959 (the first full year of data available) to more than 411 ppm in 2019 [FIGURE B2]. The same rates of increase have since been recorded at numerous other stations worldwide. Since pre-industrial times, the atmospheric concentration of CO2 has increased by over 40%, methane has increased by more than 150%, and nitrous oxide has increased by roughly 20%. More than half of the increase in CO2 has occurred since 1970. Increases in all three gases contribute to the warming of Earth, with the increase in CO2 playing the largest role. See page B3 to learn about the sources of human-emitted greenhouse gases.

Scientists have examined greenhouse gases in the context of the past. Analysis of air trapped inside ice that has been accumulating over time in Antarctica shows that the CO2 concentration began to increase significantly in the 19th century [FIGURE B3], after staying in the range of 260 to 280 ppm for the previous 10,000 years. Ice core records extending back 800,000 years show that during that time, CO2 concentrations remained within the range of 170 to 300 ppm throughout many "ice age" cycles (see infobox, pg.
B4 to learn about the ice ages), and no concentration above 300 ppm is seen in ice core records until the past 200 years. Measurements of the forms (isotopes) of carbon in the modern atmosphere show a clear fingerprint of the addition of "old" carbon (depleted in natural radioactive 14C) coming from the combustion of fossil fuels (as opposed to "newer" carbon coming from living systems). In addition, it is known that human activities (excluding land use changes) currently emit an estimated 10 billion tonnes of carbon each year, mostly by burning fossil fuels, which is more than enough to explain the observed increase in concentration. These and other lines of evidence point conclusively to the fact that the elevated CO2 concentration in our atmosphere is the result of human activities.

Climate records show a warming trend. Estimating the increase in global average surface air temperature requires careful analysis of millions of measurements from around the world, including from land stations, ships, and satellites. Despite the many complications of synthesising such data, multiple independent teams have concluded separately and unanimously that global average surface air temperature has risen by about 1 °C (1.8 °F) since 1900 [FIGURE B4]. Although the record shows several pauses and accelerations in the increasing trend, each of the last four decades has been warmer than any other decade in the instrumental record since 1850. Going further back in time, before accurate thermometers were widely available, temperatures can be reconstructed using climate-sensitive indicators ("proxies") in materials such as tree rings, ice cores, and marine sediments. Comparisons of the thermometer record with these proxy measurements suggest that the time since the early 1980s has been the warmest 40-year period in at least eight centuries, and that global temperature is rising towards peak temperatures last seen 5,000 to 10,000 years ago in the warmest part of our current interglacial period.

Many other impacts associated with the warming trend have become evident in recent years. Arctic summer sea ice cover has shrunk dramatically. The heat content of the ocean has increased. Global average sea level has risen by approximately 16 cm (6 inches) since 1901, due both to the expansion of warmer ocean water and to the addition of melt waters from glaciers and ice sheets on land. Warming and precipitation changes are altering the geographical ranges of many plant and animal species and the timing of their life cycles. In addition to its effects on climate, some of the excess CO2 in the atmosphere is being taken up by the ocean, changing its chemical composition (causing ocean acidification).

Many complex processes shape our climate. Based just on the physics of the amount of energy that CO2 absorbs and emits, a doubling of the atmospheric CO2 concentration from pre-industrial levels (up to about 560 ppm) would by itself cause a global average temperature increase of about 1 °C (1.8 °F). In the overall climate system, however, things are more complex: warming leads to further effects (feedbacks) that either amplify or diminish the initial warming. The most important feedbacks involve various forms of water. A warmer atmosphere generally contains more water vapour. Water vapour is a potent greenhouse gas, thus causing more warming; its short lifetime in the atmosphere keeps its increase largely in step with warming. Thus, water vapour is treated as an amplifier, and not a driver, of climate change.
Higher temperatures in the polar regions melt sea ice and reduce seasonal snow cover, exposing a darker ocean and land surface that can absorb more heat, causing further warming. Another important but uncertain feedback concerns changes in clouds. Warming and increases in water vapour together may cause cloud cover to increase or decrease, which can either amplify or dampen temperature change depending on the changes in the horizontal extent, altitude, and properties of clouds. The latest assessment of the science indicates that the overall net global effect of cloud changes is likely to amplify warming.

The ocean moderates climate change. The ocean is a huge heat reservoir, but it is difficult to heat its full depth because warm water tends to stay near the surface. The rate at which heat is transferred to the deep ocean is therefore slow; it varies from year to year and from decade to decade, and it helps to determine the pace of warming at the surface. Observations of the sub-surface ocean are limited prior to about 1970, but since then, warming of the upper 700 m (2,300 feet) is readily apparent, and deeper warming has also been clearly observed since about 1990.

Surface temperatures and rainfall in most regions vary greatly from the global average because of geographical location, in particular latitude and continental position. Both the average values of temperature and rainfall, and their extremes (which generally have the largest impacts on natural systems and human infrastructure), are strongly affected by local patterns of winds.

Estimating the effects of feedback processes, the pace of warming, and regional climate change requires the use of mathematical models of the atmosphere, ocean, land, and ice (the cryosphere), built upon established laws of physics and the latest understanding of the physical, chemical, and biological processes affecting climate, and run on powerful computers. Models vary in their projections of how much additional warming to expect (depending on the type of model and on the assumptions used in simulating certain climate processes, particularly cloud formation and ocean mixing), but all such models agree that the overall net effect of feedbacks is to amplify warming.

Human activities are changing the climate. Rigorous analysis of all data and lines of evidence shows that most of the observed global warming over the past 50 years or so cannot be explained by natural causes and instead requires a significant role for the influence of human activities. In order to discern the human influence on climate, scientists must consider many natural variations that affect temperature, precipitation, and other aspects of climate, from local to global scales and on timescales from days to decades and longer. One natural variation is the El Niño Southern Oscillation (ENSO), an irregular alternation between warming and cooling (lasting about two to seven years) in the equatorial Pacific Ocean that causes significant year-to-year regional and global shifts in temperature and rainfall patterns. Volcanic eruptions also alter climate, in part by increasing the amount of small (aerosol) particles in the stratosphere that reflect or absorb sunlight, leading to a short-term surface cooling lasting typically about two to three years. Over hundreds of thousands of years, slow, recurring variations in Earth's orbit around the Sun, which alter the distribution of solar energy received by Earth, have been enough to trigger the ice age cycles of the past 800,000 years.
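The magnitudes discussed above (roughly 1 °C of direct warming for a doubling of CO2, amplified by feedbacks) can be made concrete with a commonly used simplified forcing expression, ΔF ≈ 5.35 ln(C/C0) W/m^2, combined with a no-feedback (Planck) response of roughly 0.3 °C per W/m^2 and a simple feedback-gain form ΔT = λ0 ΔF / (1 - f). The Python sketch below is a back-of-the-envelope illustration, not one of the climate models described in the text, and the feedback fraction used is purely illustrative.

```python
import math

LAMBDA_0 = 0.3  # K per (W/m^2): approximate no-feedback (Planck) response

def forcing_co2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Radiative forcing from a CO2 change, simplified log expression (W/m^2)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def warming(c_ppm: float, feedback_fraction: float = 0.0) -> float:
    """Equilibrium warming with a simple feedback-gain amplification."""
    return LAMBDA_0 * forcing_co2(c_ppm) / (1.0 - feedback_fraction)

if __name__ == "__main__":
    # Doubling of pre-industrial CO2 (280 -> 560 ppm), no feedbacks:
    print("no-feedback doubling: %.1f C" % warming(560))          # ~1.1 C
    # Same doubling with an illustrative net feedback fraction of 0.6:
    print("with feedbacks (f=0.6): %.1f C" % warming(560, 0.6))   # ~2.8 C
    # Forcing from the observed rise to 411 ppm (Mauna Loa, 2019):
    print("forcing at 411 ppm: %.1f W/m^2" % forcing_co2(411))    # ~2.1 W/m^2
```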
Fingerprinting is a powerful way of studying the causes of climate change. Different influences on climate lead to different patterns in climate records. This becomes obvious when scientists probe beyond changes in the average temperature of the planet and look more closely at geographical and temporal patterns of climate change. For example, an increase in the Sun's energy output would lead to a very different pattern of temperature change (across Earth's surface and vertically in the atmosphere) than that induced by an increase in CO2 concentration. Observed atmospheric temperature changes show a fingerprint much closer to that of a long-term CO2 increase than to that of a fluctuating Sun alone. Scientists routinely test whether purely natural changes in the Sun, volcanic activity, or internal climate variability could plausibly explain the patterns of change they have observed in many different aspects of the climate system. These analyses have shown that the observed climate changes of the past several decades cannot be explained by natural factors alone.

How will climate change in the future?
Scientists have made major advances in the observation, theory, and modelling of Earth's climate system, and these advances have enabled them to project future climate change with increasing confidence. Nevertheless, several major issues make it impossible to give precise estimates of how global or regional temperature trends will evolve decade by decade into the future. Firstly, we cannot predict how much CO2 human activities will emit, as this depends on factors such as how the global economy develops and how society's production and consumption of energy change in the coming decades. Secondly, with current understanding of the complexities of how climate feedbacks operate, there is a range of possible outcomes, even for a particular scenario of CO2 emissions. Finally, over timescales of a decade or so, natural variability can modulate the effects of an underlying trend in temperature. Taken together, all model projections indicate that Earth will continue to warm considerably more over the next few decades to centuries. If there were no technological or policy changes to reduce emission trends from their current trajectory, then further globally averaged warming of 2.6 to 4.8 °C (4.7 to 8.6 °F), in addition to that which has already occurred, would be expected during the 21st century [FIGURE B5]. Projecting what those ranges will mean for the climate experienced at any particular location is a challenging scientific problem, but estimates are continuing to improve as regional and local-scale models advance.

Climate change means not only changes in globally averaged surface temperature, but also changes in atmospheric circulation, in the size and patterns of natural climate variations, and in local weather. La Niña events shift weather patterns so that some regions are made wetter, and wet summers are generally cooler. Stronger winds from polar regions can contribute to an occasional colder winter. In a similar way, the persistence of one phase of an atmospheric circulation pattern known as the North Atlantic Oscillation has contributed to several recent cold winters in Europe, eastern North America, and northern Asia. Atmospheric and ocean circulation patterns will evolve as Earth warms and will influence storm tracks and many other aspects of the weather. Global warming tilts the odds in favour of more warm days and seasons and fewer cold days and seasons.
For example, across the continental United States in the 1960s there were more daily record low temperatures than record highs, but in the 2000s there were more than twice as many record highs as record lows. Another important example of tilting the odds is that over recent decades heatwaves have increased in frequency in large parts of Europe, Asia, South America, and Australia. Marine heat waves are also increasing.

Some differences in seasonal sea ice extent between the Arctic and Antarctic are due to basic geography and its influence on atmospheric and oceanic circulation. The Arctic is an ocean basin surrounded largely by mountainous continental land masses, whereas Antarctica is a continent surrounded by ocean. In the Arctic, sea ice extent is limited by the surrounding land masses. In the Southern Ocean winter, sea ice can expand freely into the surrounding ocean, with its southern boundary set by the coastline of Antarctica. Because Antarctic sea ice forms at latitudes further from the South Pole (and closer to the equator), less ice survives the summer. Sea ice extent at both poles changes seasonally; however, longer-term variability in summer and winter ice extent differs between the hemispheres, due in part to these basic geographical differences.

Sea ice in the Arctic has decreased dramatically since the late 1970s, particularly in summer and autumn. Since the satellite record began in 1978, the yearly minimum Arctic sea ice extent (which occurs in September) has decreased by about 40% [FIGURE 5]. Ice cover expands again each Arctic winter, but the ice is thinner than it used to be. Estimates of past sea ice extent suggest that this decline may be unprecedented in at least the past 1,450 years. Because sea ice is highly reflective, warming is amplified as the ice decreases and more sunshine is absorbed by the darker underlying ocean surface.

Sea ice in the Antarctic showed a slight increase in overall extent from 1979 to 2014, although some areas, such as that to the west of the Antarctic Peninsula, experienced a decrease. Short-term trends in the Southern Ocean, such as those observed, can readily arise from natural variability of the atmosphere, ocean, and sea ice system. Changes in surface wind patterns around the continent contributed to the Antarctic pattern of sea ice change; ocean factors, such as the addition of cool fresh water from melting ice shelves, may also have played a role. However, after 2014, Antarctic sea ice extent began to decline, reaching a record low (within the 40 years of satellite data) in 2017 and remaining low in the following two years.

HOW DOES CLIMATE CHANGE AFFECT THE STRENGTH AND FREQUENCY OF FLOODS, DROUGHTS, HURRICANES, AND TORNADOES?
As Earth's climate has warmed, both more frequent and more intense extreme weather events have been observed around the world. Scientists typically identify these weather events as "extreme" if they are unlike 90% or 95% of similar weather events that happened before in the same region. Many factors contribute to any individual extreme weather event (including patterns of natural climate variability, such as El Niño and La Niña), making it challenging to attribute any particular extreme event to human-caused climate change. However, studies can show whether the warming climate made an event more severe or more likely to happen. A warming climate can contribute to the intensity of heat waves by increasing the chances of very hot days and nights.
Climate warming also increases evaporation on land, which can worsen drought and create conditions more prone to wildfire and a longer wildfire season. A warming atmosphere is also associated with heavier precipitation events (rain and snowstorms) through increases in the air's capacity to hold moisture. El Niño events favour drought in many tropical and subtropical land areas, while La Niña events promote wetter conditions in many places. These short-term and regional variations are expected to become more extreme in a warming climate.

Earth's warmer and moister atmosphere and warmer oceans make it likely that the strongest hurricanes will be more intense, produce more rainfall, affect new areas, and possibly be larger and longer-lived. This is supported by available observational evidence in the North Atlantic. In addition, sea level rise (see Question 14) increases the amount of seawater that is pushed on to shore during coastal storms, which, along with more rainfall produced by the storms, can result in more destructive storm surges and flooding. While global warming is likely making hurricanes more intense, the change in the number of hurricanes each year is quite uncertain and remains a subject of ongoing research. Some conditions favourable for the strong thunderstorms that spawn tornadoes are expected to increase with warming, but uncertainty exists in other factors that affect tornado formation, such as changes in the vertical and horizontal variations of winds.

Sea level rise has been driven by expansion of water volume as the ocean warms, melting of mountain glaciers in all regions of the world, and mass losses from the Greenland and Antarctic ice sheets, all of which result from a warming climate. Fluctuations in sea level also occur due to changes in the amounts of water stored on land. The amount of sea level change experienced at any given location also depends on a variety of other factors, including whether regional geological processes and rebound of land weighted down by previous ice sheets are causing the land itself to rise or sink, and whether changes in winds and currents are piling ocean water against some coasts or moving water away. The effects of rising sea level are felt most acutely in the increased frequency and intensity of occasional storm surges. If CO2 and other greenhouse gases continue to increase on their current trajectories, it is projected that sea level may rise, at minimum, by a further 0.4 to 0.8 m (1.3 to 2.6 feet) by 2100, although future ice sheet melt could make these values considerably higher. Moreover, rising sea levels will not stop in 2100; sea levels will be much higher in the following centuries as the sea continues to take up heat and glaciers continue to retreat. It remains difficult to predict the details of how the Greenland and Antarctic Ice Sheets will respond to continued warming, but it is thought that Greenland and perhaps West Antarctica will continue to lose mass, whereas the colder parts of Antarctica could gain mass as they receive more snowfall from warmer air that contains more moisture. Sea level in the last interglacial (warm) period, around 125,000 years ago, peaked at probably 5 to 10 m above the present level. During this period, the polar regions were warmer than they are today. This suggests that, over millennia, long periods of increased warmth will lead to very significant loss of parts of the Greenland and Antarctic Ice Sheets and to consequent sea level rise.
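The sea level figures quoted in this document allow a rough comparison of past and projected rates of rise: about 16 cm since 1901 versus a further 0.4 to 0.8 m by 2100. The short sketch below does that arithmetic; treating 2020 as the "present" reference year is an assumption made only for illustration.

```python
# Rough comparison of observed vs. projected average sea level rise rates,
# using only the figures quoted in the surrounding text.
observed_rise_mm = 160.0             # ~16 cm since 1901
observed_years = 2020 - 1901         # assumed "present" of 2020, for illustration

projected_low_mm, projected_high_mm = 400.0, 800.0   # further 0.4-0.8 m by 2100
projected_years = 2100 - 2020

past_rate = observed_rise_mm / observed_years
future_low = projected_low_mm / projected_years
future_high = projected_high_mm / projected_years

print("past average rate:  %.1f mm/yr" % past_rate)                       # ~1.3 mm/yr
print("projected range:    %.0f-%.0f mm/yr" % (future_low, future_high))  # ~5-10 mm/yr
```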
CO2 dissolves in water to form a weak acid, and the oceans have absorbed about a third of the CO2 resulting from human activities, leading to a steady decrease in ocean pH levels. With increasing atmospheric CO2, this chemical balance will change even more during the next century. Laboratory and other experiments show that under high CO2 and in more acidic waters, some marine species develop misshapen shells and lower growth rates, although the effect varies among species. Acidification also alters the cycling of nutrients and many other elements and compounds in the ocean, and it is likely to shift the competitive advantage among species, with as-yet-undetermined impacts on marine ecosystems and the food web.

Warming due to the addition of large amounts of greenhouse gases to the atmosphere can be understood in terms of very basic properties of greenhouse gases. It will in turn lead to many changes in natural climate processes, with a net effect of amplifying the warming. The size of the warming that will be experienced depends largely on the amount of greenhouse gases accumulating in the atmosphere, and hence on the trajectory of emissions. If the total cumulative emissions since 1875 are kept below about 900 gigatonnes (900 billion tonnes) of carbon, then there is a two-thirds chance of keeping the rise in global average temperature since the pre-industrial period below 2 °C (3.6 °F). However, two-thirds of this amount has already been emitted. A target of keeping the global average temperature rise below 1.5 °C (2.7 °F) would allow for even less total cumulative emissions since 1875.

Based just on the established physics of the amount of heat CO2 absorbs and emits, a doubling of the atmospheric CO2 concentration from pre-industrial levels (up to about 560 ppm) would by itself, without amplification by any other effects, cause a global average temperature increase of about 1 °C (1.8 °F). However, the total amount of warming from a given amount of emissions depends on chains of effects (feedbacks) that can individually either amplify or diminish the initial warming. The most important amplifying feedback is caused by water vapour, which is a potent greenhouse gas. As CO2 increases and warms the atmosphere, the warmer air can hold more moisture and trap more heat in the lower atmosphere. Also, as Arctic sea ice and glaciers melt, more sunlight is absorbed by the darker underlying land and ocean surfaces, causing further warming and further melting of ice and snow. The biggest uncertainty in our understanding of feedbacks relates to clouds (which can produce both positive and negative feedbacks) and to how the properties of clouds will change in response to climate change. Other important feedbacks involve the carbon cycle. Currently the land and oceans together absorb about half of the CO2 emitted by human activities, but the capacities of land and ocean to store additional carbon are expected to decrease with additional warming, leading to faster increases in atmospheric CO2 and faster warming. Models vary in their projections of how much additional warming to expect, but all such models agree that the overall net effect of feedbacks is to amplify the warming.
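The cumulative-emissions budget above reduces to simple arithmetic: a roughly 900 GtC budget for a two-thirds chance of staying below 2 °C, two-thirds of it already emitted, and the emission rate of about 10 GtC per year quoted earlier. The sketch below works through those numbers; holding the emission rate constant is an assumption for illustration only.

```python
# Back-of-the-envelope carbon budget, from figures quoted in the text.
budget_gtc = 900.0                       # cumulative emissions since 1875 for ~2/3 odds of <2 C
emitted_gtc = budget_gtc * 2.0 / 3.0     # "two-thirds of this amount has already been emitted"
annual_gtc = 10.0                        # current emissions, ~10 GtC per year

remaining_gtc = budget_gtc - emitted_gtc
years_left = remaining_gtc / annual_gtc  # assumes constant emissions (illustrative)

print("remaining budget: %.0f GtC" % remaining_gtc)   # ~300 GtC
print("years at current rate: %.0f" % years_left)     # ~30 years
```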
Both theory and direct observations have confirmed that global warming is associated with greater warming over land than over oceans, moistening of the atmosphere, shifts in regional precipitation patterns, increases in extreme weather events, ocean acidification, melting glaciers, and rising sea levels (which increase the risk of coastal inundation and storm surge). Already, record high temperatures are on average significantly outpacing record low temperatures, wet areas are becoming wetter as dry areas are becoming drier, heavy rainstorms have become heavier, and snowpacks (an important source of freshwater for many regions) are decreasing. These impacts are expected to increase with greater warming and will threaten food production, freshwater supplies, coastal infrastructure, and especially the welfare of the huge population currently living in low-lying areas. Even though certain regions may realise some local benefit from the warming, the long-term consequences overall will be disruptive. It is not only an increase of a few degrees in global average temperature that is cause for concern; the pace at which this warming occurs is also important (see Question 6). Rapid human-caused climate change means that less time is available for adaptation measures to be put in place or for ecosystems to adapt, posing greater risks in areas vulnerable to more intense extreme weather events and rising sea levels.

Comparisons of model predictions with observations identify what is well understood and, at the same time, reveal uncertainties or gaps in our understanding. This helps to set priorities for new research. Vigilant monitoring of the entire climate system (the atmosphere, oceans, land, and ice) is therefore critical, as the climate system may be full of surprises. Together, field and laboratory data and theoretical understanding are used to advance models of Earth's climate system and to improve the representation of key processes in them, especially those associated with clouds, aerosols, and the transport of heat into the oceans. This is critical for accurately simulating climate change and the associated changes in severe weather, especially at the regional and local scales important for policy decisions. Simulating how clouds will change with warming, and how those changes in turn affect warming, remains one of the major challenges for global climate models, in part because different cloud types have different impacts on climate, and many cloud processes occur on scales smaller than most current models can resolve. Greater computer power is already allowing some of these processes to be resolved in the new generation of models.

Dozens of groups and research institutions work on climate models, and scientists are now able to analyse results from essentially all of the world's major Earth-system models and compare them with each other and with observations. Such opportunities are of tremendous benefit in bringing out the strengths and weaknesses of the various models and in diagnosing the causes of differences among them, so that research can focus on the relevant processes. Differences among models allow estimates to be made of the uncertainties in projections of future climate change. Additionally, large archives of results from many different models help scientists to identify aspects of climate change projections that are robust and that can be interpreted in terms of known physical mechanisms.
Studying how climate responded to major changes in the past is another way of checking that we understand how different processes work and that models are capable of performing reliably under a wide range of conditions.

ARE DISASTER SCENARIOS ABOUT TIPPING POINTS LIKE "TURNING OFF THE GULF STREAM" AND RELEASE OF METHANE FROM THE ARCTIC A CAUSE FOR CONCERN?
The composition of the atmosphere is changing towards conditions that have not been experienced for millions of years, so we are headed for unknown territory, and uncertainty is large. The climate system involves many competing processes that could switch the climate into a different state once a threshold has been exceeded. A well-known example is the south-north ocean overturning circulation, which is maintained by cold salty water sinking in the North Atlantic and which transports extra heat to the North Atlantic via the Gulf Stream. During the last ice age, pulses of freshwater from the melting ice sheet over North America slowed this overturning circulation, which in turn caused widespread changes in climate around the Northern Hemisphere. Freshening of the North Atlantic from the melting of the Greenland ice sheet is gradual, however, and hence is not expected to cause abrupt changes. Another concern relates to the Arctic, where substantial warming could destabilise methane (a greenhouse gas) trapped in ocean sediments and permafrost, potentially leading to a rapid release of a large amount of methane. If such a rapid release occurred, major and fast climate changes would ensue. Such high-risk changes are considered unlikely in this century, but they are by definition hard to predict. Scientists are therefore continuing to study the possibility of exceeding such tipping points, beyond which we risk large and abrupt changes.

In addition to abrupt changes in the climate system itself, steady climate change can cross thresholds that trigger abrupt changes in other systems. In human systems, for example, infrastructure has typically been built to accommodate the climate variability at the time of construction. Gradual climate changes can cause abrupt changes in the utility of that infrastructure, such as when rising sea levels suddenly surpass sea walls, or when thawing permafrost causes the sudden collapse of pipelines, buildings, or roads. In natural systems, as air and water temperatures rise, some species, such as the mountain pika and many ocean corals, will no longer be able to survive in their current habitats and will be forced to relocate (if possible) or rapidly adapt. Other species may fare better in the new conditions, causing abrupt shifts in the balance of ecosystems; for example, warmer temperatures have allowed more bark beetles to survive over winter in some regions, where beetle outbreaks have destroyed forests.

IF EMISSIONS OF GREENHOUSE GASES WERE STOPPED, WOULD THE CLIMATE RETURN TO THE CONDITIONS OF 200 YEARS AGO?
If emissions of CO2 stopped altogether, it would take many thousands of years for atmospheric CO2 to return to "pre-industrial" levels, due to its very slow transfer to the deep ocean and ultimate burial in ocean sediments. Surface temperatures would stay elevated for at least a thousand years, implying a long-term commitment to a warmer planet due to past and current emissions. Sea level would likely continue to rise for many centuries even after temperature stopped increasing [FIGURE 9].
Significant cooling would be required to reverse the melting of glaciers and the Greenland ice sheet, which formed during past cold climates. The current CO2-induced warming of Earth is therefore essentially irreversible on human timescales. The amount and rate of further warming will depend almost entirely on how much more CO2 humankind emits.

Scenarios of future climate change increasingly assume the use of technologies that can remove greenhouse gases from the atmosphere. In such "negative emissions" scenarios, it is assumed that at some point in the future a widespread effort will be undertaken to deploy such technologies to remove CO2 from the atmosphere and lower its atmospheric concentration, thereby starting to reverse CO2-driven warming on longer timescales. Deployment of such technologies at scale would require large decreases in their costs. Even if such technological fixes were practical, substantial reductions in CO2 emissions would still be essential.
Phytosociological surveys have been applied to studies of agroecosystems, especially in relation to weed populations in arable fields. These surveys can indicate trends of variation in the importance of plant populations within a crop and whether the variations are associated with the agricultural practices adopted; this information can then be used to support the development of weed management programs. However, to understand the applicability of phytosociological studies to weeds, it is necessary to understand their ecological basis and to determine the most appropriate methods to be used when surveying arable fields. Therefore, the aim of the present chapter is to introduce a new approach to phytosociological surveys to be used as a tool in weed science. Throughout the chapter, this new approach is presented in detail, covering aspects related to methods for sampling and describing weed communities. The following sequence of steps is proposed as the most suitable for a weed phytosociological and association survey: (1) overall infestation; (2) phytosociological tables/graphs; (3) intra-characterization by diversity; (4) inter-characterization and grouping by multivariate analysis; and (5) weed association through contingency tables.

- weed species
- data processing

1. Introduction

The classification of plant species is necessary to understand the complexity of environments; it is based mainly on morphology and has recently been aided by genetics and functional properties. Plant communities are sets of plant species within a given geographic unit, which form relatively uniform patches, distinguishable from patches of different types of vegetation adjacent to that limited area. Vegetation ecology seeks to identify the species found within the same habitat, thus describing the physiognomy of the landscape, in order to determine why communities have a given structure as well as their mechanisms of adaptation. When it comes to community studies, it is necessary to understand how environmental conditions and species interactions influence the patterns of coexistence and relative abundance of species at the local scale, but the important role of spatio-temporal dynamics should also be taken into account [4–6].

The classification of plant communities into a hierarchical system is made in an inductively synthetic way; the types of vegetation units are abstracted as basic syntaxonomic units and compiled as associations [2, 7]. The basis of the phytosociological categorization of plants, according to Blasi and Frondoni, is drawn from botanical geography, with the primordial observation that plant species are grouped in associations that differ in composition and/or physiognomy according to geographic regions and environmental conditions (e.g., climate, altitude, latitude). In this sense, phytosociology is the science that seeks to understand, through the composition and distribution of plant species in a given phytogeographic region, the diversity of plant communities. However, this is a phytogeographic rather than a phytosociological approach, since the phytosociological approach in the original Braun-Blanquet concept is a floristic statistic based on the occurrence of plant species. In the concept proposed by Oosting and Harper, phytosociology is the science of plant communities as well as of their relationships with the environment and the processes that modify these communities.
This approach is more related to functional ecology, which seeks to understand how and why ecological systems and their components interact differently in different environments. The term phytosociology, however, is directly associated with the structure of a community of plant species, while phytocenosis is defined as the study of plant cover. In this sense, when a phytosociological survey is carried out through an inductive statistical process of inventory comparison, it is necessary to establish a conceptual class that represents a model of the phytocenosis. Another concept refers to the possibility of representing patterns in floristic structure and combination, thus modeling the phytocenosis as a resource for dynamic systems theory [15, 16]. A phytosociological study rests on three basic principles. Basically, the procedures and methods for sampling and registering plant communities follow the Braun-Blanquet method, as it considers the analysis and description of selected plant populations as basic types. This method, however, has some limitations, and alternatives have been developed over the years.

Phytosociological surveys have been applied to studies of agroecosystems, especially in relation to weed populations in crops. An infesting population is the result of the interactional relationship between the phenotypic plasticity of each individual and long-term processes that provide adaptive flexibility to changes in the natural or artificial environment [20, 21]. Thus, a phytosociological study of weed species in crops can indicate trends of variation in the importance of plant populations within a crop; these variations may be associated with the agricultural practices adopted and can support the development of weed management programs. In general, phytosociological studies of weed communities in agroecosystems allow the determination of periods of control and/or coexistence between crop and weeds, and through the phytosociological indices it is possible to determine which species are the most important in the different periods of growth of the weed community. However, to understand the applicability of phytosociological studies to weed species in crops, it is necessary to understand their ecological basis and determine the most appropriate methods to be used when surveying arable fields.

1.1. Two contrasting theories

The study of vegetation played an important role in the evolution of ecological concepts through the formulation of several vegetation theories as well as methods of surveying and analyzing phytosociological data [10, 22]. In phytosociological terms, the concept of community is based on the principle of associations (different groupings of plant species, usually found in sites with similar environmental conditions). The definition of associations proposed by some authors [1, 23] considers the type of vegetation that represents the real plant communities and shares a certain combination of statistically reliable characteristics, in terms of physiognomy and stratification, ecological conditions, dynamic meaning, area of distribution, and history. This gives the association a greater information value in ecological and geographical terms, which increases the indicator value of vegetation at a site. However, there is some discussion, in terms of synecological methods, related to the concept of community.
For instance, the concept of community proposed by Begon refers to a set of species that inhabit the same area at a given time, while that of Gurevitch refers to a group of populations that coexist in space and time, interacting directly or indirectly. Two great ecologists, Frederick E. Clements and H. A. Gleason, presented a series of discussions on community ecology. The driving issue was whether the community is a self-organized system of co-occurring species or simply a random collection of populations with minimal functional integration. Two extreme views prevailed: one considered a community as a "super-organism" whose species were strongly united by interactions that contributed to repetitive patterns of species abundance in space and time; in the contrasting view, communities were simply the result of interactions among species and between species and the environment, combined with historically extreme and occasional climatic events.

For Clements, in his theory of "super-organisms", organisms and communities not only have their own growth and development but also evolve from predecessor communities. They supposedly have an ontogeny that one could study, just as is done with individuals and species, so one could classify communities in a way comparable to the Linnaean taxonomy. This view ultimately assumed a common evolutionary history for the integrated species, and the emergence and disappearance of a particular plant community were supposed to be easily and accurately estimated because the community was considered a single organism. In short, the concept of plant community in this theory is defined as an autonomous, discrete, individualizable entity possessing its own structural and functional properties [27, 28]. This theory predicted that the optima and amplitudes of the species formed distinct clusters, so changes in vegetation were expected to be abrupt.

On the other side, the theory of Henry A. Gleason focused on the traits of individual species that allow each to occur within specific habitats or geographic areas. This is a much more arbitrary unity than that imagined by Clements, since in Gleason's view the spatial boundaries of communities are not clear, and assemblies of species can change considerably over time and space; each species has its own tolerance to certain selection factors and thus responds to environmental stresses in its particular way. Accordingly, a concept of a plant continuum was proposed, in which species combinations result mostly from individual responses to environmental factors, and the dispersion of individuals occurs at random as a response to environmental fluctuations. This theory states that the level of occurrence of a given plant species is proportional to the level of stress the species can tolerate. The theoretical opposition gave rise to two strands of current vegetation studies.

2. Aims and methods

Before reviewing the main methods and criteria involved in the sampling and evaluation of flora and vegetation, it is necessary to define some basic concepts: flora is the set of plant species present in a given place or area; vegetation refers to the quantitative aspects of plant architecture, that is, its horizontal and vertical distribution on the surface; and the plant community should be understood as a set of plants of two or more species that coexist in a certain area and, according to the dominance of some of its species, can be differentiated from other natural and/or altered plant communities [32, 33].
Based on Whittaker, the purpose of phytosociological studies in weed science does not differ much from that in ecology; rather, it tends to combine the efforts of two disciplines (botany and ecology) in order to improve agricultural productivity and decrease the competition between crops and weeds. Many of the traditional studies in weed science carried out in nonindustrialized (often also considered developing) countries have focused on adopting foreign technologies, with little research on biological and ecological aspects of weeds, diagnosis of population dynamics, and integrated weed management. In this context, the use of phytosociological methods in weed science can be directly associated with the nature of the treatments applied to arable fields, their intrinsic factors, and the history of the area where the crop will be established. In arable fields, however, two implications must be considered: (1) the plots are usually much smaller than expected in phytosociological sampling of wild ecosystems; and (2) there is a stronger set of factors influencing arable fields than natural environments, such as plant density and height, history of land use, soil tillage, and application of agrochemicals. Finally, communities of invasive plants show a behavior similar to that proposed by Gleason in his theory of the individualistic concept of plant association.

2.1. Methods for sampling the community

The choice between the different variants of the methodologies depends on the sampling objective and the characteristics of the populations (richness and distribution) to be evaluated in each particular agroecosystem [9, 26]. Conventionally, weed sampling methodologies do not rely on prior information about these characteristics or on formal support for decisions such as the number of points to be taken. Conventional weed population sampling methodologies assume a homogeneous or random distribution in space, and this is not always the case, as numerous studies show that weed distribution occurs in patches [38, 39]. It is necessary to emphasize that the sampling of weeds has two objectives: (1) knowledge of community richness and abundance, which in turn provides information for biodiversity studies (richness and structure of the communities) and for medium- and long-term weed-management plans; and (2) mapping and spatial dynamics studies. Some authors [26, 30] point out several sampling methods, but taking into account the limitations imposed by sampling arable fields, only two of them will be dealt with in this chapter: relevé and random quadrats.

2.1.1. Relevé

Braun-Blanquet made an analogy between organisms and communities by comparing a plant community with a species for the purpose of establishing a classification of communities similar to the way organisms are classified into taxonomic groups. For him, the plant community is the basic unit of taxonomic classification, which serves to establish a hierarchical system of classification of communities on a world scale. The same author proposed that the selection of the area to be sampled should be carried out through the determination of the minimal area, in the following steps: (1) a small area, say 0.25 m2, is delineated, and the list of species present on that surface is recorded; (2) an equal area is added to the original one (doubling its size), and the total number of species in the enlarged quadrat is counted again; (3) this is repeated (increase the area, count the species) until the number of species tends to stabilize; and (4) the cumulative totals of species (on the ordinate) corresponding to each of the successively duplicated areas (on the abscissa) are plotted on a pair of perpendicular axes (Figure 1). A strong slope normally characterizes the initial part of the resulting curve, because the first areas incorporate a larger number of new species. Subsequently, as the sampled surface is increased, the appearance of new species in the quadrat becomes rarer and, consequently, the slope of the curve decreases, tending to stabilization (Figure 1). The appropriate size of the sample unit should be found in the horizontal portion of the curve; the point of inflection (when it is manifest), projected on the axis of the abscissa, indicates the minimal area. In general, it is convenient to use a size that slightly exceeds the minimal area, as sketched below.
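The accumulation procedure above lends itself to a short illustration. The following is a minimal Python sketch, assuming a hypothetical helper `species_in(area)` that returns the set of species recorded when the quadrat is enlarged to a given area; the stabilization criterion (`tol`) is likewise an assumption, since Braun-Blanquet leaves the judgment of when the curve flattens to the surveyor.

```python
# Minimal sketch of the minimal-area (species-area curve) procedure.
# `species_in(area_m2)` is a hypothetical helper standing in for the field
# record made after each doubling of the quadrat.

def minimal_area(species_in, start_area=0.25, max_doublings=10, tol=1):
    """Double the quadrat area until the species count stabilizes.

    `tol` = number of new species per doubling below which the curve is
    considered flat (an assumption; the original method relies on visual
    inspection of the plotted curve).
    """
    area = start_area
    curve = [(area, len(species_in(area)))]
    for _ in range(max_doublings):
        area *= 2
        n = len(species_in(area))
        curve.append((area, n))
        if n - curve[-2][1] < tol:  # fewer than `tol` new species added
            break
    return area, curve  # recommended quadrat size slightly exceeds `area`

# Example with fabricated data: each doubling reveals fewer new species.
fake = {0.25: 5, 0.5: 9, 1.0: 12, 2.0: 13, 4.0: 13, 8.0: 13}
area, curve = minimal_area(lambda a: set(range(fake.get(a, 13))))
print(area, curve)
```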
It is also proposed that each species in the list be accompanied by an estimate of its abundance-dominance, using the combined coverage-abundance scale, and by its degree of sociability, both as stated by Braun-Blanquet (Table 1).

Table 1. Coverage-abundance and sociability scales of Braun-Blanquet.

| Value | Number of individuals | Area coverage | Sociability |
|-------|-----------------------|---------------|-------------|
| 5 | Any number | >75% | Large, almost pure stands |
| 4 | Any number | 50–74% | Small colonies or carpets |
| 3 | Any number | 25–49% | Small patches or cushions |
| 2 | Any number | 5–24% | Small but dense clumps |

2.1.2. Random quadrats

In certain communities, the determination of frequency estimators (abundance, coverage) depends too much on the criteria of the expert in charge of the evaluation [26, 40], especially in herbaceous formations such as meadows, pastures, or high wetlands, for which it is most useful to apply the method known as the "random quadrat". This consists of identifying subjective patterns within the community to be sampled and conducting the sampling in such a way as not to favor any particular pattern [26, 30]. In other words, for the data to be reliable, sampling should be performed as randomly as possible. Several methods are available to help the researcher traverse and sample the area properly, but three of them are highlighted for use in weed science: even spaced, by chance, and random by zones (Figure 2), as illustrated in the sketch below.
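As an illustration of these three placement schemes, the following minimal Python sketch generates quadrat coordinates for a rectangular field. The field dimensions, the number of quadrats, and the choice of horizontal strips as "zones" are illustrative assumptions, not prescriptions from the chapter.

```python
import random

def even_spaced(n, width, height):
    """Regular grid: one quadrat at the centre of each grid cell."""
    cols = int(n ** 0.5) or 1
    rows = -(-n // cols)  # ceiling division
    pts = [((c + 0.5) * width / cols, (r + 0.5) * height / rows)
           for r in range(rows) for c in range(cols)]
    return pts[:n]

def by_chance(n, width, height):
    """Simple random placement over the whole field."""
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(n)]

def random_by_zones(n, width, height, zones=4):
    """Stratified: the field is split into strips, each sampled at random."""
    per_zone, pts = -(-n // zones), []
    for z in range(zones):
        y0, y1 = z * height / zones, (z + 1) * height / zones
        pts += [(random.uniform(0, width), random.uniform(y0, y1))
                for _ in range(per_zone)]
    return pts[:n]

print(len(by_chance(100, 200, 100)))  # e.g., 100 quadrats in a 2 ha field
```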
The geometric forms of the sample unit (called a "quadrat") are basically of three types: square, rectangular, or circular. These types of units allow the registration of all variables of dominance, frequency, and density of plant individuals. Based on Goodall, surface geometry affects two aspects that significantly influence the vegetation sampling result. The first is the magnitude of the edge effect, given by the area/perimeter ratio of the sample unit. In the case of rectangular plots, the length-to-width ratio of the units and their orientation influence the degree of heterogeneity registered within the plot. To be considered part of the unit, plants must be rooted within the perimeter, and the perimeter is longer in a rectangular quadrat than in a square or circular one of the same area. Longer perimeters increase the chance that the observer errs when deciding whether a plant at the very border of the quadrat is actually in or out of the quadrat [26, 40]; this type of error is called the "edge effect". If rooting occurs outside the plot area but the shoots occupy the airspace of the unit, the plant can optionally be registered as present, depending on the purpose of the survey [25, 32]. It is necessary, however, to make explicit the criterion established when defining the variable; usually, only plants rooted within the quadrat are considered.

2.2. Methods for describing the community

There is a wide variety of methods that allow the floristic characterization of a plant community, whose suitability or applicability depends on the specific objectives of each study and on the structure of the community studied. Regardless of the method used for the floristic study, each sampling unit (quadrat) must meet the following criteria: (1) it must be of sufficient size to contain the largest possible proportion of species belonging to the plant community; (2) the habitat must be uniform within the sampling area, as far as one can determine; and (3) the plant cover should be as homogeneous as possible.

2.2.1. Importance components

A fundamental aspect in the floristic characterization of a plant community is that the methodology adopted should provide an adequate representation of all the species present in the community in natural ecosystems. For arable fields, one may hypothesize that it should properly represent at least most of the weed species present. Once field sampling is accomplished and all data are collected, the importance parameters of each species can be calculated, typically its frequency, density, and abundance, their relative values, and the resulting importance value index.

2.2.2. Diversity indices

The calculation of diversity indices, α (alpha), β (beta), and γ (gamma), allows the comparative analysis of homogeneous or heterogeneous plant formations. They measure, respectively, the species richness of a community, the degree of change or replacement in species composition among different communities, and the richness of the set of communities [34, 40]. The most widely used diversity indices are those of Margalef (α), Menhinick, Simpson, and Shannon–Wiener. The Margalef index, in its usual form α = (S − 1)/ln N, where S is the number of species and N the total number of individuals, relates the number of taxa to the number of individuals in an ecosystem, comparing species richness among samples collected from different habitats. The Menhinick index is similar, relating richness to the square root of the number of individuals (S/√N). The Simpson index (λ) is obtained by Eq. (7); its calculation is strongly influenced by the importance of the most dominant species. Since its value is inverse to equity, diversity by Simpson is usually calculated as the complement 1 − λ, Eq. (8), for which values closer to "1" indicate greater equity. The Shannon–Wiener diversity index relates abundance and richness and expresses the uniformity of abundance values across all species in the sample. It ranges from "0", when there is only one species, up to the Neperian logarithm of the number of species, when all species are equally abundant. In addition to the diversity indices, the Shannon–Wiener evenness proportion (SEP) sustainability coefficient, Eq. (11), can support inferences about the sustainability of the management applied to production systems from static data. It considers the Shannon–Wiener diversity calculated both from density data, Eq. (9), and from dry mass data, Eq. (10); as SEP is the division of one by the other, differences between the density-based and mass-based diversities become evident. A sketch of these calculations is given below.
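Since the chapter's own equation images (Eqs. (7)–(11)) are not reproduced here, the sketch below computes the indices in their standard textbook forms, which is an assumption about the exact variants the authors use. Counts are numbers of individuals per species for one area.

```python
from math import log, sqrt

def diversity(counts):
    """Standard forms of the indices discussed above, from a dict
    mapping species name -> number of individuals (or dry mass)."""
    N = sum(counts.values())             # total individuals
    S = len(counts)                      # species richness
    p = [n / N for n in counts.values()]
    margalef = (S - 1) / log(N)                          # alpha
    menhinick = S / sqrt(N)
    simpson = sum(n * (n - 1) for n in counts.values()) / (N * (N - 1))
    simpson_div = 1 - simpson            # closer to 1 = greater equity
    shannon = -sum(pi * log(pi) for pi in p if pi > 0)   # natural log
    evenness = shannon / log(S) if S > 1 else 0.0
    return dict(margalef=margalef, menhinick=menhinick,
                simpson=simpson_div, shannon=shannon, evenness=evenness)

def sep(density_counts, drymass):
    """SEP-style ratio: Shannon-Wiener from density divided by the same
    index from dry-mass data (an assumption based on the description)."""
    return diversity(density_counts)["shannon"] / diversity(drymass)["shannon"]

print(diversity({"Echinochloa": 40, "Cyperus": 25, "Amaranthus": 10}))
```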
2.2.3. Multivariate analysis

Descriptive multivariate analysis provides complementary tools to phytosociology. In this sense, classification and ordination techniques allow the identification of variation patterns in large data sets using algebraic procedures that can be translated into mathematical algorithms. Consequently, these techniques facilitate the comparison of large sets of survey data with the help of computer programs. Samples of plant communities, whether described by the presence or by the abundance of the species that compose them, are multivariate because they present values of different variables (species) at each of the studied sites [46, 47].

Based on Matteucci and Colma and on Moreno, the degree of species turnover (beta diversity) has been evaluated mainly through proportions or differences. The proportions can be evaluated using indices, as well as coefficients that indicate how similar or dissimilar two communities or samples are; many of these similarities and differences can also be expressed or visualized as distances. These similarities or differences can be either qualitative (using presence-absence data) or quantitative (using proportional abundance data for each species or study group, such as number of individuals, biomass, relative density, coverage, etc.). The methods for quantifying beta diversity can be divided into two classes: similarity-dissimilarity and exchange/replacement of species. The index to be applied depends on the nature of the data (qualitative or quantitative) and on the relationship between the samples: what it implies, how the samples are organized, and how they were obtained, according to the question of interest. Thus, the similarity or dissimilarity expresses the degree of comparability in species composition and abundance between two samples (communities).

2.2.4. Clustering by similarity

The beta diversity indices of Jaccard, Eq. (12), and Sørensen, Eq. (13), are the most commonly used for this purpose. Sørensen's index puts more weight on the co-occurrence of species than Jaccard's index: Sørensen relates the number of shared species to the arithmetic mean of the species counts of the two compared sites, while Jaccard relates the number of shared species to the total number of exclusive and shared species [26, 40]. To group the areas according to their similarity, it is advised first to obtain the dissimilarity (differences) between areas, Eqs. (14) and (15). After obtaining the treatment-versus-treatment dissimilarity matrix (Table 2), hierarchical clustering may be performed by the unweighted pair group method with arithmetic mean (UPGMA) (Figure 5). The critical level for separation of groups in the cluster analysis is advised to be based on the arithmetic mean of the original matrix, disregarding crossing points between the same areas. Group validation is usually accomplished by the cophenetic correlation coefficient, obtained by Pearson's linear correlation between the original dissimilarity matrix and its respective cophenetic matrix. A sketch of this grouping step follows.
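The following is a minimal Python sketch of this grouping step, assuming presence/absence species lists per area and using SciPy's average-linkage implementation, which corresponds to UPGMA; the area names and species codes are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def jaccard(a, b):
    a, b = set(a), set(b)
    c = len(a & b)
    return c / (len(a) + len(b) - c)           # shared / (exclusive + shared)

def sorensen(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))  # weights co-occurrence more

areas = {"A1": ["ECHCG", "CYPES", "AMARE"],
         "A2": ["ECHCG", "CYPES"],
         "A3": ["DIGSA", "AMARE"]}             # hypothetical species codes
names = list(areas)

# Dissimilarity matrix (1 - similarity), condensed for SciPy.
D = np.array([[1 - sorensen(areas[i], areas[j]) for j in names] for i in names])
Z = linkage(squareform(D, checks=False), method="average")  # UPGMA

# Cophenetic correlation validates the grouping (closer to 1 is better).
coph_corr, _ = cophenet(Z, squareform(D, checks=False))
print(coph_corr)
```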
3. Association between plant species

The relationships between weed species and the crop in arable fields are described by several authors in terms of competitive aspects [53, 54] and crop-yield losses [55–57]. The balance of occurrence of weed species in these same fields is usually described by means of phytosociological surveys, as previously reported [14, 19, 38], which are used alongside the competitive data to support recommendations for weed control in arable fields. But beyond a simple characterization of both crop losses by competition and the composition of weed occurrence, there is a need to understand how weeds interact among themselves. Currently, it is believed that plant association exists to a certain degree: the gradient of plant composition of weed clusters is defined by the environment (and by management in arable areas), and abrupt changes in plant composition within clusters are observed when abrupt selection factors are applied. In arable fields with repeated weed management over sequential cropping seasons (same herbicide, soil tillage, and crop species), associations among weed species are expected to hold to a higher degree than in natural environments, since unfavored species are often eliminated from the area by the weed control techniques. For instance, following repeated application of a single herbicide, the weeds that remain in the field are most probably those which share, as a common feature, the ability to tolerate that herbicide, whether as tolerant or as resistant biotypes.

In weed science, an overall comprehension of plant associations is usually ignored, but its importance lies in two aspects: (1) weed species with positive association among them may respond better to environmental stresses such as temperature and water shortage or excess [26, 40, 54]; thus, associated plants are more prone to survive, reproduce, and increase their frequency in the community as they work together; and (2) understanding the association among weed species in arable fields would make it possible to elaborate optimized control plans, whether chemical or cultural, which are efficient over a wider range of weed species at the same time, since the technician knows in advance that these species occur together. With the characterization of weed clusters in arable fields, it would be possible to predict the appearance of weed species belonging to previously characterized clusters, even before their emergence, by observing the weed species already present and comparing them with the usual cluster for that crop and management. Thus, understanding the association among weed species in arable fields would ultimately support the development of sustainable techniques for weed control, including optimized herbicide recommendations. The limitation of its application, however, is that clusters would have to be defined for every combination of crop species (soybeans, rice, maize…), cropping system (direct seeding, water seeding, conventional tillage…), and environmental conditions (mainly based on edaphoclimatic characteristics).

Several methods are available in the plant ecology literature to assess plant associations in natural environments [25, 26, 40], where low levels of stress and disturbance are usually present [40, 54] and the vegetation climax may be wider and more dynamic. Climax is roughly defined as the final and relatively permanent condition of species occurrence in a given environment, as a function of climate and soil characteristics. In arable fields, the vegetation climax is heavily biased by crop management; thus, the weed climax tends to be narrower than that observed in natural environments, with a probably lower degree of uncertainty in the characterization of weed clusters.
In the present chapter, we aim only to use the ecological approach of plant association as a tool for weed science, so part of the methodologies available for detailed ecological studies, for instance as presented by Braun-Blanquet and Barbour et al., will not be covered in the present text. The basic steps to achieve a relatively complete characterization of association among weed species, as discussed in the following sections, are presented in Figure 6.

3.1. History and management of the area

The field survey for plant association in arable fields should be started only after an overview of the area has been obtained from those who know the field very well. The first step in a study of plant association in arable fields is obtaining data about the history of the area. Suitable weed management programs include extensive field scouting to identify weed populations and their seeds, as well as the growth stage the weeds are in; talking to the farmer and the field workers will also supply valuable information about the predominance of weed species in the preceding years. Other information that should be obtained is the history of soil tillage, liming, and fertilization, as it is needed to understand the biological nature of the predominant weed species in the area. The main point, however, is the history of the herbicides previously applied to the field: those with a long residual effect may heavily select weeds that are less susceptible to them, and the same may occur with frequent applications of non-residual herbicides. At least the last 3 years of herbicide application should be known to the researcher. Perennial and long-term crops, such as fruit trees or sugarcane, may mean that herbicides with longer residual effects have been applied; in that case, the species associations are valid only under those or similar conditions, as in the absence of the residual effect of herbicides other weed species would occur in that crop, at that location, and the plant clusters would most probably be different.

3.2. Contingency tables

After the survey about the history and predominant management of the area is concluded, the second step in the determination of plant associations is a field survey, launching random quadrats of fixed size in the area. Methods for optimizing quadrat distribution in the field survey are available in ecology books [25, 26, 30, 37]. In general, sampling 100 quadrats should be enough for a reliable survey in average-sized fields, although for plant association both the correct geometric form and the size of the quadrat are of great importance. The optimal geometric form for the quadrat is round or square, as this reduces the total perimeter of the quadrat to the minimum and thus helps reduce the error associated with the observer deciding whether an individual of a rare weed species is in or out of the quadrat. Quadrat size, however, is much more important than quadrat form: as the data are of the frequency type, a correct quadrat size is preponderant for the quality of the analysis. For each sampling point, all plant species rooted within the quadrat should be identified and recorded; there is no need to count the number of individuals per species or to assess their dry mass. Plant species that are not known at the time of the evaluation should be identified by a number and have a sample collected for posterior identification by a plant taxonomist (Figure 6).
The plant species should be listed by sampled quadrat and compared in pairs, with the data organized in 2 × 2 contingency tables. The association between each pair of plant species is estimated by the chi-square test:

χ² = Σ [(obs − exp)² / exp]

where χ² = traditional chi-square estimation; obs = observed values for species occurrence; and exp = expected values for species occurrence.

The expected values for each pair of occurrences in the 2 × 2 contingency tables are estimated from the observed values of field sampling, as follows (Table 3), where a is the number of quadrats in which both species "x" and "y" occur, b and c are the numbers of quadrats with only one of the species, and n is the total number of quadrats.

Table 3. Expected values for the 2 × 2 contingency tables.

| Occurrence | Expected value |
|------------|----------------|
| "x" and "y" present | ((a + b)/n) × (a + c) = K |
| only "y" present | (a + b) − K = L |
| only "x" present | (a + c) − K = M |
| none present | n − (K + L + M) |

As the association analysis uses 2 × 2 contingency tables, there is only one degree of freedom for the chi-square test. The calculated χ² indicates, for each pair of species, whether their joint occurrence departs significantly from what would be expected by chance; observed co-occurrence above the expected value points to a positive association, and below it to a negative one. The dissimilarity between areas, Eqs. (14) and (15), is also useful for studies of plant associations. Similarly, diversity indices for the intra-characterization of the areas may be applied to plant association studies in the same way they are applied in purely phytosociological studies, Eqs. (8) and (9). The relative occurrence of species and botanical families in the sampled area may be determined by their frequency of occurrence, usually considering the number of quadrats in which a given species is reported relative to the total number of quadrats, disregarding the number of individuals per quadrat:

F (%) = (number of quadrats containing the species / total number of quadrats) × 100

The frequency may be presented in several ways, but wordclouds make it easy to understand: the font size used to write the name of each species or family is proportional to its frequency (Figure 6). A sketch of the pairwise association test is given below.
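The following is a minimal Python sketch of the pairwise test. Each quadrat is represented as a set of species; for a pair (x, y), the 2 × 2 table counts quadrats with both, only one, or neither species. With one degree of freedom, SciPy applies Yates' continuity correction by default; whether the chapter's method uses that correction is an assumption, and the species codes are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def association(quadrats, x, y):
    """Chi-square association between species x and y across quadrats."""
    a = sum(1 for q in quadrats if x in q and y in q)      # both present
    b = sum(1 for q in quadrats if y in q and x not in q)  # only y
    c = sum(1 for q in quadrats if x in q and y not in q)  # only x
    d = len(quadrats) - a - b - c                          # neither
    chi2, p, dof, expected = chi2_contingency(np.array([[a, b], [c, d]]))
    positive = a > expected[0][0]  # more co-occurrence than chance expects
    return chi2, p, positive

quadrats = [{"ECHCG", "CYPES"}, {"ECHCG", "CYPES"}, {"AMARE"},
            {"ECHCG", "CYPES", "AMARE"}, {"CYPES"}, set()]
print(association(quadrats, "ECHCG", "CYPES"))
```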
4. Objections to the phytosociological method and application of the theory

Although used for some years as a tool in weed science, phytosociological surveys applied to arable fields have their drawbacks. As these methods were originally designed to describe natural environments, usually free from heavy anthropogenic effects, adaptations were needed for the agricultural context, where the flora currently present in the field is mostly a result of the last cropping season's management (soil tillage system, fertilization levels, and herbicides applied, among other factors). The main adaptations were (1) to establish the basic five steps for a reasonably complete phytosociological analysis, as described in the present text (overall infestation, phytosociological tables, diversity, similarity, and association); (2) to suggest and give preference to formulas that are less impacted by the factors that could most distort the phytosociological analysis, mainly for diversity and similarity; and (3) to apply the method not only directly to the current flora in a given area but also to its seedbank, through a germination study in a controlled environment, as suggested by Concenço, later comparing the two studies (surface and seedbank samplings).

Another issue in the application of the method is the difficulty of both data collection in the field and data processing in the office, compared to what researchers are used to analyzing. Most weed science researchers adopt the visual method of evaluation for quantifying the occurrence of weeds in a given arable field, but this information is as easy to obtain as it is vague: it consists in noting the percentage of occurrence of each weed species in the field or, alternatively (mainly following a herbicide application), evaluating the percentage of weed control some days after application. This method, although traditional and easy, supplies no information regarding the long-term behavior of weeds in the evaluated fields or their trend of occurrence in the next cropping seasons. Another difficulty in applying phytosociological methods to weed surveys is probably convincing established weed science researchers to shift from the traditional evaluation methods (based on percentage of weed occurrence and control) to the phytosociological scope. The literature, however, shows that the adoption of such methods is highly positive for the sustainability of herbicide recommendations and weed management in the long term. One of the first Brazilian studies to apply the phytosociological method to weed science, although in simple terms, was conducted by Carvalho and Pitelli. Later, studies by Jakelaitis, Tuffi-Santos, Adegas, and several others successfully adopted the phytosociological method for studies in weed science. Although the use of phytosociological methods in weed science is not new, the set of methods adopted is not standardized and ranges from basic to complex and from suitable to nearly unsuitable, depending on the paper. This makes it almost impossible to compare studies conducted by different researchers, as formulas and procedures are unlikely to be equivalent. The present chapter, however, partly intends to standardize the methods and their application.

5. Future insights

Weed science researchers will soon note that the traditional way of evaluating weed occurrence, infestation, or severity needs to move from passive, subjective, visually based assessment to data-based decisions, and phytosociology tends to be consolidated as the preponderant tool in this new universe of weed science. The difficulty of data collection for the phytosociological methods is still to be solved, but in the next few years technologies such as GPS-driven drones with infrared imaging may make data collection easier. Regarding data processing, the office work may still be an issue, but there are specific scripts for statistical software which could make the task of processing and interpreting the data easier, such as the one published by Concenço, which makes it possible to automate phytosociological data processing in the statistical environment "R". This script, unfortunately, does not process the plant association section in its current version but is still a valuable tool that is freely available and adaptable. Finally, an automated integration from data collection in the field by GPS-driven drones, through transfer to the office and automatic processing by phytosociology software, would provide farmers and technicians with valuable tables and graphs to support both immediate and long-term decision-making in weed management.

6. Final considerations

This chapter discussed how some elements of phytosociology in ecology and botany can be used in weed science as a tool for several inferences in arable fields. This is important to support recommendations for good agricultural practices while keeping up with biological conservation. When choosing a sampling methodology for population studies in weed science, two questions should be taken into consideration: what is the objective of the sampling, and what are the characteristics of the populations to be evaluated. Weed relationships with edaphoclimatic traits show that the weed community is sensitive to variations in pH, water, temperature, and other resources and conditions.
Each weed population is mostly competitive and dominant in those locations that meet its particular conditions; this would allow weeds to be used as bioindicators and would help in understanding the long-term dynamics of weed communities. It is important to evaluate other sampling methods in order to know the sensitivity and accuracy of such alternatives, in comparison with the ones presented here, for distinct sampling objectives. One should always bear in mind that the more species coexist in an association, and the larger the plants, the bigger the minimum area to be sampled should be.

Phytosociological surveys are useful tools to shed light on the dynamics of weed species and their interactions in arable fields. The methods, however, are most diverse, as several indices and coefficients are available, depending on the literature used as a reference by a given author. Basic care should be taken, however, when sampling and describing the plant community. The following sequence of steps is proposed as suitable for phytosociological studies: (1) overall infestation; (2) phytosociological tables and/or graphs; (3) intra-characterization by diversity coefficients; (4) inter-characterization and area grouping by multivariate analysis; and (5) weed association through contingency tables by means of the chi-square test. Other ways of presenting data may also be suitable, depending on the nature of the environment to be studied (arable fields in this case). The literature is not clear about a single set of methods for phytosociological studies, and one will hardly find all the information and equations in the same source. Even classical references miss some aspects of phytosociological surveys, and some papers have been published using an unsuitable set of ecological methods to describe the weed community. In the present chapter, a summary of methods was made in order to assist weed science researchers through their first steps into the realm of phytosociology.
The Story of Cain and Abel 1 Now#tn The disjunctive clause (conjunction + subject + verb) introduces a new episode in the ongoing narrative. the man had marital relations with#tn Heb “the man knew,” a frequent euphemism for sexual relations. his wife Eve, and she became pregnant#tn Or “she conceived.” and gave birth to Cain. Then she said, “I have created#tn Here is another sound play (paronomasia) on a name. The sound of the verb קָנִיתִי (qaniti, “I have created”) reflects the sound of the name Cain in Hebrew (קַיִן, qayin) and gives meaning to it. The saying uses the Qal perfect of קָנָה (qanah). There are two homonymic verbs with this spelling, one meaning “obtain, acquire” and the other meaning “create” (see Gen 14:19, 22; Deut 32:6; Ps 139:13; Prov 8:22). The latter fits this context very well. Eve has created a man. a man just as the Lord did!”#tn Heb “with the Lord.” The particle אֶת־ (’et) is not the accusative/object sign, but the preposition “with” as the ancient versions attest. Some take the preposition in the sense of “with the help of” (see BDB 85 s.v. אֵת; cf. NEB, NIV, NRSV), while others prefer “along with” in the sense of “like, equally with, in common with” (see Lev 26:39; Isa 45:9; Jer 23:28). Either works well in this context; the latter is reflected in the present translation. Some understand אֶת־ as the accusative/object sign and translate, “I have acquired a man – the Lord.” They suggest that the woman thought (mistakenly) that she had given birth to the incarnate Lord, the Messiah who would bruise the Serpent’s head. This fanciful suggestion is based on a questionable allegorical interpretation of Gen 3:15 (see the note there on the word “heel”).sn Since Exod 6:3 seems to indicate that the name Yahweh (יְהוָה, yÿhvah, translated Lord) was first revealed to Moses (see also Exod 3:14), it is odd to see it used in quotations in Genesis by people who lived long before Moses. This problem has been resolved in various ways: (1) Source critics propose that Exod 6:3 is part of the “P” (or priestly) tradition, which is at odds with the “J” (or Yahwistic) tradition. (2) Many propose that “name” in Exod 6:3 does not refer to the divine name per se, but to the character suggested by the name. God appeared to the patriarchs primarily in the role of El Shaddai, the giver of fertility, not as Yahweh, the one who fulfills his promises. In this case the patriarchs knew the name Yahweh, but had not experienced the full significance of the name. In this regard it is possible that Exod 6:3b should not be translated as a statement of denial, but as an affirmation followed by a rhetorical question implying that the patriarchs did indeed know God by the name of Yahweh, just as they knew him as El Shaddai. D. A. Garrett, following the lead of F. Andersen, sees Exod 6:2-3 as displaying a paneled A/B parallelism and translates them as follows: (A) “I am Yahweh.” (B) “And I made myself known to Abraham…as El Shaddai.” (A') “And my name is Yahweh”; (B') “Did I not make myself known to them?” (D. A. Garrett, Rethinking Genesis, 21). However, even if one translates the text this way, the Lord’s words do not necessarily mean that he made the name Yahweh known to the fathers. God is simply affirming that he now wants to be called Yahweh (see Exod 3:14-16) and that he revealed himself in prior times as El Shaddai. 
If we stress the parallelism with B, the implied answer to the concluding question might be: “Yes, you did make yourself known to them – as El Shaddai!” The main point of the verse would be that El Shaddai, the God of the fathers, and the God who has just revealed himself to Moses as Yahweh are one and the same. (3) G. J. Wenham suggests that pre-Mosaic references to Yahweh are the product of the author/editor of Genesis, who wanted to be sure that Yahweh was identified with the God of the fathers. In this regard, note how Yahweh is joined with another divine name or title in Gen 9:26-27; 14:22; 15:2, 8; 24:3, 7, 12, 27, 42, 48; 27:20; 32:9. The angel uses the name Yahweh when instructing Hagar concerning her child’s name, but the actual name (Ishma-el, “El hears”) suggests that El, not Yahweh, originally appeared in the angel’s statement (16:11). In her response to the angel Hagar calls God El, not Yahweh (16:13). In 22:14 Abraham names the place of sacrifice “Yahweh Will Provide” (cf. v. 16), but in v. 8 he declares, “God will provide.” God uses the name Yahweh when speaking to Jacob at Bethel (28:13) and Jacob also uses the name when he awakens from the dream (28:16). Nevertheless he names the place Beth-el (“house of El”). In 31:49 Laban prays, “May Yahweh keep watch,” but in v. 50 he declares, “God is a witness between you and me.” Yahweh’s use of the name in 15:7 and 18:14 may reflect theological idiom, while the use in 18:19 is within a soliloquy. (Other uses of Yahweh in quotations occur in 16:2, 5; 24:31, 35, 40, 42, 44, 48, 50, 51, 56; 26:22, 28-29; 27:7, 27; 29:32-35; 30:24, 30; 49:18. In these cases there is no contextual indication that a different name was originally used.) For a fuller discussion of this proposal, see G. J. Wenham, “The Religion of the Patriarchs,” Essays on the Patriarchal Narratives, 189-93. 2 Then she gave birth#tn Heb “And she again gave birth.” to his brother Abel.#sn The name Abel is not defined here in the text, but the tone is ominous. Abel’s name, the Hebrew word הֶבֶל (hevel), means “breath, vapor, vanity,” foreshadowing Abel’s untimely and premature death. Abel took care of the flocks, while Cain cultivated the ground.#tn Heb “and Abel was a shepherd of the flock, and Cain was a worker of the ground.” The designations of the two occupations are expressed with active participles, רֹעֵה (ro’eh, “shepherd”) and עֹבֵד (’oved, “worker”). Abel is occupied with sheep, whereas Cain is living under the curse, cultivating the ground. 3 At the designated time#tn Heb “And it happened at the end of days.” The clause indicates the passing of a set period of time leading up to offering sacrifices. Cain brought some of the fruit of the ground for an offering#tn The Hebrew term מִנְחָה (minkhah, “offering”) is a general word for tribute, a gift, or an offering. It is the main word used in Lev 2 for the dedication offering. This type of offering could be comprised of vegetables. The content of the offering (vegetables, as opposed to animals) was not the critical issue, but rather the attitude of the offerer. to the Lord. 4 But Abel brought#tn Heb “But Abel brought, also he….” The disjunctive clause (conjunction + subject + verb) stresses the contrast between Cain’s offering and Abel’s. 
some of the firstborn of his flock – even the fattest#tn Two prepositional phrases are used to qualify the kind of sacrifice that Abel brought: “from the firstborn” and “from the fattest of them.” These also could be interpreted as a hendiadys: “from the fattest of the firstborn of the flock.” Another option is to understand the second prepositional phrase as referring to the fat portions of the sacrificial sheep. In this case one may translate, “some of the firstborn of his flock, even some of their fat portions” (cf. NEB, NIV, NRSV).sn Here are two types of worshipers – one (Cain) merely discharges a duty at the proper time, while the other (Abel) goes out of his way to please God with the first and the best. of them. And the Lord was pleased with#tn The Hebrew verb שָׁעָה (sha’ah) simply means “to gaze at, to have regard for, to look on with favor [or “with devotion”].” The text does not indicate how this was communicated, but it indicates that Cain and Abel knew immediately. Either there was some manifestation of divine pleasure given to Abel and withheld from Cain (fire consuming the sacrifice?), or there was an inner awareness of divine response. Abel and his offering, 5 but with Cain and his offering he was not pleased.#sn The Letter to the Hebrews explains the difference between the brothers as one of faith – Abel by faith offered a better sacrifice. Cain’s offering as well as his reaction to God’s displeasure did not reflect faith. See further B. K. Waltke, “Cain and His Offering,” WTJ 48 (1986): 363-72. So Cain became very angry,#tn Heb “and it was hot to Cain.” This Hebrew idiom means that Cain “burned” with anger. and his expression was downcast.#tn Heb “And his face fell.” The idiom means that the inner anger is reflected in Cain’s facial expression. The fallen or downcast face expresses anger, dejection, or depression. Conversely, in Num 6 the high priestly blessing speaks of the Lord lifting up his face and giving peace. 6 Then the Lord said to Cain, “Why are you angry, and why is your expression downcast? 7 Is it not true#tn The introduction of the conditional clause with an interrogative particle prods the answer from Cain, as if he should have known this. It is not a condemnation, but an encouragement to do what is right. that if you do what is right, you will be fine?#tn The Hebrew text is difficult, because only one word occurs, שְׂאֵת (sÿ’et), which appears to be the infinitive construct from the verb “to lift up” (נָאָשׂ, na’as). The sentence reads: “If you do well, uplifting.” On the surface it seems to be the opposite of the fallen face. Everything will be changed if he does well. God will show him favor, he will not be angry, and his face will reflect that. But more may be intended since the second half of the verse forms the contrast: “If you do not do well, sin is crouching….” Not doing well leads to sinful attack; doing well leads to victory and God’s blessing. But if you do not do what is right, sin is crouching#tn The Hebrew term translated “crouching” (רֹבֵץ, rovets) is an active participle. Sin is portrayed with animal imagery here as a beast crouching and ready to pounce (a figure of speech known as zoomorphism). An Akkadian cognate refers to a type of demon; in this case perhaps one could translate, “Sin is the demon at the door” (see E. A. Speiser, Genesis [AB], 29, 32-33). at the door. 
It desires to dominate you, but you must subdue it.”#tn Heb “and toward you [is] its desire, but you must rule over it.” As in Gen 3:16, the Hebrew noun “desire” refers to an urge to control or dominate. Here the desire is that which sin has for Cain, a desire to control for the sake of evil, but Cain must have mastery over it. The imperfect is understood as having an obligatory sense. Another option is to understand it as expressing potential (“you can have [or “are capable of having”] mastery over it.”). It will be a struggle, but sin can be defeated by righteousness. In addition to this connection to Gen 3, other linguistic and thematic links between chaps. 3 and 4 are discussed by A. J. Hauser, “Linguistic and Thematic Links Between Genesis 4:1-6 and Genesis 2–3,” JETS 23 (1980): 297-306. 8 Cain said to his brother Abel, “Let’s go out to the field.”#tc The MT has simply “and Cain said to Abel his brother,” omitting Cain’s words to Abel. It is possible that the elliptical text is original. Perhaps the author uses the technique of aposiopesis, “a sudden silence” to create tension. In the midst of the story the narrator suddenly rushes ahead to what happened in the field. It is more likely that the ancient versions (Samaritan Pentateuch, LXX, Vulgate, and Syriac), which include Cain’s words, “Let’s go out to the field,” preserve the original reading here. After writing אָחִיו (’akhiyv, “his brother”), a scribe’s eye may have jumped to the end of the form בַּשָּׂדֶה (basadeh, “to the field”) and accidentally omitted the quotation. This would be an error of virtual homoioteleuton. In older phases of the Hebrew script the sequence יו (yod-vav) on אָחִיו is graphically similar to the final ה (he) on בַּשָּׂדֶה. While they were in the field, Cain attacked#tn Heb “arose against” (in a hostile sense). his brother#sn The word “brother” appears six times in vv. 8-11, stressing the shocking nature of Cain’s fratricide (see 1 John 3:12). Abel and killed him. 9 Then the Lord said to Cain, “Where is your brother Abel?”#sn Where is Abel your brother? Again the Lord confronts a guilty sinner with a rhetorical question (see Gen 3:9-13), asking for an explanation of what has happened. And he replied, “I don’t know! Am I my brother’s guardian?”#tn Heb “The one guarding my brother [am] I?”sn Am I my brother’s guardian? Cain lies and then responds with a defiant rhetorical question of his own in which he repudiates any responsibility for his brother. But his question is ironic, for he is responsible for his brother’s fate, especially if he wanted to kill him. See P. A. Riemann, “Am I My Brother’s Keeper?” Int 24 (1970): 482-91. 10 But the Lord said, “What have you done?#sn What have you done? Again the Lord’s question is rhetorical (see Gen 3:13), condemning Cain for his sin. The voice#tn The word “voice” is a personification; the evidence of Abel’s shed blood condemns Cain, just as a human eyewitness would testify in court. For helpful insights, see G. von Rad, Biblical Interpretations in Preaching; and L. Morris, “The Biblical Use of the Term ‘Blood,’” JTS 6 (1955/56): 77-82. of your brother’s blood is crying out to me from the ground! 11 So now, you are banished#tn Heb “cursed are you from the ground.” As in Gen 3:14, the word “cursed,” a passive participle from אָרָר (’arar), either means “punished” or “banished,” depending on how one interprets the following preposition. 
If the preposition is taken as indicating source, then the idea is “cursed (i.e., punished) are you from [i.e., “through the agency of”] the ground” (see v. 12a). If the preposition is taken as separative, then the idea is “cursed and banished from the ground.” In this case the ground rejects Cain’s efforts in such a way that he is banished from the ground and forced to become a fugitive out in the earth (see vv. 12b, 14). from the ground, which has opened its mouth to receive your brother’s blood from your hand. 12 When you try to cultivate#tn Heb “work.” the ground it will no longer yield#tn Heb “it will not again (תֹסֵף, tosef) give (תֵּת, tet),” meaning the ground will no longer yield. In translation the infinitive becomes the main verb, and the imperfect verb form becomes adverbial. its best#tn Heb “its strength.” for you. You will be a homeless wanderer#tn Two similar sounding synonyms are used here: נָע וָנָד (na’ vanad, “a wanderer and a fugitive”). This juxtaposition of synonyms emphasizes the single idea. In translation one can serve as the main description, the other as a modifier. Other translation options include “a wandering fugitive” and a “ceaseless wanderer” (cf. NIV). on the earth.” 13 Then Cain said to the Lord, “My punishment#tn The primary meaning of the Hebrew word עָוֹן (’avon) is “sin, iniquity.” But by metonymy it can refer to the “guilt” of sin, or to “punishment” for sin. The third meaning applies here. Just before this the Lord announces the punishment for Cain’s actions, and right after this statement Cain complains of the severity of the punishment. Cain is not portrayed as repenting of his sin. is too great to endure!#tn Heb “great is my punishment from bearing.” The preposition מִן (min, “from”) is used here in a comparative sense. 14 Look! You are driving me off the land#tn Heb “from upon the surface of the ground.” today, and I must hide from your presence.#sn I must hide from your presence. The motif of hiding from the Lord as a result of sin also appears in Gen 3:8-10. I will be a homeless wanderer on the earth; whoever finds me will kill me.” 15 But the Lord said to him, “All right then,#tn The Hebrew term לָכֵן (lakhen, “therefore”) in this context carries the sense of “Okay,” or “in that case then I will do this.” if anyone kills Cain, Cain will be avenged seven times as much.”#sn The symbolic number seven is used here to emphasize that the offender will receive severe punishment. For other rhetorical and hyperbolic uses of the expression “seven times over,” see Pss 12:6; 79:12; Prov 6:31; Isa 30:26. Then the Lord put a special mark#tn Heb “sign”; “reminder.” The term “sign” is not used in the translation because it might imply to an English reader that God hung a sign on Cain. The text does not identify what the “sign” was. It must have been some outward, visual reminder of Cain’s special protected status. on Cain so that no one who found him would strike him down.#sn God becomes Cain’s protector. Here is common grace – Cain and his community will live on under God’s care, but without salvation. 16 So Cain went out from the presence of the Lord and lived in the land of Nod,#sn The name Nod means “wandering” in Hebrew (see vv. 12, 14). east of Eden. The Beginning of Civilization 17 Cain had marital relations#tn Heb “knew,” a frequent euphemism for sexual relations. with his wife, and she became pregnant#tn Or “she conceived.” and gave birth to Enoch. 
Cain was building a city, and he named the city after#tn Heb “according to the name of.” his son Enoch. 18 To Enoch was born Irad, and Irad was the father#tn Heb “and Irad fathered.” of Mehujael. Mehujael was the father of Methushael, and Methushael was the father of Lamech. 19 Lamech took two wives for himself; the name of the first was Adah, and the name of the second was Zillah. 20 Adah gave birth to Jabal; he was the first#tn Heb “father.” In this passage the word “father” means “founder,” referring to the first to establish such lifestyles and occupations. of those who live in tents and keep#tn The word “keep” is not in the Hebrew text, but is supplied in the translation. Other words that might be supplied instead are “tend,” “raise” (NIV), or “have” (NRSV). livestock. 21 The name of his brother was Jubal; he was the first of all who play the harp and the flute. 22 Now Zillah also gave birth to Tubal-Cain, who heated metal and shaped#tn The traditional rendering here, “who forged” (or “a forger of”) is now more commonly associated with counterfeit or fraud (e.g., “forged copies” or “forged checks”) than with the forging of metal. The phrase “heated metal and shaped [it]” has been used in the translation instead. all kinds of tools made of bronze and iron. The sister of Tubal-Cain was Naamah. 23 Lamech said to his wives, “Adah and Zillah! Listen to me! You wives of Lamech, hear my words! I have killed a man for wounding me, a young man#tn The Hebrew term יֶלֶד (yeled) probably refers to a youthful warrior here, not a child. for hurting me. 24 If Cain is to be avenged seven times as much, then Lamech seventy-seven times!”#sn Seventy-seven times. Lamech seems to reason this way: If Cain, a murderer, is to be avenged seven times (see v. 15), then how much more one who has been unjustly wronged! Lamech misses the point of God’s merciful treatment of Cain. God was not establishing a principle of justice when he warned he would avenge Cain’s murder. In fact he was trying to limit the shedding of blood, something Lamech wants to multiply instead. The use of “seventy-seven,” a multiple of seven, is hyperbolic, emphasizing the extreme severity of the vengeance envisioned by Lamech. 25 And Adam had marital relations#tn Heb “knew,” a frequent euphemism for sexual relations. with his wife again, and she gave birth to a son. She named him Seth, saying, “God has given#sn The name Seth probably means something like “placed”; “appointed”; “set”; “granted,” assuming it is actually related to the verb that is used in the sentiment. At any rate, the name שֵׁת (shet) and the verb שָׁת (shat, “to place, to appoint, to set, to grant”) form a wordplay (paronomasia). me another child#tn Heb “offspring.” in place of Abel because Cain killed him.” 26 And a son was also born to Seth, whom he named Enosh. At that time people#tn The word “people” is not in the Hebrew text, but is supplied in the translation. The construction uses a passive verb without an expressed subject. “To call was begun” can be interpreted to mean that people began to call. began to worship#tn Heb “call in the name.” The expression refers to worshiping the Lord through prayer and sacrifice (see Gen 12:8; 13:4; 21:33; 26:25). See G. J. Wenham, Genesis (WBC), 1:116. the Lord.
Allergy is defined as an immune-mediated inflammatory response to common environmental allergens that are otherwise harmless. The diagnosis of allergy is dependent on a history of symptoms on exposure to an allergen together with the detection of allergen-specific IgE. The detection of allergen-specific IgE may be reliably performed by blood specific IgE testing or skin prick testing. Skin prick testing is not without its attendant risks, and appropriate precautions need to be taken. A doctor should be present for safety and test interpretation. Accurate diagnosis of allergies opens up therapeutic options that are otherwise not appropriate, such as allergen immunotherapy and allergen avoidance. Allergen immunotherapy is an effective treatment for stinging insect allergy, allergic rhinitis and asthma. The most effective methods for primary prevention of allergic disease in children that can currently be recommended are breastfeeding and ceasing smoking. Emerging trends in allergen treatment include sublingual immunotherapy.

Allergic diseases are common and increasing in prevalence in Western countries, resulting in morbidity and mortality in all age groups. Drug therapy offers the opportunity for effective treatment, and a clear understanding of the spectrum of allergic diseases and the accurate identification of environmental triggers can enable the doctor to recommend optimal allergen-specific treatment, thereby minimising morbidity and mortality. To give general practitioners and non-allergy specialists a framework on which to base the clinical assessment of patients with allergic disease, we outline here the general principles of diagnosis, treatment and prevention. Allergy can be defined as a detrimental immune-mediated hypersensitivity response to common environmental substances. While the word “allergy” can mean many things to the lay person, the clinician needs to keep in mind that diagnosis of allergies is critically dependent on identifying the immune processes involved in the allergic response. The immune processes of allergy usually rely on the production of IgE antibodies specific to common allergens. Allergic diseases are caused by the activation of mast cells and basophils through cell-surface-bound IgE. This causes the release of histamine and other mediators, leading to allergic inflammation. Chronic allergic inflammation characteristically involves a cellular tissue infiltrate of eosinophils and lymphocytes associated with chronic tissue damage. This definition of allergy is intentionally restrictive and, for the purposes of this article, excludes cutaneous contact allergy, which is mediated by T cells rather than IgE. In the community, diverse symptoms are often attributed to “allergy”. A useful test for the clinician is to ask whether the symptoms are, or could be, IgE-mediated (IgE-mediated symptoms include asthma, rhinitis, urticaria, eczema, food hypersensitivity and anaphylaxis). If not, then the symptoms are unlikely to be the result of true allergy. IgE is produced by B lymphocytes directed by cytokine release from T helper (TH) lymphocytes (Box 1). In people with allergies, the TH lymphocytes secrete cytokines that stimulate the production of IgE antibodies to allergens. The condition of secreting IgE in response to common environmental allergens is called “atopy”.
Predisposition to atopy is determined by both genetic and environmental influences, particularly in infancy, when immune responses to allergens are maturing, and T lymphocyte cytokine production is influenced by environmental exposures. Allergic diseases include allergic rhinitis and asthma; food and stinging insect allergies leading to anaphylaxis; and allergic dermatitis. The diagnosis of allergic disease depends on identifying both the symptoms on allergen exposure and the relevant allergen-specific IgE. For example, an individual who develops rhinitis in early spring may be sensitive to grass pollen, and identifying IgE specific to rye grass pollen confirms the likely aetiology. However, identification of house dust mite-specific IgE in the same individual in the absence of rye grass pollen-specific IgE may suggest it is not an allergic process, as house dust mite is a perennial (year-round) allergen, and seasonal exacerbation of symptoms is unlikely to be related to exposure to this agent. The manifestation of allergic diseases changes throughout life: food allergies and eczema are most likely to develop in infants, asthma in young children, and rhinitis in older children and adults (Box 2).1 There is increasing evidence that appropriate treatment of allergies can prevent and alter the natural history of allergic diseases. Optimal treatment requires accurate determination of allergic triggers. Moreover, if an allergen avoidance strategy is to be pursued in relation to food or aeroallergens, it is critical to minimise the inconvenience of this strategy by making a correct diagnosis as early as possible. Accurate diagnosis of allergic disease and the relevant allergens helps to determine appropriate treatment options. Allergen-specific IgE can be detected by skin prick testing and by blood specific IgE testing (ie, serum allergen-specific IgE testing [as distinct from total IgE testing]). Skin prick testing relies on the introduction of a very small amount of allergen extract into the epidermis using a standardised method to ensure reproducibility and comparability of results (Box 3). The results of skin prick testing are read at 10 minutes (for the positive control [histamine dihydrochloride or codeine]) and 15 minutes (for the allergen), and the diameter of the resulting weal is recorded in two dimensions. By convention, a positive test is one in which the mean of the two weal diameters is at least 3 mm greater than the negative control (saline), although if the reaction is as small as this, the relevance of the response is in question. Positive and negative controls are critical to enable interpretation of test results.2 When performed correctly, skin prick testing with aeroallergens (eg, house dust mite allergen, pollens, domestic pet allergens) shows good correlation with blood specific IgE testing in a semi-quantitative manner.3 However, careful patient selection for skin prick testing is critical for both safety and interpretation: absolute and relative contraindications to skin prick testing are listed in Box 4. Although very rare, systemic reactions to skin prick testing, and even fatalities, have been reported, and therefore equipment and supplies for treating anaphylaxis (including oxygen and adrenaline) should be available at the testing site. Systemic reactions to skin prick testing are more common in infants or in cases where the reaction being investigated is systemic (as in true food allergies or allergies to latex or stinging insects). 
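The weal-reading convention described above (mean of the two recorded weal diameters at least 3 mm greater than the saline negative control) amounts to a simple decision rule, sketched below purely for illustration. The function names, the tuple representation of the two diameters, and the default threshold are assumptions of this sketch, not part of any clinical guideline or laboratory software.

```python
# Minimal illustrative sketch of the skin prick test weal-reading convention.
# Not clinical software: names and structure are hypothetical.

def mean_weal_diameter(d1_mm: float, d2_mm: float) -> float:
    """Mean of the two weal diameters recorded in two dimensions."""
    return (d1_mm + d2_mm) / 2.0

def is_positive_skin_prick(allergen_weal: tuple[float, float],
                           control_weal: tuple[float, float],
                           threshold_mm: float = 3.0) -> bool:
    """By convention, a test reads positive when the mean allergen weal
    exceeds the mean negative-control (saline) weal by at least threshold_mm.
    A real reading would also confirm that the positive control (histamine
    dihydrochloride or codeine) produced a weal, which is not modelled here."""
    difference = (mean_weal_diameter(*allergen_weal)
                  - mean_weal_diameter(*control_weal))
    return difference >= threshold_mm

# Example: a 6 x 8 mm allergen weal against a 1 x 1 mm saline control is positive.
print(is_positive_skin_prick((6.0, 8.0), (1.0, 1.0)))  # True
```

As the article notes, a weal only just clearing this threshold leaves the clinical relevance of the response in question, so the Boolean result is a starting point for interpretation rather than a verdict.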
In the high-risk cases just described (infants, or investigations of systemic reactions such as food, latex or stinging insect allergy), skin prick tests should be performed with particular caution or avoided in favour of blood specific IgE testing. Dermatographism is a skin condition in which wealing occurs after stroking of the skin. When present, it is a contraindication to skin prick testing, as the tests will be very difficult to interpret due to formation of weals in all tests. Commercially prepared allergens for skin prick testing are usually standardised, either against laboratory controls or by in vivo methods, to ensure comparability between tests and reagents. Tests using mixes of foods or inhalant allergens are not recommended, as they can give results that are difficult to interpret. Where standardised reagents are not available, crude allergens can be used for testing, but the results require interpretation by an allergy specialist. Intradermal allergy testing (in which a small amount of diluted allergen is injected into the dermis) has a very high non-specific reaction rate, but is useful in specific protocols for investigating drug and stinging insect allergy. Its use should be restricted to specialist clinics. Other methods of skin testing, such as “scratch” testing, are no longer used, owing to inconsistency of results. Doctors wishing to conduct skin prick testing should refer to specific guidelines for conducting skin prick tests.2 Standardised conduct of testing is critical to identifying the relevant allergens, and interpretation of the results is equally critical. Where feasible, the requesting doctor should observe the patient’s skin prick tests to aid interpretation.

Blood specific IgE testing to a wide range of allergens detects and quantifies allergen-specific IgE. It can be used to diagnose all types of allergies, but is generally less sensitive than skin prick testing. Blood specific IgE testing is particularly useful when anaphylaxis is being investigated, as testing carries no associated risk of anaphylaxis and there are very few contraindications. Blood specific IgE testing can be performed in patients who are taking antihistamines or other drugs that are contraindicated in skin prick testing, and in patients whose risk of an adverse reaction to skin prick testing is high (eg, those with unstable asthma or anaphylaxis). Generally, a blood specific IgE grading of ≥ 2 (a ratio of specific to non-specific binding) denotes a specific response to an allergen. Blood specific IgE testing can be difficult to interpret in patients who have very high levels of total IgE (> 1000 kU/L) (eg, patients with eczema), as they may have low-grade reactions to many allergens. Although blood specific levels of IgG antibodies, especially to food allergens, can be measured, such testing should not be requested, as there is no evidence that it is relevant to allergy diagnosis.

[Figure: a sublingual immunotherapy kit. Immunotherapy is an effective way of inducing physiological and immunological tolerance to allergens such as house dust mite allergen and grass pollens; increasing evidence supports the effectiveness of the sublingual route of administration.]

Careful avoidance of the specific allergens responsible for allergic disease should always be the first consideration in managing patients with allergies. This is the primary form of treatment for food allergies and some stinging insect allergies, as avoidance can be a very effective strategy if patients are well educated about precautionary measures.
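The interpretive heuristics for blood specific IgE stated above can likewise be sketched in code. The grading cut-off and the high-total-IgE caution restate the text; the function itself, its names, and its return format are illustrative assumptions, not a validated clinical rule.

```python
# Illustrative sketch of the blood specific IgE interpretation heuristics
# described in the text. Not a validated clinical decision tool.

def interpret_blood_ige(grade: int, total_ige_ku_l: float) -> str:
    """Grade >= 2 (ratio of specific to non-specific binding) generally
    denotes a specific response; very high total IgE (> 1000 kU/L, eg, in
    eczema) warrants cautious interpretation because of possible low-grade
    reactions to many allergens."""
    verdict = ("specific IgE response to this allergen" if grade >= 2
               else "no clear specific response")
    if total_ige_ku_l > 1000:
        verdict += " (interpret cautiously: very high total IgE)"
    return verdict

print(interpret_blood_ige(3, 450.0))
print(interpret_blood_ige(1, 2400.0))
```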
Returning to allergen avoidance: a person allergic to jumper-ant venom, for example, can minimise the chances of being stung by wearing shoes and long-sleeved shirts when outdoors and gloves when gardening. Accurate diagnosis of food allergies can enable patients to minimise the disruption to their lives caused by an unnecessarily restrictive diet. However, allergen avoidance is particularly contentious when applied to the area of aeroallergens and respiratory allergic disease. People who are clearly allergic to animal allergens (eg, cat allergens) are generally not troubled by the allergy unless they encounter the animal, providing a strong case for allergen avoidance. Similarly, to give an example from the health care environment, avoidance of powdered latex gloves has been effective in reducing symptomatic latex allergy and the incidence of new cases in hospital staff.5 But the situation is less clear with respect to house dust mite allergen, the most common domestic allergen in Australia. While older trials of allergen avoidance suggested that it reduced asthma symptoms, bronchial reactivity and eczema, two recent studies in patients with asthma6 and rhinitis,7 confirmed by a meta-analysis,8 question these benefits and suggest that further studies of secondary treatment of asthma by allergen avoidance are unlikely to prove that the method is effective. So, what should the treating doctor recommend? The evidence suggests that house dust mite avoidance should be recommended cautiously, if at all, and certainly only in people with clear sensitivity to house dust mite allergen. In symptomatic animal allergy, there is some evidence that removal from the home of a pet to which a person is allergic significantly reduces allergic symptoms and medication requirements.9 Although it is intuitively reasonable to reduce relevant allergen exposure in people with allergic symptoms, recent studies challenge the effectiveness of universal allergen avoidance strategies for allergies to domestic allergens.

Allergen-specific immunotherapy involves administration of increasing doses of allergen to a patient to achieve clinical and immunological tolerance over time. Allergen injection immunotherapy induces T cell tolerance by several methods, including decreased allergen-induced proliferation, alteration of secreted cytokines, stimulation of apoptosis, and the production of T regulatory cells. This results in a reduction in inflammatory cells and mediators in the affected tissues, the production of blocking antibodies, and the suppression of IgE.10 The only absolute indication for immunotherapy is in patients who develop systemic reactions to insect venom, in whom incremental subcutaneous doses of venom can achieve tolerance to insect stings in 80%–90% of cases.11 However, immunotherapy for stinging insect sensitivity needs to be continued for at least 5 years to achieve durable tolerance.12 Conventional (subcutaneous) immunotherapy for allergic respiratory disease is clearly effective compared with placebo and requires 3 or more years of treatment to obtain durable efficacy. Subcutaneous immunotherapy is very effective for seasonal allergic rhinitis caused by grass pollens.
It has been shown in some studies to reduce symptoms by over 60%.13 While not first-line treatment for asthma, allergen immunotherapy has been shown to be effective in reducing airway responsiveness and exacerbation rates.14 Although the benefits of subcutaneous immunotherapy are apparent in both asthma and allergic rhinitis, the use of immunotherapy needs to be balanced against the inconvenience of its delivery and the risks associated with anaphylaxis due to allergen administration. More recently, allergen immunotherapy for aeroallergens has been delivered by sublingual/swallow immunotherapy (SLIT). Meta-analysis of the many trials of this form of treatment confirms its safety and efficacy,15 but there are insufficient trials comparing sublingual immunotherapy with subcutaneous immunotherapy under similar dosing regimens. Moreover, efficacy with some allergens and in children is still under debate. However, if its efficacy for a broad range of allergens is proven, sublingual immunotherapy offers treatment that is probably more acceptable to patients and parents than subcutaneous immunotherapy. The major current drawback of sublingual immunotherapy is cost, as allergen doses required for effective treatment are at least 100-fold greater than those needed for subcutaneous immunotherapy. This translates into medication costs at least three times higher than for subcutaneous therapy. Poor patient adherence to prolonged courses of sublingual treatment may also be a factor reducing effectiveness. There are also promising reports of sublingual immunotherapy for food allergies. While this approach needs to be further confirmed in extensive studies, and will need to be performed in specialist centres because of its high risk, this is a promising avenue of treatment for food allergy — an area in which current treatment relies on long-term avoidance for secondary prevention.16

“What can I do to prevent allergic diseases in my children?” This is a very common question asked by parents. Although there has been much conjecture on how to influence the infantile immune response to reduce the likelihood of allergen sensitisation and subsequent allergic disease, effective specific preventive therapies have not yet been developed. Nevertheless, the following recommendations have some evidence of efficacy in preventing either allergen sensitisation or disease such as wheeze or eczema, or both, especially in children born to high-risk families:17 exclusive breastfeeding to 4–6 months of age; and use of hydrolysed milk formulas for babies unable to be breastfed. Maternal avoidance of certain foods during pregnancy and lactation, by contrast, has not been effective in preventing the onset of allergic disease, and cannot be recommended. As there is conflicting evidence for the effectiveness of avoidance of house dust mites or pets in infancy for preventing subsequent allergic sensitisation, no recommendations can be made at this time regarding these initiatives.

Fact or fiction — true or false? 3. True. There is extensive evidence that allergen immunotherapy is effective in asthma treatment,14 and emerging evidence suggests that it may prevent the development of asthma in children with rhinitis. 4. False. Not all children with asthma are allergic to house dust mite allergen, and allergen avoidance is only indicated if allergy to a specific allergen is clearly identified; moreover, current evidence does not support house dust mite avoidance as an effective treatment for asthma.
More relevant to primary prevention are large trials of multifactorial interventions, such as the Canadian Childhood Asthma Primary Prevention Study18 and the Australian Childhood Asthma Prevention Study.19 The Canadian study, which has now been going for 7 years, shows a significant odds ratio (0.39) for the prevention of current asthma in a cohort of high-risk children as a result of a multifaceted intervention that has included encouragement of breastfeeding and avoidance of house dust, pets and tobacco smoke. The results of other similar trials, and data demonstrating the durability of benefits, will be needed to formulate public health measures in this direction. In addition to allergen avoidance in the presence of established disease, allergen-specific treatments can be used to reduce the development of allergic disease in sensitised individuals. Subcutaneous allergen immunotherapy has been shown to halve the rate of subsequent development of asthma in children with seasonal allergic rhinitis, indicating that allergen immunotherapy may have particular benefits in these children.20 Sublingual immunotherapy may also offer promise in reducing asthma onset in children with both perennial and seasonal rhinitis and asthma.21 Also, daily antihistamine treatment of children with eczema has been shown to reduce rates of asthma in those with grass pollen allergy.22

Researchers into allergen immunotherapy continue to seek safer and more convenient allergy “vaccines”.23 Peptide therapies based on the T lymphocyte epitopes of allergens offer hope in this area, but their clinical utility is yet to be demonstrated for all but cat allergy.24 Another approach to allergic disease has been the development of humanised monoclonal anti-IgE antibodies, which have been found to have some efficacy in treating asthma25 and food allergies.26 Anti-IgE treatments may offer therapeutic opportunities for people with multiple sensitivities. Anti-cytokine therapies have also been investigated to treat asthma. Anti-interleukin-5 therapy has produced some reduction in inflammation, but has failed to improve bronchial hyper-responsiveness.27 Early trials of tumour necrosis factor alpha (TNF-α) blockade have shown some success, but further work is needed on specific blockers in inflammatory pathways.28 Future options for treating allergic disease will focus on allergen-specific routes, including further development of immunotherapy and targeting of specific mediators — an area with a great deal of promise, especially in people with refractory disease.

Box 1: The allergic cascade. In people with allergies, the T helper (TH) lymphocytes secrete cytokines, which predominantly stimulate B lymphocytes to produce IgE antibodies to allergens and also help to stimulate other pro-inflammatory cells, such as eosinophils. Cross-linking of IgE molecules by the allergen leads to mast cell degranulation and the secretion of mediators responsible for allergic inflammation. The points in the allergic cascade at which therapy may interrupt the process are indicated.

Box 2: The atopic march (modified from Spergel.1)

Box 3: Skin prick testing (figure).

Box 4: Skin prick testing (contraindications).
- Patients should be > 2 years of age. (Due to difficulties in interpretation of results of allergy testing in children under 2 years, as well as concerns about safety, such testing is best done by specialist allergists.)
- A diffuse dermatological condition is present; testing must be performed on normal healthy skin.
- Severe dermatographism is present.
- Patient cooperation is poor.
- The patient is unable to stop using drugs that may interfere with the test result.
- Persistent severe or unstable asthma is present.
- There is a severe initial reaction (anaphylaxis).
- The patient is pregnant.
- The patient is taking certain types of drugs.

Box 5: Case scenario*. A 24-year-old man presented complaining of food allergies involving many foods. He reported long-standing seasonal rhinitis, which had been a particular problem when he lived in Europe as a teenager, but had been less troublesome since his return to Australia 3 years previously. He had been aware for many years of oral tingling and minor throat swelling on eating apricots, and generally avoided them. He had had no systemic or gastrointestinal symptoms on eating apricots, and had not experienced anaphylaxis after eating any foods. However, he had noticed more recently that bananas, raw apples, kiwifruit and hazelnut chocolates were giving him similar symptoms to apricots and was concerned that these would get worse. He was otherwise well. He had moved into a new housing estate 2 years previously, and had a cat in the house. On questioning, he recalled having antihistamine treatment for hay fever in August and September of the past year. The patient’s history of rhinitis and the skin prick test result strongly suggest birch pollen allergy. He would have acquired this while growing up in Europe, and it had probably been exacerbated by living in his new house, which was situated in an estate liberally planted with birch trees. This would explain the recurrence of his hay fever in the pollination season for birch pollen. Food sensitivities are a common complication of birch pollen allergy. Described as “oral allergy syndrome”, the condition is thought to be due to cross-reactivity. Foods such as apples, hazelnuts, apricots and other stone fruits cross-react with IgE antibodies to birch pollen, giving rise to oral symptoms but rarely anaphylaxis. The best current treatment is avoidance, if possible, but immunotherapy to birch pollen offers promise of both treating allergic rhinitis and relieving (but probably not curing) oral allergy symptoms.

Box 6: Evidence-based practice tips*.
- Allergen immunotherapy is an effective treatment for stinging insect allergy, allergic rhinitis and asthma (Level I).
- Avoidance of house dust mites cannot currently be recommended to improve asthma or rhinitis (Level I).
- Stopping smoking in the home and when pregnant is an effective way of reducing respiratory disease in children (Level II).
* Based on National Health and Medical Research Council levels of evidence.4

References: - 1. Spergel JM. Atopic march: link to upper airways. Curr Opin Allergy Clin Immunol 2005; 5: 17-21. - 2. Skin tests used in type I allergy testing. Position paper. Sub-Committee on Skin Tests of the European Academy of Allergology and Clinical Immunology. Allergy 1989; 44 Suppl 10: 1-59. - 3. Wood RA, Phipatanakul W, Hamilton RG, Eggleston PA. A comparison of skin prick tests, intradermal skin tests, and RASTs in the diagnosis of cat allergy. J Allergy Clin Immunol 1999; 103 (5 Pt 1): 773-779. - 4. National Health and Medical Research Council. How to use the evidence: assessment and application of scientific evidence. Canberra: NHMRC, 2000. http://www.nhmrc.gov.au/publications/_files/cp69.pdf (accessed Jul 2006). - 5. Latza U, Haamann F, Baur X. Effectiveness of a nationwide interdisciplinary preventive programme for latex allergy. Int Arch Occup Environ Health 2005; 78: 394-402. - 6.
Woodcock A, Forster L, Matthews E, et al; Medical Research Council General Practice Research Framework. Control of exposure to mite allergen and allergen-impermeable bed covers for adults with asthma. N Engl J Med 2003; 349: 225-236. - 7. Terreehorst I, Hak E, Oosting AJ, et al. Evaluation of impermeable covers for bedding in patients with allergic rhinitis. N Engl J Med 2003; 349: 237-246. - 8. Gøtzsche PC, Johansen HK, Schmidt LM, Burr ML. House dust mite control measures for asthma. Cochrane Database Syst Rev 2004; (4): CD001187. - 9. Shirai T, Matsui T, Suzuki K, Chida K. Effect of pet removal on pet allergic asthma. Chest 2005; 127: 1565-1571. - 10. Gardner LM, Thien FC, Douglass JA, et al. Induction of T “regulatory” cells by standardized house dust mite immunotherapy: an increase in CD4+ CD25+ interleukin-10+ T cells expressing peripheral tissue trafficking markers. Clin Exp Allergy 2004; 34: 1209-1219. - 11. Westall GP, Thien FCK, Czarny D, et al. Adverse events associated with rush Hymenoptera venom immunotherapy. Med J Aust 2001; 174: 227-230. <MJA full text> - 12. Moffitt JE, Golden DB, Reisman RE, et al. Stinging insect hypersensitivity: a practice parameter update. J Allergy Clin Immunol 2004; 114: 869-886. - 13. Plaut M, Valentine MD. Clinical practice. Allergic rhinitis. N Engl J Med 2005; 353: 1934-1944. - 14. Abramson MJ, Puy RM, Weiner JM. Allergen immunotherapy for asthma. Cochrane Database Syst Rev 2003; (4): CD001186. - 15. Wilson DR, Lima MT, Durham SR. Sublingual immunotherapy for allergic rhinitis: systematic review and meta-analysis. Allergy 2005; 60: 4-12. - 16. Enrique E, Pineda F, Malek T, et al. Sublingual immunotherapy for hazelnut food allergy: a randomized, double-blind, placebo-controlled study with a standardized hazelnut extract. J Allergy Clin Immunol 2005; 116: 1073-1079. - 17. Prescott SL, Tang MLK. The Australasian Society of Clinical Immunology and Allergy position statement: summary of allergy prevention in children. Med J Aust 2005; 182: 464-467. <MJA full text> - 18. Chan-Yeung M, Ferguson A, Watson W, et al. The Canadian Childhood Asthma Primary Prevention Study: outcomes at 7 years of age. J Allergy Clin Immunol 2005; 116: 49-55. - 19. Peat JK, Mihrshahi S, Kemp AS, et al. Childhood Asthma Prevention Study. Eighteen-month outcomes of house dust mite avoidance and dietary fatty acid modification in the Childhood Asthma Prevention Study (CAPS). J Allergy Clin Immunol 2004; 114: 807-813. - 20. Moller C, Dreborg S, Ferdousi HA, et al. Pollen immunotherapy reduces the development of asthma in children with seasonal rhinoconjunctivitis (the PAT-study). J Allergy Clin Immunol 2002; 109: 251-256. - 21. Novembre E, Galli E, Landi F, et al. Coseasonal sublingual immunotherapy reduces the development of asthma in children with allergic rhinoconjunctivitis. J Allergy Clin Immunol 2004; 114: 851-857. - 22. Warner JO; ETAC Study Group. Early treatment of the atopic child. A double-blinded, randomized, placebo-controlled trial of cetirizine in preventing the onset of asthma in children with atopic dermatitis: 18 months’ treatment and 18 months’ posttreatment follow-up. J Allergy Clin Immunol 2001; 108: 929-937. - 23. Gardner L, O’Hehir RE, Rolland JM. T-cell targeted allergen derivatives for improved efficacy and safety of specific immunotherapy for allergic disease. Curr Med Chem 2003; 2: 351-365. - 24. Alexander C, Tarzi M, Larche M, Kay AB. The effect of Fel d 1-derived T-cell peptides on upper and lower airway outcome measurements in cat-allergic subjects. 
Allergy 2005; 60: 1269-1274. - 25. Niebauer K, Dewilde S, Fox-Rushby J, Revicki DA. Impact of omalizumab on quality-of-life outcomes in patients with moderate-to-severe allergic asthma. Ann Allergy Asthma Immunol 2006; 96: 316-326. - 26. Leung DY, Sampson HA, Yunginger JW, et al; Avon Longitudinal Study of Parents and Children Study Team. Effect of anti-IgE therapy in patients with peanut allergy. N Engl J Med 2003; 348: 986-993. - 27. Flood-Page P, Menzies-Gow A, Phipps S, et al. Anti-IL-5 treatment reduces deposition of ECM proteins in the bronchial subepithelial basement membrane of mild atopic asthmatics. J Clin Invest 2003; 112: 1029-1036. - 28. Berry MA, Hargadon B, Shelley M, et al. Evidence of a role of tumor necrosis factor alpha in refractory asthma. N Engl J Med 2006; 354: 697-708.
Introduction to various reserves of interest.

Maasai Mara Game Reserve. The world-renowned Maasai Mara Game Reserve is a northern extension of the Serengeti National Park, which is located in Tanzania. The Maasai Mara covers an area of 1,510 km². The ecosystem is watered by two rivers, the Talek and the Mara, which are its main water supply. The Mara river is a hurdle to the wildebeest migration, as the wildebeests have to cross it from the Serengeti, many of them perishing in the jaws of crocodiles and big cats. In the western part of the Maasai Mara lie the Siria escarpment and the Loita plains; the rest is Maasai pastoral land. The Maasai Mara Game Reserve is owned and run by the county council of Narok, the richest county council in Kenya thanks to the revenue collected as park entrance fees. Part of the Maasai Mara, called the Mara Triangle, is contracted out and privately run; there, park fees are paid according to the number of nights one spends in the Mara Conservancy. The Maasai Mara lies at an altitude of 1,500 to 2,100 metres. It rains twice a year in the game reserve: the long rains fall between March and May, and the short rains fall in October, November and part of December. June and July are the coldest months, and January and February the hottest. Temperatures during the day rarely exceed 85°F (30°C), and at night they hardly drop below 60°F (15°C). The Maasai Mara is a mosquito-prone area, but campsites are sprayed with mosquito repellents and the tents have treated mosquito nets. The Maasai Mara has a large population of wildlife. All of the Big Five can be seen in this reserve, and a large number of ungulates are also easily visible, including wildebeest, Thomson's gazelles, Grant's gazelles, buffalos, rhinos, impalas, topis, elands, zebras, giraffes and duikers. The common predators include lions, cheetahs, leopards, hyenas, jackals and foxes. The Maasai Mara has over 450 identified bird species; common birds include the common ostrich, secretary bird, kori bustard, hornbills, storks, eagles and vultures.

The wildebeest migration happens annually, a spectacle often described as one of the seven natural wonders of the world. More than a million wildebeest, accompanied by topis, zebras, gazelles and elands, make the journey from the Serengeti National Park to the Maasai Mara Game Reserve. Many of them perish while crossing the Mara river, where crocodiles and big cats prey on the vulnerable ungulates. The migration happens every year in July, after the long rains, when the grass is tall and plentiful, and for the next three months the wildebeests graze down the lush grass of the Maasai Mara. The timing of the migration varies from year to year with the climate: if the rains fail, the wildebeest may delay crossing over, or cross over and turn back, since there is no grass to feed on.

The Maasai people, who speak the Maa language (hence the name Maasai), have held on to their culture even in these times of modernisation. A Maasai's home is called a manyatta, where he lives with his wives and children. From childhood, boys are obligated to look after their fathers' cows, while girls are obligated to do house chores, fetch water and milk the cows. Every fifteen years there is an initiation in which boys are circumcised and become young morans, while the existing morans graduate to junior elders.
The Maasai diet features meat, and milk mixed with blood during rituals such as initiation and marriage. The use of herbs as medicine is still embedded in their day-to-day life. The Maasai are an attraction in Kenya because they have managed to hold on to their culture.

Lake Nakuru National Park. Lake Nakuru is one of the alkaline lakes of the Great Rift Valley, also known as the "Pink Lake" or Africa's bird paradise. The lake is ideally located in central Kenya within Lake Nakuru National Park. The park occupies an area of 188 km², while the lake occupies 62 km². The lake is famous for the millions of flamingos that flock to it, although flamingos are unpredictable birds and are not always found on the lake in such vast numbers. From a distance, for instance from the Baboon Cliff, the lake looks pink in colour because of the flamingos. The topography at Lake Nakuru comprises grasslands alternating with rocky cliffs and outcrops, acacia woodlands and a forest of Euphorbia trees. In the early 1960s, Tilapia grahami was introduced to the lake and flourished despite the lake's alkaline waters. There are two species of flamingos, the lesser flamingo and the greater flamingo; they feed on the algae that flourish in the warm alkaline waters of Lake Nakuru. It is believed that flamingos consume about 250,000 kg of algae per hectare of surface area per year, and this abundance of algae is what attracts millions of flamingos to Lake Nakuru. Apart from flamingos, other bird species include ducks, pelicans, cormorants, plovers, vultures, eagles and buzzards. Over 50 animal species are found in this park, including hippos, reedbucks, waterbucks, Rothschild's giraffes, baboons, black-and-white colobus monkeys, hyenas, cheetahs, leopards, lions, gazelles and impalas.

Lake Bogoria National Reserve. Lake Bogoria covers an area of 32 square kilometres (12 sq miles) and lies in a trough below the Ngendelel Escarpment, a sheer wall 600 metres (2,000 ft) high. The lake is geothermally active on the western shore, with geysers and hot springs. The geologist J.W. Gregory described the lake in 1892 as "the most beautiful view in Africa". Lake Bogoria was formerly known as Lake Hannington. The lake is dominated by countless hot springs that pour boiling water into its sterile waters; sterile, that is, except for the massive flocks of lesser flamingos that flood into Bogoria each year. Millions have been recorded at peak times of the year, and hundreds of thousands is common. The lake is alkaline, feeding blue-green algae, which in turn feed the flamingos. Raptors such as tawny eagles prey on the flamingos. The reserve has a herd of the relatively uncommon greater kudu. Other large mammals include buffalo, zebra, cheetah, baboon, warthog, caracal, spotted hyena, impala and dik-dik.

Lake Baringo National Reserve. Lake Baringo is one of the Rift Valley lakes, located north of Lake Nakuru. The lake has a surface area of about 130 square kilometres (50 sq miles) and an elevation of about 970 metres (3,180 ft). It is fed by several rivers (the El Molo, Perkerra and Ol Arabel) and has no obvious outlet; the waters are assumed to seep through lake sediments into the faulted volcanic bedrock. It is one of the two freshwater lakes in the Kenyan Rift Valley, the other being Lake Naivasha. The acacia woodland holds many bird species. The lake also provides an invaluable habitat for seven freshwater fish species, including the Nile tilapia, which is endemic to the lake.
Lake fishing is important to local social and economic development. Additionally, the area is a habitat for many animal species, including the hippopotamus and crocodile, as well as many other mammals, amphibians, reptiles and invertebrates. The lake used to boast a large goliath heron nesting colony, which has since disappeared, although goliath herons still breed around the lake. In addition to bird-watching walks and boat trips under the guidance of a professional ornithologist, the lake offers a range of activities including fishing, water sports (water-skiing, windsurfing), camel rides, day trips to the nearby Lake Bogoria National Reserve, and visits to a Njemps village, where you can get a taste of the local handicrafts and dances.

Samburu Game Reserve. Samburu Game Reserve is one of the most popular of the northern frontier fauna sanctuaries. The reserve occupies an area of 165 km². The driving distance from Nairobi is 350 km, and it is 65 km from Isiolo town to the Archer's Post gate. The park lies on the northern bank of the Uaso Nyiro river, which serves as the only source of water, without which the game in the reserve could not survive in this arid country. Samburu National Reserve was one of the two areas in which conservationists George and Joy Adamson raised Elsa the lioness, made famous in the best-selling book and award-winning film Born Free. The reserve is also home to Kamunyak, a lioness famous for adopting oryx calves. Samburu's topography is defined by the Uaso Nyiro river, which flows from the Kenyan highlands to the Lorian swamp, and by scattered acacia, riverine forest, thorn trees and grassland vegetation. The climate of Samburu is hot and dry with cool nights, with an average annual maximum temperature of 30°C (86°F) and an average annual minimum of 20°C (68°F). A wide variety of animal and bird life can be seen at Samburu National Reserve, with several species considered unique to the region's dry country: all three big cats (lion, cheetah and leopard) can be found here, as well as elephants, buffalo and hippos, olive baboons, gerenuks, warthogs, Grant's gazelles, Kirk's dik-diks, impalas, waterbucks, Grevy's zebras, beisa oryx, reticulated giraffes and over 350 bird species. Samburu is also Maasai land; the Maasai way of life described above for the Maasai Mara (the manyatta homestead, the duties of boys and girls, the initiation cycle of the morans, and the traditional diet and herbal medicine) is found here as well.

Lake Naivasha. Lake Naivasha is at the highest elevation of all the Kenyan Rift Valley lakes, standing at 1,890 metres (6,200 ft). The lake is fed by two rivers, the Malewa and the Gilgil, and has no visible outlet. It covers an area of 140 km², though this varies from year to year with rainfall. The lake has an average depth of 8 metres and is a freshwater lake. Much of the lake is surrounded by forests of the yellow-barked Acacia xanthophlea, known as the yellow fever tree.
These forests abound with bird life, and Naivasha is known as a world-class birding destination. The lake harbours schools of hippos and many bird species, the most common being the fish eagle. A wonderful way to spend the morning or afternoon is to take a boat ride.

Amboseli National Park. Amboseli National Park is located 140 kilometres south of Nairobi (about a 3.5-hour drive). The park occupies an area of 392 km². The ecosystem is made up of a seasonal lake called Lake Amboseli, from which the park derives its name, together with swamps, open plains, acacia woodland, rocky outcrops, thorn bushes and marshes. The landscape is dominated by the backdrop of the majestic snow cap of Mount Kilimanjaro, the highest mountain in Africa. The snow cap is visible when the clouds clear, mainly in the early morning and late evening, and this scene gives one the opportunity to capture wonderful memories on camera for friends and loved ones back home. Amboseli National Park is considered by many tourists to be Kenya's second best after the Maasai Mara Game Reserve, and it is the national park in Kenya with the biggest population of elephants. The ecosystem of Amboseli, though small compared to other parks, sustains a large number of bird species and game. Amboseli offers some of the best opportunities to see African animals because its vegetation is sparse owing to the long dry months. The park is considered especially well suited to writers, filmmakers and researchers. The Maasai are the local inhabitants of this area, which they call Empusel, meaning "dusty place". Other communities have moved to Amboseli in search of greener pastures. Besides game viewing and the ecstatic views of Mount Kilimanjaro, one can visit a local Maasai village to learn about their way of life and interact with the locals.

Tsavo West National Park. Tsavo West National Park covers an area of 9,065 km² and is located in south-eastern Kenya, 240 km from Nairobi or 250 km from Mombasa to the Mtito Andei gate. The park has magnificent scenery, Mzima Springs, rich and varied wildlife, a good road system, a rhino reserve, rock climbing at the Kichwa Tembo cliffs and guided walks along the Tsavo river. Tsavo West National Park has a variety of wildlife, such as the black rhino, Cape buffalo, elephant, leopard and Maasai lion. There are also smaller animals that can be spotted in the park, such as the bushbaby, hippo, hartebeest, lesser kudu and Maasai giraffe. Mzima Springs is a natural reservoir fed from beneath the Chyulu Hills to the north. The Chyulu range is composed of volcanic lava rock and ash, which is too porous to allow rivers to flow; instead, rainwater percolates through the rock and may spend 25 years underground before emerging 50 kilometres away at Mzima Springs. The springs produce 450 million litres of water a day, which serves the Tsavo ecosystem, with some of the water piped to the coastal region. In the springs you will find schools of hippos, crocodiles, fish and water birds such as cormorants. During the night the hippos come out to graze, and during the day they simply laze in the water, fully or half submerged. The Shetani lava flow, a black lava flow 8 km long, 1.6 km wide and 5 metres deep, is the remains of volcanic eruptions that were the subject of tales among local communities, who named the flow "shetani", meaning evil in Kiswahili, after it spewed from the earth some 240 years ago. Climbing the flow is not an easy task, as the surface is composed of uneven chunks of solid magma.
The cave, located near the centre of the outflow, has two large openings, with one ancient tree growing between them. Although the cave is only a few metres long, the exit is not accessible (although it can be seen), as the passage is too narrow. The Roaring Rocks offer magnificent panoramic views, usually seen only by the eagles and buzzards that fly around these cliffs, over the plain called Rhino Valley and the Ngulia Hills (1,821 m / 5,975 ft). The rocks, located near the rhino sanctuary, have long served as an observation point for the protection of the black rhinoceros and the fight against poaching. The eerie Roaring Rocks are named after the buzz of the cicadas that inhabit them and the howl of the wind that strikes the bare rock, producing a roaring sound.

Tsavo East National Park. Tsavo East National Park is one of the oldest and largest parks in Kenya, covering an area of 11,747 square kilometres. The park is located near the village of Voi in the Taita-Taveta District of Coast Province and is divided into east and west sections by the A109 road and a railway. The park borders the Chyulu Hills National Park and the Mkomazi Game Reserve in Tanzania. The climate in this area is warm and dry. A smart card is required to access the park, and the card can be topped up at the Voi gate. Attractions of Tsavo East National Park include the "red elephants"; the effect comes from the elephants wallowing and rolling in the Galana river and spraying themselves with the red soils of Tsavo. Another attraction is the beautiful Aruba dam, located on the north bank of the seasonal Voi river; it is visited by thousands of animals and is a great game-viewing point. The Mudanda Rock is a 1.6 km inselberg of stratified rock that acts as a water catchment supplying a natural dam below. It offers an excellent vantage point for the hundreds of elephants and other wildlife that come to drink during the dry season. The Yatta Plateau, the world's longest lava flow, runs along the western boundary of the park above the Athi river; its 290 km length was formed by lava from Ol Doinyo Sabuk mountain. Lugard Falls, named after Frederick Lugard, is actually a series of whitewater rapids on the Galana river. Tsavo East has vast amounts of diverse wildlife, including the famous Big Five: the lion, black rhino, Cape buffalo, elephant and leopard. The park is also home to a great variety of bird life, such as the black kite, crowned crane, lovebird and sacred ibis.

Meru National Park. Meru National Park is a Kenyan game park located east of Meru, 350 km from Nairobi. Covering an area of 870 km², it is one of the best-known wilderness parks of Kenya. It has abundant rainfall: 635–762 mm in the west of the park and 305–356 mm in the east. The rainfall results in tall grass and lush swamps. The park has a wide range of wildlife, including elephant, hippopotamus, lion, leopard, cheetah, black rhinoceros and some rare antelopes. Meru was one of the two areas in which conservationists George and Joy Adamson raised Elsa the lioness, made famous in the best-selling book and award-winning film Born Free. Elsa is buried in this park, and part of Joy's ashes was scattered on her gravesite.
The park became a target for poachers and was considered unsafe until the Kenya Wildlife Service (KWS), helped by the International Fund for Animal Welfare (IFAW), restored Meru National Park from near ruin to one of the most promising tourist destinations in Eastern Africa, solving the park's poaching problem. IFAW donated $1.25 million to this major restoration project, money that helped improve the basic infrastructure and provided essential equipment and vehicles for law-enforcement activities.
Systematic Map Protocol (Open Access). What evidence exists on the impacts of flow variability on fish and macroinvertebrates of temperate floodplain rivers in Central and Western Europe? A systematic map protocol. Environmental Evidence, volume 10, article number 10 (2021).

Flow variability is considered a fundamental factor affecting riverine biota. Any alterations to flow regime can influence freshwater organisms, and this process is expected to change with the projected climate change. This systematic map, therefore, aims at investigating the impacts of natural (resulting from climatic variability), anthropogenic (resulting from direct human pressure), and climate change-induced flow variability on fish and macroinvertebrates of temperate floodplain rivers in Central and Western Europe. Particular focus will be placed on the effects of extreme low and high discharges. These rare events are known to regulate population size and taxonomic diversity. All studies investigating the effects of flow variability on metrics concerning freshwater fish and macroinvertebrates will be considered in the map, particularly metrics such as abundance, density, diversity, growth, migration, recruitment, reproduction, survival, or their substitutes, such as biomonitoring indices. Relevant flow variability will reflect (1) anthropogenic causes: dams, reservoirs, hydroelectric facilities, locks, levees, water abstraction, water diversion, land-use changes, road culverts; (2) natural causes: floods, droughts, seasonal changes; or (3) climate change. Geographically, the map will cover the ecoregion of Central and Western Europe, focusing on its major habitat type, namely "temperate floodplain rivers and wetlands". The review will employ search engines and specialist websites, and cover primary and grey literature. No date, language, or document type restrictions will be applied in the search strategy. We expect the results to be primarily in English, although evidence (meeting all eligibility criteria) from other languages within the study area will also be included. We will also contact relevant stakeholders and announce an open call for additional information. Eligibility screening will be conducted at two levels: title and abstract, and full text. From eligible studies the following information will be extracted: the cause of flow variability, location, type of study, outcomes, etc. A searchable database containing extracted data will be developed and provided as supplementary material to the map report. The final narrative will describe the quantity and key characteristics of the available evidence, and identify knowledge gaps and knowledge clusters, i.e. subtopics sufficiently covered by existing studies allowing full systematic review and meta-analysis.

Background. River discharge has been called the "master variable that limits and resets river populations throughout entire drainage networks", or "the maestro that orchestrates pattern and process in rivers". The natural flow regime is crucial to maintaining river ecosystems in good health, whereas any departures from the natural state, referred to as 'flow alterations', result in overwhelmingly negative responses of these ecosystems [4, 5]. Therefore, this systematic map will deal with river flow as the principal abiotic component of riverine ecosystems. Flow regime may change due to numerous factors.
In this map, the relevant causes of change will include (1) anthropogenic causes: dams, reservoirs, hydroelectric facilities, locks, levees, water abstraction, water diversion, land-use changes, road culverts; (2) natural causes: floods, droughts, seasonal changes; or (3) climate change. These three groups do not only vary in the mechanism of change, but also imply further differences. For example, studies regarding the impact of climate change on riverine biota are predominantly model-based, whereas anthropogenic and natural impacts are more frequently assessed through field sampling. For the purpose of this research, "natural flow variability" refers to near-natural streams, relatively unimpacted by direct human pressure, as opposed to rivers with flow regimes changed by anthropogenic causes. Such an approach (tolerance for some degree of disturbance) is not uncommon in the European context [7, 8]. Climate change is indicated as a separate cause due to its growing, predominantly negative impact on riverine ecosystems. Climate change has already altered flow regimes in Europe and has affected stream macroinvertebrate communities. On the biotic level, fish and macroinvertebrates will be considered in this systematic map, two groups of organisms important to the ecology of running waters. Both are widely accepted determinants of river ecological status, especially in the context of the Water Framework Directive (WFD) of the European Union. Previous studies suggest that the body of evidence on flow-ecology relationships for these two biotic groups is the largest among riverine fauna [12, 13]. They are therefore frequently used in environmental flow assessments worldwide [4, 14]. Macroinvertebrates play a prominent role in river ecosystem structure, and are frequently used as indicators of water quality. Aquatic invertebrate fauna is highly diverse, and is of great importance for other riverine organisms, particularly fish. Fish communities also have several advantages as indicator organisms: (1) they are present in almost all lotic ecosystems; (2) because of their long lifespan, they reflect cumulative effects of long-term anthropogenic stressors; (3) because of their high mobility, they are particularly sensitive to disturbances in hydromorphology. It is useful to use both fish and macroinvertebrates, because they respond differently to stressors, operate on dissimilar scales, and represent unique trophic levels. Despite rapid proliferation of studies on flow-ecology relationships over the last two decades, few attempts have been made so far to synthesise the existing evidence in a systematic way [5, 6, 13, 18, 19, 20]. Only the two most recent, the systematic map by Rytwinski et al. and the forthcoming systematic review proposed by Harper et al., followed the guidelines established by the Collaboration for Environmental Evidence. Poff and Zimmerman and Webb et al. devoted their systematic reviews to ecological responses to flow alterations at the global scale. A different approach was adopted by McManamay et al., who dealt with both natural and human-altered flow effects on biota in the South-Atlantic region of the United States. Harper et al. proposed to investigate the impact of hydroelectric power production on fish populations in temperate regions worldwide. This systematic map will focus on both anthropogenic flow alterations and natural flow variability.
Geographically, the scope of the review will cover the well-established freshwater ecoregion of Western and Central Europe (an interactive map of Freshwater Ecoregions of the World is available at: http://www.feow.org/), focusing on its major habitat type, namely temperate floodplain rivers and wetlands. Figure 1 depicts the region’s territory. Temperate floodplain rivers in Europe have recently been gaining interest in the context of environmental flow studies and ongoing river and floodplain restoration efforts. Comparative studies with tropical or (semi-)arid floodplain rivers, however, are still scarce [24, 25]. Although floodplains are biodiversity hotspots, floodplain rivers are often under pressure from flow abstraction and storage. Existing environmental flow frameworks focus most of their attention on preserving instream flows, whereas the role of high flows in sustaining floodplain habitats has been largely neglected. Environmental flow studies are advocated to be performed at a regional scale. Because the temperate climate zone in Europe spans from northern Portugal to western Russia, this systematic map will be limited to one particular ecoregion, Western and Central Europe, as proposed by Abell et al., where temperate floodplain rivers are the dominant type of habitat. This well-established classification of freshwater ecoregions was based on characteristics such as the freshwater species present (primarily fish) and catchment delineation, as well as ecological and evolutionary processes, rather than relying solely on physiographic or taxonomic features. Therefore, environmental conditions “within a given ecoregion are more similar to each other than to those of surrounding ecoregions and together form a conservation unit”.

Despite some overlap with existing evidence syntheses, especially Piniewski et al. and Rytwinski et al., the proposed systematic map will be a valuable contribution to the subject. The work by Piniewski et al. focused on the responses of biota (fish and macroinvertebrates) to floods and droughts in Europe, but, unlike the proposed systematic map, it did not take into account non-extreme flow variability, and did not follow the CEE guidelines. Moreover, although the study was published in 2017, the searches were performed in 2014 and covered the years 1961–2011. The proposed systematic map, to be developed in 2021, will therefore capture new evidence. The systematic map by Rytwinski et al. on the impact of flow regime changes on fish in temperate regions included both anthropogenic and natural causes of flow alteration, accounting for a substantial overlap with the proposed map. However, despite the similarities in scope, the two systematic maps differ in several ways. The aforementioned study grouped all natural causes of flow variability (e.g. floods, droughts) together with climate change, without distinguishing between them. The currently proposed systematic map will differentiate between the various causes of flow regime change, facilitating the identification of knowledge clusters and gaps among these studies. Moreover, the proposed research question will differ in terms of eligible populations (e.g. the addition of macroinvertebrates), outcomes (e.g. the addition of biomonitoring indices), and study types (e.g. the addition of modelling studies). The preliminary search string tested in the Web of Science Core Collection returned approximately 42,000 results.
When tested for the same time frame (1900–2017) and biota (fish, with macroinvertebrate terms removed), the search string yielded 29,000 results, compared to around 10,500 found in the Web of Science Core Collection by Rytwinski et al. The addition of new inclusion criteria, together with the development of a more robust search string, should allow new evidence to be identified. Geographically, the work by Rytwinski et al. covered temperate regions globally, thereby entirely encompassing the territory proposed in the current protocol. The study, however, primarily yielded results from outside the proposed ecoregion: “the most studied were USA (50% of cases), Canada (11% of cases), and Australia (7% of cases)”. It also included only articles written in English, due to project resource restrictions. As indicated by the authors, the “untranslated articles would add strength to the accuracy of the map and any resultant syntheses.” The proposed map will incorporate results in other languages as well. Finally, the search for grey literature focused substantially on websites and institutions in English-speaking countries (mostly based in North America or Australia and Oceania), as indicated in the map protocol [6, 28], possibly omitting evidence from other regions. The proposed map will look for grey literature at institutions based in Europe and conducting research within the indicated ecoregion.

This map is part of the research project RIFFLES (‘The effect of RIver Flow variability and extremes on biota of temperate FLoodplain rivers under multiple pressurES’). The topic of the review was extensively discussed at the project kick-off meeting in January 2020, attended by a representative of the public administration (a specialist working with biomonitoring data at the Chief Inspectorate of Environmental Protection in Poland) and an advisory group consisting of researchers from Austria, Germany, and the UK. The stakeholders supported the idea of conducting an evidence synthesis. Through further expert discussions and preliminary scoping, it was concluded that a systematic map should be carried out first. The knowledge clusters indicated in the final report could contribute to the identification of areas qualifying for a systematic review. Relevant stakeholders (for details see section “Supplementary searches”) will be contacted and asked to contribute to the systematic map, particularly by submitting, or referring us to, any relevant literature, including grey literature. If needed, the stakeholders will also be asked for advice regarding other aspects of the systematic map, e.g. clarification of inclusion criteria. An open call for additional information, announced through a publicly available post on social media (as opposed to private, targeted communication with the aforementioned stakeholders), will also be held.

Objective of the Review

The primary question for this systematic map is as follows: what evidence exists on the impacts of natural and/or anthropogenic flow variability on fish and macroinvertebrates of temperate floodplain rivers in Central and Western Europe?

The proposed systematic map will provide an overview of the existing literature on the impacts of anthropogenic flow alterations and natural flow variability on fish and macroinvertebrates of temperate floodplain rivers in Central and Western Europe. The planned key outputs will be:

- A database of evidence containing detailed coding and extracted meta-data.
- An evidence atlas (a cartographic representation of the included evidence).
- A series of heat maps to systematically identify knowledge clusters (subtopics that are well represented by research studies) and knowledge gaps (subtopics that are un- or under-represented by research studies).
- A list of knowledge clusters suitable for full systematic review and meta-analysis.
- A list of knowledge gaps, i.e. areas requiring further primary research.

Definitions of the question components

Intervention/exposure(s)

Natural flow variability or anthropogenic flow alteration indicated as components of flow regime and/or cause of flow regime change; flow regime alteration induced by climate change.

Comparator(s)

No flow variability/flow alteration, or alternative levels of flow variability/flow alteration. Studies using designs with spatial or temporal trends (with no true comparator) will also be included, particularly in the case of research on natural flow variability.

Outcome(s)

Any component of fish and/or macroinvertebrate population(s) (from single species to community level), such as abundance, density, diversity, growth, migration, recruitment, reproduction, survival, or their substitutes.

Searching for articles

The literature will be collected through academic databases, a web-based search engine, and specialist websites, as well as direct stakeholder contact, an open call for relevant studies, and searching through the references of eligible evidence syntheses.

Search strings and search terms

The search string has been developed through trial searches conducted in Web of Science between March 2020 and March 2021. More than 30 search strings were tested. The search results were screened at title/abstract level to check their possible relevance. At least 20% of investigated records had to be assessed as possibly relevant in order to accept a given modification of the search string. Records from several of the tested search strings are included in the supplementary material (Additional file 2). The resulting search string consists of three components: population (subject and habitat), intervention/exposure, and exclusions (Table 1). All three are combined using the Boolean operators “AND”, “OR”, and “NOT”. The exclusions (“NOT” operators) have been added in order to remove clearly non-relevant articles (e.g. from medical or paleontological journals). The asterisk (*) represents any character, including no character (e.g. River* includes River, Rivers, Riverine, etc.). Phrases in quotation marks search for exact phrases. Preliminary testing showed that using multiple macroinvertebrate taxonomic names yields the most promising results. Where the tested search string is not accepted by a database or search engine, the search terms will be customised and presented in the final report as supplementary material. Search strings and search terms proposed for each database are included in Additional file 3. English search terms will be used to conduct all searches. A minimal illustration of how such a string can be assembled is sketched below.
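To make the three-component Boolean structure concrete, here is a minimal sketch (in Python) of how such a search string can be assembled programmatically. The term lists are illustrative placeholders only; the actual terms are those of Table 1 and Additional file 3. "TS=" is Web of Science's topic-search field tag.

```python
# Illustrative sketch of assembling the three-component Boolean search
# string described above. The term lists are placeholders, not the actual
# terms of Table 1 / Additional file 3. Asterisks are wildcards and
# quoted phrases are searched exactly, as in the protocol.

population = ["fish*", "macroinvertebrate*", "river*", "stream*", '"floodplain river*"']
exposure = ['"flow regime*"', '"flow variab*"', "flood*", "drought*", '"water abstraction"']
exclusions = ["paleo*", "fossil*", "cardiac"]  # strips clearly non-relevant fields

def or_block(terms):
    """Wrap a list of terms in parentheses, joined by OR."""
    return "(" + " OR ".join(terms) + ")"

query = (f"TS={or_block(population)} "
         f"AND TS={or_block(exposure)} "
         f"NOT TS={or_block(exclusions)}")
print(query)
```

A string built this way can be pasted into the database's advanced-search field, or adapted term by term where a given interface does not accept the full syntax, as the protocol anticipates.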
No date, language, or document type restrictions will be applied. We expect the results to be primarily in English, although records with an English title/abstract and a main text in another language from within the study area will also be investigated and presented in the systematic map results. Given the geographic location of the selected ecoregion (Fig. 1) and the analysis of the outcomes of two complete evidence syntheses [6, 13], we expect the great majority of non-English studies to be published in French, German, or Polish. Relevant evidence (meeting all eligibility criteria) in other languages from within the study area, however, will also be included.

Testing comprehensiveness of the search

A total of 54 articles of known high relevance to the systematic map were screened against preliminary search results to examine whether the proposed search string can successfully locate relevant evidence. All articles, except two which are not available in Web of Science, were successfully captured by the developed search string. The list of benchmark articles is included in the supplementary material (Additional file 4).

The following databases will be browsed:

- Digital Access to Research Theses (DART)
- Directory of Open Access Journals (DOAJ)
- Electronic Theses Online Service (eThOS)
- ProQuest Dissertations & Theses Global
- ProQuest Environmental Science Collection
- Web of Science BioSciences Information Service of Biological Abstracts (BIOSIS) Citation Index (including: Biological Abstracts, Reports, Reviews, and Meetings)
- Web of Science Core Collection (including: Science Citation Index Expanded, Conference Proceedings Citation Index - Science, and Emerging Sources Citation Index)
- Web of Science Zoological Record

Searches will be conducted using the subscriptions of the Warsaw University of Life Sciences. Searches will also be performed in Google Scholar, considered to be an effective tool for browsing grey literature. The search will be limited to the first 500 results. All relevant results will be extracted as citations using the Publish or Perish software, and subjected to duplicate removal and the screening workflow alongside records from other sources.

In an attempt to capture grey literature, English- and Polish-language websites dedicated to research projects and freshwater research will be browsed manually for relevant publications. Records from organisational websites will be screened separately before being combined with other results. List of websites to be searched:

- Centre for Ecology & Hydrology (https://www.ceh.ac.uk/)
- Centre for Environment, Fisheries and Aquaculture Science (https://www.cefas.co.uk/)
- CORDIS, database of European Commission research projects (https://cordis.europa.eu/projects/en)
- European Centre for River Restoration (https://www.ecrr.org/)
- European Federation for Freshwater Sciences (http://www.freshwatersciences.eu/effs/)
- European Regional Centre for Ecohydrology of the Polish Academy of Sciences (http://www.erce.unesco.lodz.pl/)
- Food and Agriculture Organization of the United Nations (http://www.fao.org/home/en/)
- Freshwater Information Platform (http://www.freshwaterplatform.eu/)
- International Centre for Ecohydraulics Research (http://www.icer.soton.ac.uk/)
- International Centre for Water Resources and Global Change (ICWRGC) (https://www.waterandchange.org/en/)
- LIFE Programme (https://ec.europa.eu/environment/life/project/Projects/index.cfm)
- Natural Resources Wales (https://naturalresources.wales/?lang=en)
- Research project REFORM (REstoring rivers FOR effective catchment Management) (https://www.reformrivers.eu/)
- Research project REFRESH (Adaptive Strategies to Mitigate the Impacts of Climate Change on European Freshwater Ecosystems) (http://www.refresh.ucl.ac.uk/)
- United Nations Environment Programme (https://www.unep.org/)

The reference sections of evidence syntheses (including both systematic and non-systematic literature reviews) included in the screening process will be hand-searched, and any articles not found previously will be added to the library.
Supplementary searches

In order to encompass as wide an array of studies as possible, we will organise an open call for relevant studies and directly contact relevant stakeholders. List of proposed stakeholders:

- Austrian Limnological Association (VOL)
- Centre for Ecology & Hydrology (CEH)
- Czech Limnological Society (CLS)
- Deutsche Gesellschaft für Limnologie e.V. (DGL)
- European Regional Centre for Ecohydrology of the Polish Academy of Sciences
- French Limnological Association (AFL)
- Freshwater Biological Association (FBA)
- International Centre for Ecohydraulics Research
- Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB)
- Polish Angling Association
- Polish Benthologic Society
- Polish Hydrobiological Society (PTH)
- Slovakian Limnological Society (SLS)
- Stanislaw Sakowicz Inland Fisheries Institute
- Swiss Society for Hydrology and Limnology (SGHL)

Assembling library of search results

A library of all search results will be uploaded to EPPI-Reviewer, a literature review management software. Any duplicated records will be removed prior to screening.

Article screening and study eligibility criteria

Screening will be conducted at two levels: title and abstract, and full text. All articles rendered possibly relevant through screening of both title and abstract will be retrieved at full text (those which could not be accessed, e.g. not found or no subscription, will be reported in the final report). The retrieved records will then be screened at full text, with each record assessed by one reviewer. Reviewers will not be assigned to articles they have authored at any stage of the screening. A consistency check will be performed with all reviewers (4 in total) through independent screening of a random subset of articles at the title and abstract level, prior to the actual screening. It will be conducted in batches of 100 papers. Upon completion of screening of each batch, the results will be cross-examined, with all discrepancies reconciled and eligibility criteria clarified when necessary. If the level of agreement is low (below 80%), a further consistency check will be performed on an additional subset of articles. A similar approach will be applied at the full-text screening stage: two reviewers will perform a consistency check using a random subset of 10% of all articles included at title and abstract, and a level of agreement above 80% will be required before the actual screening is conducted. Studies found by means other than academic database or search engine searches (e.g. through a reference from the stakeholders) will be added to the library after the consistency check is complete. A minimal sketch of such an agreement check is given below.
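The per-batch consistency check lends itself to a simple agreement calculation. The sketch below (Python; the reviewer decision lists are invented for illustration) computes percent agreement for a batch of 100 title/abstract records and applies the 80% threshold named in the protocol.

```python
# Sketch of the per-batch screening consistency check: two reviewers
# independently screen the same batch of 100 records, and their percent
# agreement is compared against the protocol's 80% threshold. The
# decision lists below are invented for illustration.

def percent_agreement(decisions_a, decisions_b):
    """Fraction of records on which both reviewers made the same decision."""
    assert len(decisions_a) == len(decisions_b)
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

reviewer_a = ["include"] * 55 + ["exclude"] * 45  # invented decisions
reviewer_b = ["include"] * 48 + ["exclude"] * 52

agreement = percent_agreement(reviewer_a, reviewer_b)
if agreement < 0.80:
    print(f"{agreement:.0%} agreement: screen a further subset of articles")
else:
    print(f"{agreement:.0%} agreement: reconcile discrepancies and proceed")
```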
Eligible population(s)

Fish and macroinvertebrates (both native and introduced) of temperate floodplain rivers in the ecoregion of Central and Western Europe. The ecoregion includes the following European countries: Austria, Belgium, Belarus, the Czech Republic, Denmark, France, Germany, Liechtenstein, Lithuania, Luxembourg, the Netherlands, Poland, Russia, Slovakia, Switzerland, Ukraine, and the United Kingdom, and the large river basins of the Ouse, Mersey, Trent, Thames, Severn, Loire, Seine, Rhône, Rhine, Ems, Weser, Elbe, Oder, Vistula, and Neman, draining into the North Sea, Baltic Sea, Norwegian Sea, Irish Sea, Atlantic Ocean, and Mediterranean Sea (Rhône). Only Belgium, the Netherlands, and Luxembourg are entirely within the area, and some countries have merely marginal coverage, e.g. Austria, Russia, Slovakia, Ukraine; more about the ecoregion: https://www.feow.org/ecoregions/details/404. Studies concerning lakes, wetlands, estuaries, or coastal areas will be excluded.

Eligible intervention(s)/exposure(s)

Natural flow variability or anthropogenic flow alteration indicated by cause and/or component(s) of flow regime, including magnitude, frequency, duration, timing (seasonality), rate of change, or their substitutes (e.g. water velocity or depth). Eligible causes of flow regime change include (1) anthropogenic causes: dams, reservoirs, hydroelectric facilities, locks, levees, water abstraction, water diversion, land-use changes, road culverts; (2) natural causes originating from climatic variability: floods, droughts, seasonal changes; or (3) climate change (a mixed natural and anthropogenic cause).

Eligible comparator(s)

(1) Similar sections of the same waterbody with no exposure/intervention; (2) separate but similar waterbodies with no exposure/intervention; (3) before exposure/intervention within the same waterbody; (4) an alternative level of exposure/intervention on the same or a different waterbody. Studies that evaluate temporal or spatial trends related to a change in flow regime will also be included, particularly in the case of research on natural flow variability. Studies which measure a single point in time, with no comparison to another site, will be excluded.

Eligible outcome(s)

Change in a component of fish and/or macroinvertebrate population(s), such as abundance, density, diversity, growth, migration, recruitment, reproduction, survival, or their substitutes, including biomonitoring indices such as the European Fish Index (EFI), the Lotic-invertebrate Index for Flow Evaluation (LIFE), or the Biological Monitoring Working Party (BMWP) score.

Eligible study type(s)

Field studies, mesocosm studies, modelling studies, and literature reviews. Laboratory studies and studies with no connection to the dominant habitat type of the ecoregion (temperate floodplain rivers) will be excluded.

A list of records excluded at the title/abstract as well as the full-text level will be provided, with reasons for exclusion. The provision of excluded literature at both levels will improve transparency and allow authors of similar reviews to investigate the results of the proposed search strategy in the future. A minimal sketch of how these eligibility criteria combine is given below.
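To show how the eligibility criteria above combine into a single record-level decision, the following sketch encodes them as a check. The field names and the example record are hypothetical; the written criteria, not this code, remain authoritative for actual screening decisions.

```python
# Hypothetical encoding of the eligibility criteria listed above.
# Field names are invented for illustration.

ELIGIBLE_STUDY_TYPES = {"field", "mesocosm", "modelling", "review"}
EXCLUDED_HABITATS = {"lake", "wetland", "estuary", "coastal"}

def is_eligible(record):
    """Return (decision, reason) for one full-text record."""
    if record["habitat"] in EXCLUDED_HABITATS:
        return False, "excluded habitat (lake/wetland/estuary/coastal)"
    if record["study_type"] not in ELIGIBLE_STUDY_TYPES:
        return False, "ineligible study type (e.g. laboratory)"
    if not record["flow_exposure"]:
        return False, "no flow variability or flow alteration exposure"
    if not (record["has_comparator"] or record["is_trend_study"]):
        return False, "single point in time with no comparison"
    if not record["biotic_outcome"]:
        return False, "no fish or macroinvertebrate outcome"
    return True, "eligible"

example = {"habitat": "river", "study_type": "field", "flow_exposure": True,
           "has_comparator": False, "is_trend_study": True, "biotic_outcome": True}
print(is_eligible(example))  # -> (True, 'eligible')
```

Note how a trend study with no true comparator still passes, mirroring the comparator rule stated above.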
Study validity assessment

The validity of evidence will not be assessed within this systematic map, but we will code study design elements that may provide some preliminary indication of internal validity.

Data coding strategy

Coding and meta-data will be extracted for all studies deemed relevant after the full-text screening stage. Meta-data extraction and coding will be performed by multiple reviewers (4 in total) after checking for consistency in coding. The coding will take place simultaneously (every record will be coded by one reviewer). The consistency check will be performed with two reviewers through independent coding of a subset of 10% of relevant studies. Any discrepancies between the reviewers will be reconciled before the actual coding takes place. If the level of agreement is low (below 80%), a further consistency check will be performed on an additional subset of articles. If resources allow, and if needed, we will contact authors by email to request missing information or clarification. The corresponding author will be contacted via the e-mail address provided in a given article. In the case of grey literature, the first author will be contacted, provided that their contact information can be found online.

The following coding categories will be extracted:

- Bibliographic information (e.g. title, author/s, year of publication, type of document, source, language);
- Study location (e.g. country, region, geographic location, waterbody name, type);
- Broad objectives of the study;
- Study design (e.g. type of study: field/mesocosm/modelling/evidence synthesis, length, number of site/s, sampling method);
- Intervention/exposure type (e.g. changes in flow magnitude, frequency, duration, timing);
- Cause of intervention (natural/anthropogenic/climate change);
- Comparator type (e.g. Before/After, Control/Impact);
- Outcome type (e.g. changes in growth, abundance);
- Taxon (e.g. fish/invertebrate, taxon name/s, taxonomic level, number of taxa);
- Ecological response (e.g. direction and magnitude of change in the studied biota population).

Study mapping and presentation

The extracted meta-data will be described narratively in the final report. The articles will be grouped and presented according to their distinct characteristics; e.g. modelling studies and evidence syntheses will be presented separately due to their non-comparability with other results. Similarly, relevant grey literature in languages other than English will be presented separately from other studies. The identified evidence will also be provided in the form of an interactive open-access database containing detailed coding and extracted meta-data. The contents of the database will be visualised geographically in an evidence atlas.

Knowledge gap and cluster identification strategy

Heat maps will be used to identify knowledge clusters and knowledge gaps by grouping the studies by coded categories and then investigating which topics are covered by enough evidence to warrant a systematic review. The identification will be concluded by a methodology expert from outside the reviewers’ team in order to avoid internal bias. A minimal sketch of such a cross-tabulation is given below.
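As an illustration of the heat-map idea, the sketch below cross-tabulates two coding categories (cause of flow regime change against outcome type) and prints the study count per cell; well-populated cells would indicate knowledge clusters and empty cells knowledge gaps. The coded records are invented, and a real implementation would draw on the extracted evidence database.

```python
# Sketch of a knowledge cluster/gap cross-tabulation over two coding
# categories. The (cause, outcome) pairs are invented; in practice they
# would come from the coded evidence database.

from collections import Counter

coded_studies = [
    ("anthropogenic", "abundance"), ("anthropogenic", "abundance"),
    ("anthropogenic", "migration"), ("natural", "abundance"),
    ("natural", "diversity"), ("climate change", "abundance"),
]

counts = Counter(coded_studies)
causes = sorted({c for c, _ in coded_studies})
outcomes = sorted({o for _, o in coded_studies})

print(f"{'':<16}" + "".join(f"{o:>12}" for o in outcomes))
for cause in causes:
    row = "".join(f"{counts[(cause, o)]:>12}" for o in outcomes)
    print(f"{cause:<16}" + row)
```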
Availability of data and materials

Datasets supporting the formulation of this article are included within the article and its additional files.

References

1. Power ME, Sun A, Parker G, Dietrich WE, Wootton JT. Hydraulic food-chain models. Bioscience. 1995;45(3):159–67.
2. Walker KF, Sheldon F, Puckridge JT. A perspective on dryland river ecosystems. Regul Rivers Res Manage. 1995;11(1):85–104.
3. Poff NL, Allan JD, Bain MB, Karr JR, Prestegaard KL, Richter BD, et al. The natural flow regime. Bioscience. 1997;47(11):769–84.
4. Bunn SE, Arthington AH. Basic principles and ecological consequences of altered flow regimes for aquatic biodiversity. Environ Manage. 2002;30(4):492–507.
5. Poff NL, Zimmerman JKH. Ecological responses to altered flow regimes: a literature review to inform the science and management of environmental flows. Freshw Biol. 2010;55(1):194–205.
6. Rytwinski T, Harper M, Taylor JJ, Bennett JR, Donaldson LA, Smokorowski KE, et al. What are the effects of flow-regime changes on fish productivity in temperate regions? A systematic map. Environ Evid. 2020;9(1):7.
7. Murphy C, Harrigan S, Hall J, Wilby RL. Climate-driven trends in mean and high flows from a network of reference stations in Ireland. Hydrol Sci J. 2013;58(4):755–72.
8. Stahl K, Hisdal H, Hannaford J, Tallaksen LM, Van Lanen H, Eric S, et al. Streamflow trends in Europe: evidence from a dataset of near-natural catchments. Hydrol Earth Syst Sci. 2010;14:2367–82.
9. Tonkin J, Poff N, Bond N, Horne A, Merritt D, Reynolds L, et al. Prepare river ecosystems for an uncertain future. Nature. 2019;570:301–3.
10. Blöschl G, Hall J, Parajka J, Perdigão RAP, Merz B, Arheimer B, et al. Changing climate shifts timing of European floods. Science. 2017;357(6351):588.
11. Jourdan J, O’Hara RB, Bottarin R, Huttunen K-L, Kuemmerlen M, Monteith D, et al. Effects of changing climate on European stream invertebrate communities: a long-term data analysis. Sci Total Environ. 2018;621:588–99.
12. Lake PS. Drought and aquatic ecosystems: effects and responses. Chichester, UK: Wiley-Blackwell; 2011.
13. Piniewski M, Prudhomme C, Acreman MC, Tylec L, Oglęcki P, Okruszko T. Responses of fish and invertebrates to floods and droughts in Europe. Ecohydrology. 2017;10(1):e1793.
14. Phelan J, Cuffney T, Patterson L, Eddy M, Dykes R, Pearsall S, et al. Fish and invertebrate flow-biology relationships to support the determination of ecological flows for North Carolina. JAWRA J Am Water Resour Assoc. 2017;53(1):42–55.
15. Metcalfe JL. Biological water quality assessment of running waters based on macroinvertebrate communities: history and present status in Europe. Environ Pollut. 1989;60(1):101–39.
16. Adamczyk M, Prus P, Buras P, Wiśniewolski W, Ligięza J, Szlakowski J, et al. Development of a new tool for fish-based river ecological status assessment in Poland (EFI+IBI_PL). Acta Ichthyol Piscat. 2017;47(2):173–84.
17. Flinders CA, Horwitz RJ, Belton T. Relationship of fish and macroinvertebrate communities in the mid-Atlantic uplands: implications for integrated assessments. Ecol Ind. 2008;8(5):588–98.
18. Harper M, Rytwinski T, Taylor JJ, Bennett JR, Smokorowski KE, Cooke SJ. How do changes in flow magnitude due to hydroelectric power production affect fish abundance and diversity in temperate regions? A systematic review protocol. Environ Evid. 2020;9(1):14.
19. McManamay RA, Orth DJ, Kauffman J, Davis MM. A database and meta-analysis of ecological responses to stream flow in the South Atlantic Region. Southeast Nat. 2013;12(m5):1–36.
20. Webb AJ, Miller KA, King EL, Little SC, Stewardson MJ, Zimmerman JKH, et al. Squeezing the most out of existing literature: a systematic re-analysis of published evidence on ecological responses to altered flows. Freshw Biol. 2013;58(12):2439–51.
21. Collaboration for Environmental Evidence. Guidelines and standards for evidence synthesis in environmental management. 5th ed. 2018.
22. Abell R, Thieme ML, Revenga C, Bryer M, Kottelat M, Bogutskaya N, et al. Freshwater ecoregions of the world: a new map of biogeographic units for freshwater biodiversity conservation. Bioscience. 2008;58(5):403–14.
23. Hayes DS, Brändle JM, Seliger C, Zeiringer B, Ferreira T, Schmutz S. Advancing towards functional environmental flows for temperate floodplain rivers. Sci Total Environ. 2018;633:1089–104.
24. Hughes FMR, Rood SB. Allocation of river flows for restoration of floodplain forest ecosystems: a review of approaches and their applicability in Europe. Environ Manage. 2003;32(1):12–33.
25. Tonkin JD, Jähnig SC, Haase P. The rise of riverine flow-ecology and environmental flow research. Environ Proces. 2014;1(3):323–30.
26. Acreman M, Aldrick J, Binnie C, Black A, Cowx I, Dawson H, et al. Environmental flows from dams: the water framework directive. Proc Instit Civil Eng Eng Sustain. 2009;162(1):13–22.
27. Poff NL, Richter BD, Arthington AH, Bunn SE, Naiman RJ, Kendy E, et al. The ecological limits of hydrologic alteration (ELOHA): a new framework for developing regional environmental flow standards. Freshw Biol. 2010;55(1):147–70.
28. Rytwinski T, Taylor JJ, Bennett JR, Smokorowski KE, Cooke SJ. What are the impacts of flow regime changes on fish productivity in temperate regions? A systematic map protocol. Environ Evid. 2017;6(1):13.
29. Haddaway N, Collins A, Coughlin D, Kirk S. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS ONE. 2015;10:e0138237.
30. Harzing AW. Publish or Perish. 2007. https://harzing.com/resources/publish-or-perish.
Acknowledgements

The authors acknowledge research funding from the National Science Centre, Poland (more details below). The authors thank the Editor and two anonymous Reviewers whose suggestions helped improve and clarify the manuscript. We would also like to thank Neal Haddaway for sharing his knowledge and expertise on evidence syntheses.

Funding

This systematic map is a part of the research project RIFFLES (‘The effect of RIver Flow variability and extremes on biota of temperate FLoodplain rivers under multiple pressurES’) funded by the National Science Centre (NCN), Poland (Grant 2018/31/D/ST10/03817, https://projekty.ncn.gov.pl/index.php?projekt_id=422476). The funding body did not participate in the design of the study, the collection, analysis, or interpretation of data, or the writing of the manuscript.

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests. Reviewers who have authored articles to be considered within the review will be prevented from unduly influencing inclusion decisions, for example by delegating tasks appropriately.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1. ROSES form for systematic map protocols.
Additional file 2. Presentation of several changes tested during search string development.
Additional file 3. Search strings/search terms proposed for all of the databases indicated in the search strategy.
Additional file 4. A list of benchmark studies with bibliographic details.

About this article

Cite this article: Keller, A., Chattopadhyay, S. & Piniewski, M. What evidence exists on the impacts of flow variability on fish and macroinvertebrates of temperate floodplain rivers in Central and Western Europe? A systematic map protocol. Environ Evid 10, 10 (2021). https://doi.org/10.1186/s13750-021-00225-z

Keywords: Climate change; Environmental flow; Freshwater ecology; Riverine biota
Death and Hades to Give Up the Dead

“As regards the resurrection of the dead, did you not read what was spoken to you by God?”—Matt. 22:31

1. What ancient religious book alone teaches resurrection?

THE resurrection of the human dead during the reign of God’s kingdom—no ancient sacred book of religion teaches this but the Holy Bible. The Bible is the sacred book that was written, the first part of it mostly in Hebrew and the second part of it in the common Greek of nineteen hundred years ago. However, the first part of it was translated from Hebrew into Greek before ever the second part of the Holy Bible was written in common Greek. The Greek translation of the Hebrew Scriptures has been called the Greek Septuagint and is symbolized by the sign LXX, meaning “Seventy.” Nineteen centuries ago Greek was an international language, and thus back there a person who knew Greek could read the whole Bible. In our day the Holy Bible has been translated, as a whole or in part, into upward of 1,202 languages, most likely into your own native language. This sacred Book has the greatest circulation of all books and in the most languages. It stands out alone in its teaching of the raising of dead mankind to life in a righteous order of things during the reign of the kingdom of Almighty God.

2. Why may some scoff at the idea of a resurrection?

2 You, the reader, may see no need for the human dead to be resurrected, because your religion has taught you such a thing as the “immortality of the soul.” So, because the departed ones are dead only as to the human body but are alive in some invisible realm as souls or have transmigrated to another earthly body, you see no need for a resurrection. Some readers may therefore scoff at the idea of a resurrection from the dead. That is quite natural. But the Bible teaching of the resurrection of the dead has such a solid basis on which to rest that the better thing to do is to investigate honestly rather than scoff. We do not want to be like those Grecian philosophers who believed in the immortality of the human soul and to whom the Christian apostle Paul preached the resurrection of Jesus Christ from the dead.—Acts 2:31, 32; Matt. 26:38; Isa. 53:12; Ezek. 18:4, 20.

3. Why do we not want to be like those Greek philosophers?

3 Of those Greeks the historical record says: “Some would say: ‘What is it this chatterer would like to tell?’ Others: ‘He seems to be a publisher of foreign deities.’ This was because he was declaring the good news of Jesus and the resurrection.” And after the apostle Paul told the Supreme Court of judges in Athens, Greece, that God had raised up his Son Jesus Christ from the dead in order to act as judge of all the inhabited earth, the record says, “well, when they heard of a resurrection of the dead, some began to mock.” (Acts 17:18, 31, 32) Those Greeks believed in the immortality of the human soul and that there were therefore no dead. Hence they could not accept the teaching that human souls are dead and need to be resurrected in order to live again.

4. How were ancient Greeks like Babylonians in their belief regarding the human dead?

4 The ancient Greeks believed that the human dead were living as shades in an underground, unseen place, over which the god named Haʹdes ruled as king. Afterward this underground place of departed souls over which he ruled was also called by his name, Haʹdes.
The name also came to be applied to the grave.* Those ancient Greeks were like the Babylonians of Asia, who called the god of their underground realm of departed souls Nergal and who spoke of this invisible realm of the dead as “the land of no return.” Those ancient Babylonians therefore did not believe in a resurrection of the human dead.*

5. How did action by the Reform Jews point up the fact that belief in human immortality and the resurrection teaching run contrary to each other?

5 This Babylonian belief in the immortality of the human soul runs contrary to the Bible’s teaching of the resurrection of the human dead. This fact can be seen in the action taken by the Reform Jews of our twentieth century. On this The Jewish Encyclopedia, under the subject “Resurrection,” says: “In modern times the belief in resurrection has been greatly shaken by natural philosophy, and the question has been raised by the Reform rabbis and in rabbinical conferences . . . whether the old liturgical formulas expressing the belief in resurrection should not be so changed as to give clear expression to the hope of immortality of the soul instead. This was done in all the American Reform prayer books. At the rabbinical conference held in Philadelphia it was expressly declared that the belief in resurrection of the body has no foundation in Judaism, and that the belief in the immortality of the soul should take its place in the liturgy [the collection of formularies for public worship].”—Vol. 10, page 385, ¶2 (1905).

6. In what plain Bible statement do such Reform Jews not believe?

7. What did Paul, a one-time Pharisee, say to Felix about resurrection?

7 The Christian apostle Paul, who lived in the earthly days of Jesus Christ, miraculously saw him after his resurrection from the dead. By birth Paul was a Jew. He had been one of the Jewish Pharisees, who believed in the resurrection of the dead. When he stood before the Roman judge Felix, who likely believed in Pluto, the Roman god of the underworld of dead souls, Paul said, with reference to the Jewish Pharisees: “I believe all the things set forth in the Law [of Moses] and written in the Prophets; and I have hope toward God, which hope these men themselves also entertain, that there is going to be a resurrection of both the righteous and the unrighteous. . . . this one utterance which I cried out while standing among them, ‘Over the resurrection of the dead I am today being judged before you!’”—Acts 24:14-21.

8. (a) How many witnesses were there of Christ’s resurrection? (b) What does his resurrection mean for dead mankind?

8 In his writings about the resurrection the apostle Paul told of more than five hundred eyewitnesses, including himself, who saw the resurrected Jesus Christ after he had been put to death publicly on a torture stake and had been buried in a sealed tomb, under guard by soldiers to prevent any theft of the dead body inside. (1 Cor. 15:3-9; Matt. 27:57 to 28:4) In proving that he and his fellow believers were not false witnesses of the resurrection of Jesus Christ, Paul pointed out what Christ’s resurrection meant for dead mankind by saying: “Now Christ has been raised up from the dead, the first fruits of those who have fallen asleep in death. For since death is through a man [Adam, the first man], resurrection of the dead is also through a man.” (1 Cor. 15:20, 21) The resurrection of Jesus Christ opened up the way for others, namely, dead mankind, to be resurrected.

9, 10. (a) Why could Jesus face a martyr’s death courageously?
(b) How do Jesus’ words make sure there will be a resurrection?

9 In the year 33 of our Common Era Jesus Christ courageously faced a martyr’s death, because he had confidence that Almighty God his heavenly Father would raise him from the dead on the third day. Thereby God would permit him to return to heaven and present the value of his human sacrifice to God personally. On earth Jesus Christ had much to say about resurrection. Once, when he talked about bringing dead humans to final judgment by means of a resurrection, he said: “Just as the Father has life in himself, so he has granted also to the Son to have life in himself. And he has given him authority to do judging, because Son of man he is. Do not marvel at this, because the hour is coming in which all those in the memorial tombs will hear his voice and come out, those who did good things to a resurrection of life, those who practiced vile things to a resurrection of judgment. I cannot do a single thing of my own initiative; just as I hear, I judge; and the judgment that I render is righteous, because I seek, not my own will, but the will of him that sent me.”—John 5:26-30.*

10 We can be sure, then, that there will be a resurrection.

A PERSONAL QUESTION

11. What very personal question, therefore, suggests itself?

11 A very personal question, therefore, suggests itself to us. It is this: If, sometime in the future, you and I die and get buried in a tomb or grave, will a resurrection, a return to life from the sleep of death, be granted to us according to God’s will? If so, how may we know? Who will be resurrected with us? Will any not be resurrected from the dead? This very question has caught the interest of many Jews, even though they hold to only the Hebrew Scriptures, the first part of what we call the Holy Bible.

12. What kind of picture of resurrection day does the Bible give?

12 Some religious clergymen of Christendom have attempted to picture what the resurrection day will be like to a person who is still alive on earth at that time. They have imagined some wild and really gruesome things about it, such as widely scattered parts of human corpses whizzing through the air to join the other members to which they belonged in one body at death. The Bible presents no such frightful picture of the resurrection time, not even in the prophet Ezekiel’s vision of the valley of dry bones that Almighty God’s power brought together and clothed with living flesh again. (Ezek. 37:1-10) Far differently, by means of suitable symbols, the last book of the Bible gives us a picture of the earthly resurrection after the wicked powers in heaven and on earth have been chased away. This hope-inspiring vision enables us to determine who will take part in the earthly resurrection.

13. In Revelation 20:11-15, what vision was given to John?

13 The vision, as seen by the Christian apostle John, is described in Revelation 20:11-15 in these words: “And I saw a great white throne and the one seated on it. From before him the earth and the heaven fled away, and no place was found for them. And I saw the dead, the great and the small, standing before the throne, and scrolls were opened. But another scroll was opened; it is the scroll of life. And the dead were judged out of those things written in the scrolls according to their deeds. And the sea gave up those dead in it, and death and Haʹdes gave up those dead in them, and they were judged individually according to their deeds. And death and Haʹdes were hurled into the lake of fire.
This means the second death, the lake of fire. Furthermore, whoever was not found written in the book of life was hurled into the lake of fire.”—See also Revelation 21:8.

14 Not all persons dying have died on the dry land and been buried in a grave in the bosom of the earth. (Gen. 1:9, 10) Countless numbers have died at sea in shipwreck and storm and battle and have been buried at sea or their bodies have never been recovered to be given a burial on dry land. (1 Ki. 22:48, 49; 2 Chron. 20:36, 37; Ps. 48:7; Dan. 11:40) Therefore, in describing the day of the resurrection of mankind, Revelation 20:13 says that not only “death and Haʹdes gave up those dead in them” but also “the sea gave up those dead in it.” We can appreciate that this verse, Revelation 20:13, is a more inclusive statement of the resurrection than that of Jesus when he said: “All those in the memorial tombs will hear his voice and come out, . . . to a resurrection.”—John 5:28, 29.

15. Why is the “sea” not hurled into the “lake of fire” also?

15 One other point to notice is this: Whatever Haʹdes is here understood to be, those who are dead in it are not in the same place as those who are dead in the sea, for the dead in the sea are in a watery place. The sea will never cease, in a literal sense, to exist on the earth. That is why Revelation 20:14 says: “Death and Haʹdes were hurled into the lake of fire. This means the second death, the lake of fire.” If the literal sea were hurled into the “lake of fire” it would put out the lake of fire, and the lake of fire would cease to exist, rather than the sea cease to exist. However, the Bible is definite that the “second death” that is symbolized by the “lake of fire” will never cease to exist. Symbolically, that “lake of fire” will burn forever.

16. Is the Bible Haʹdes like that imagined by the Greeks? Why?

16 What, then, is this Haʹdes that is cast into the symbolic “lake of fire”? What is the condition of those in such Haʹdes? One thing is sure, the Haʹdes described in the Holy Bible is not the Haʹdes imagined by the ancient non-Christian Greeks and described in their mythologies. There was no general resurrection from the mythological Haʹdes of the pagan Greeks.

17. In what twofold sense did ancient Greeks speak of resurrection?

17 Under the subheading “B. Resurrection in the Greek World,” the Theological Dictionary of the New Testament, Volume 1, page 369, says: “Apart from transmigration of souls, . . . the Greek speaks of resurrection in a twofold sense. a. Resurrection is impossible. . . . b. Resurrection may take place as an isolated miracle. . . . The raising of an apparently dead girl in Rome by Apollonius of Tyana is recounted. . . , 150,000 denarii being contributed as additional endowment. . . . The idea of a general resurrection at the end of the age is alien to the Greeks. Indeed, it is perhaps attacked on a Phrygian inscription: [Indeed are the wretched ones all looking to a resurrection?]. In Acts 17:18 anástasis [resurrection] seems to be misunderstood by the hearers as a proper name (compare Ac 17:31 and following).”* Of course, the “apparently dead girl” whom Apollonius raised died again.

18. Unlike the heathen, what hope did God’s people have?

18 In its article on “Haʹdes” the Cyclopædia by M’Clintock and Strong, Volume 4, 1891 edition, makes this admission, on page 9, last paragraph: “To the believing Hebrew alone the sojourn in sheol appeared that only of a temporary and intermediate existence.
The heathen had no prospect beyond its shadowy realms; its bars for him were eternal: and the idea of a resurrection was utterly strange alike to his religion and his philosophy. But it was in connection with the prospect of a resurrection from the dead that all hope formed itself in the breasts of the true people of God. As this alone could effect the reversion of the evil brought in by sin and really destroy the destroyer, so nothing less was announced in that first promise which gave assurance of the crushing of the tempter.”—See Genesis 3:15; Romans 16:20.

19. So how does the Bible Haʹdes differ from that of the Greeks?

19 Thus in the Bible Haʹdes is different from that of the pagan Greeks, in that the Bible repeatedly states that there will be a resurrection from Haʹdes of those who are there. It is not such a place as the ancient Babylonians talked of, that is to say, “the land of no return.” But where, then, is this Biblical Haʹdes, and what is the condition of those in it? Is it a place of “intermediate existence” for the dead? Only if we get the Bible’s own answers shall we get the correct answers, the true answers on which our faith may rest unshakably. What does the Bible say?

20. What must the condition of those in Haʹdes be?

20 In the oldest known handwritten copies of the Christian Greek Scriptures the word Haʹdes occurs ten times.* Are people alive in the Biblical Haʹdes? Honest Bible readers will say that they are lifeless, inasmuch as Revelation 20:13 says that those whom “death and Haʹdes gave up” were “those dead in them.” Certainly the dead in death are not alive. Likewise those dead in Haʹdes could not be alive either. However, the religionists of Christendom are infected with pagan Greek mythology and they will say: “Not so. The dead in Haʹdes are not really dead. Only their body is dead, but their soul is alive because it is immortal. For them death means only that they are separated from God. In other respects, those immortal souls in Haʹdes are really alive.” But is this argument of the religionists of Christendom right? Is it what the Bible teaches regarding the condition of those who are dead in Haʹdes and who will have a resurrection from Haʹdes? Search the Bible.

21. (a) Is Haʹdes in heaven? (b) Does the Christian congregation go to Haʹdes?

21 In the Christian Greek Scriptures the first use of the word Haʹdes is in Matthew 11:23. There the Lord Jesus Christ says: “And you, Capernaum, will you perhaps be exalted to heaven? Down to Haʹdes you will come.” (Also in Luke 10:15) For this reason Haʹdes cannot be in heaven. The next use of the word Haʹdes is in Matthew 16:18, in which Jesus says to his apostle Peter: “Also, I say to you, You are Peter, and on this rock-mass I will build my congregation, and the gates of Haʹdes will not overpower it.” This saying of Jesus means that the congregation of his followers would die and enter in through the gates into Haʹdes. They would thus get to be among those who are dead in Haʹdes.

22. Why would Haʹdes’ gates not overpower Jesus’ congregation?

22 However, why would the “gates of Haʹdes” not overpower Jesus’ congregation? Why would those “gates” not remain forever closed upon Jesus’ followers and thus make Haʹdes a “land of no return”? It was because of what Jesus later said to the aged apostle John in the last book of the Holy Bible, in Revelation 1:17, 18. In those verses the resurrected Jesus Christ in heaven said to John: “Do not be fearful. I am the First and the Last, and the living one; and I became dead, but, look!
I am living forever and ever, and I have the keys of death and of Haʹdes.” Since he has the keys of death and of Haʹdes, the heavenly Jesus Christ can unlock those “gates of Haʹdes” and let his dead congregation out, in this way restoring them to life.

23. Jesus promised to overpower Haʹdes thus at what time?

23 Because of having this in mind, Jesus said that those gates of Haʹdes would not overpower his congregation. Rather, Jesus would overpower Haʹdes and free his congregation from Haʹdes. Jesus made a direct promise of this when he said, in John 6:39, 40: “This is the will of him that sent me, that I should lose nothing out of all that he has given me but that I should resurrect it at the last day. For this is the will of my Father, that everyone that beholds the Son and exercises faith in him should have everlasting life, and I will resurrect him at the last day.”

24. (a) In the Bible what word is associated with Haʹdes? (b) At death where did Jesus go, according to Psalm 16:10, 11?

24 It is interesting to note that in the ten cases where Haʹdes occurs in the Christian Greek Scriptures the word “death” occurs with it. (Rev. 1:18; 6:8; 20:13, 14) So death, not life, is associated with Haʹdes. In this connection, then, we ask the question, When Jesus Christ himself died and was buried in the memorial tomb of his friend, Joseph of Arimathea, that same day, where did Jesus himself go? (Matt. 27:57-61) A person upon whom we can rely to tell us the truth about this is Jesus’ own close apostle, Simon Peter. On the festival day of Pentecost at Jerusalem, fifty-one days after the death and burial of Jesus, God’s holy spirit was poured down upon Peter and other disciples of Jesus. So under inspiration of God’s spirit Peter spoke and quoted Psalm 16:10, 11, saying: “Because you will not leave my soul in Haʹdes, neither will you allow your loyal one to see corruption. You have made life’s ways known to me, you will fill me with good cheer with your face.” Those words quoted by Peter were written by King David, who wrote as an inspired prophet of God.

25, 26. On Pentecost what did Peter say regarding David and Jesus?

25 Then the apostle Peter, filled with God’s spirit, went on to say to the thousands of Jews observing the festival of Pentecost:

26 “Brothers, it is allowable to speak with freeness of speech to you concerning the family head David, that he both deceased and was buried and his tomb is among us to this day. Therefore, because he was a prophet and knew that God had sworn to him with an oath that he would seat one from the fruitage of his loins upon his throne, he saw beforehand and spoke concerning the resurrection of the Christ, that neither was he forsaken in Haʹdes nor did his flesh see corruption. This Jesus God resurrected, of which fact we are all witnesses. Therefore because he was exalted to the right hand of God and received the promised holy spirit from the Father, he has poured out this which you see and hear. Actually David did not ascend to the heavens, but he himself says, ‘Jehovah said to my Lord: “Sit at my right hand, until I place your enemies as a stool for your feet.”’ Therefore let all the house of Israel know for a certainty that God made him both Lord and Christ, this Jesus whom you impaled.”—Acts 2:27-36.

27. How did Jesus Christ become able to resurrect his congregation from Haʹdes?
27 In that speech the inspired Peter plainly says concerning the Lord Jesus Christ that he was not “forsaken in Haʹdes,” and that in fulfillment of Psalm 16:10 his soul was not left in Haʹdes. Thus when the dead Jesus was buried in the memorial tomb his soul went to Haʹdes. On the third day Almighty God resurrected him from Haʹdes, and then God committed to the resurrected Jesus the “keys of death and of Haʹdes,” so that Jesus could say, in Revelation 1:18: “I became dead, but, look! I am living forever and ever, and I have the keys of death and of Haʹdes.” Because of his possessing those keys, he is able to resurrect all those who are dead in Haʹdes, including his own congregation.*

28. (a) In what language did Peter on Pentecost quote Psalm 16:10? (b) How, then, shall we find out what and where the Bible Haʹdes is?

28 The apostle Peter, being a Hebrew or Jew, evidently spoke in the Hebrew of that day when he gave his speech on the day of Pentecost. So, when he made his quotation from Psalm sixteen, he quoted directly from the Hebrew text, not from the Greek Septuagint translation of the Hebrew text. That being so, Peter did not use the Greek word Haʹdes but used the original word in the Hebrew text, namely, Sheol. The fact of the matter is that the word Haʹdes is the Greek word used in the Septuagint Version in translating the Hebrew word Sheol.* In the inspired Hebrew Scriptures the word Sheol occurs sixty-five times in sixty-three different verses, including Psalm 16:10, which Peter quoted. In the Hebrew this verse reads: “For you will not leave my soul in Sheol. You will not allow your loyal one to see the pit.”* Consequently, if we find out what and where Sheol is and what the condition is of those in Sheol we shall at the same time find out what and where the Bible Haʹdes is and what the condition is of those in Haʹdes.

[Footnotes]

Haʹdes corresponded with the Romans’ god of the underworld named Pluto. As applied to this god of the dead, the name Haʹdes meant “The Invisible-making Deity,” from his power to render human mortals invisible after their death.—See M’Clintock and Strong’s Cyclopædia, Volume 4, page 9, under “Haʹdes”; also, Liddell and Scott’s A Greek-English Lexicon, reprint of 1948, Volume 1, page 21, column 2, under Ἅιδης or ᾅδης.

See the book “Babylon the Great Has Fallen!” God’s Kingdom Rules!, page 43, paragraphs 2, 3.

For a thorough discussion of these words of Jesus Christ, please see the Watchtower issue of December 1, 1964, under the titles “Out of the Tombs to a ‘Resurrection of Life’” and “Out of the Tombs to a ‘Resurrection of Judgment.’”

Edited in German by Gerhard Kittel, and translated into English by Geoffrey W. Bromley, edition of 1964. Printed in the Netherlands.

In the most ancient Greek manuscripts the word Haʹdes is not found in 1 Corinthians 15:55. Instead, the word thánatos, meaning “death,” is found there. For an explanation of Haʹdes in Luke 16:23 see the issue of The Watchtower under date of February 1, 1965, page 75, paragraph 11 ff.

In the Greek Septuagint Version the word Haʹdes occurs seventy-three times.

NW; AS; Yg; RS; AT; but Ro reads “haʹdes” instead of “Sheol.”

[Picture on page 39] RESURRECTED JESUS APPEARS TO PAUL
The cannabis plant consists of a wide variety of chemicals and compounds. About 140 of these belong to a large class of aromatic organic hydrocarbons known as terpenes (pronounced tur-peens). You may have also heard people talk about terpenoids. The words terpene and terpenoid are increasingly used interchangeably, although these terms have different meanings. The main difference is that terpenes are hydrocarbons (meaning the only elements present are carbon and hydrogen), whereas terpenoids have been denatured by oxidation (drying and curing the flowers) or chemically modified.

Terpenes are synthesized in cannabis in secretory cells inside glandular trichomes, and production increases with light exposure. Terpenes are mostly found in high concentrations in unfertilized female cannabis flowers prior to senescence (the condition or process of deterioration with age). The essential oil is extracted from the plant material by steam distillation or vaporization. Many terpenes vaporize around the same temperature as THC (which boils at about 157°C), but some terpenes are more volatile than others. Terpenes also play an incredibly important role by providing the plant with natural protection from bacteria and fungus, insects and other environmental stresses.

It is well established that cannabis is capable of affecting the mind, emotions and behavior. The main psychotropic cannabinoid, delta-9-tetrahydrocannabinol (THC), has been intensely studied. However, many of the other cannabinoids, terpenoids and flavonoids found in medical marijuana that play a big role in boosting the therapeutic effect of cannabis remain understudied.

Terpenes are common constituents of flavorings and fragrances. Terpenes, unlike cannabinoids, are responsible for the aroma of cannabis. The FDA and other agencies have generally recognized terpenes as “safe.” Terpenes act on receptors and neurotransmitters; they are prone to combine with or dissolve in lipids or fats; they act as serotonin uptake inhibitors (similar to antidepressants like Prozac); they enhance norepinephrine activity (similar to tricyclic antidepressants like Elavil); they increase dopamine activity; and they augment GABA (the “downer” neurotransmitter that counters glutamate, the “upper”). However, more specific research is needed for improved accuracy in describing and predicting how terpenes in cannabis can be used medicinally to help treat specific ailments and health conditions.

The Carlini et al. study demonstrated that there may be potentiation (an enhancement of one substance’s effects by another) of the effects of THC by other substances present in cannabis. The double-blind study found that cannabis with levels of CBD and CBN equal to or higher than its THC content induced effects two to four times greater than expected from the THC content alone. The effects of smoking twice as much of a THC-only strain were no different from those of the placebo.

This suggestion was reinforced by a study done by Wilkinson et al. to determine whether there is any advantage in using cannabis extracts compared with using isolated THC. A standardized cannabis extract of THC, CBD and CBN (SCE), another with pure THC, and also one with a THC-free extract (CBD) were tested on a mouse model of multiple sclerosis (MS) and a rat brain slice model of epilepsy.
Scientists found that SCE inhibited spasticity in the MS model to a level comparable to THC alone, and caused a more rapid onset of muscle relaxation and a reduction in the time to maximum effect compared with THC alone. The CBD extract caused no inhibition of spasticity. In the epilepsy model, by contrast, SCE was a much more potent and again more rapidly acting anticonvulsant than isolated THC; CBD, however, did not inhibit seizures, nor did it modulate the activity of THC in this model. Therefore, as far as some actions of cannabis are concerned (e.g. anti-spasticity), THC is the active constituent, which may be modified by the presence of other components. For other effects (e.g. anticonvulsant properties), however, THC, although active, may not be necessary for the observed effect. Above all, these results demonstrate that not all of the therapeutic actions of the cannabis herb are due to its THC content. Dr. Ethan Russo further supports this theory with scientific evidence, demonstrating that non-cannabinoid plant components such as terpenes serve as inhibitors of THC's intoxicating effects, thereby increasing THC's therapeutic index. This "phytocannabinoid-terpenoid synergy," as Russo calls it, increases the potential of cannabis-based medicinal extracts to treat pain, inflammation, fungal and bacterial infections, depression, anxiety, addiction, epilepsy and even cancer.

What are Flavonoids?

Flavonoids are one of the largest nutrient families known to scientists, with over 6,000 identified members. About 20 of these compounds, including apigenin, quercetin, cannflavin A and cannflavin B (so far unique to cannabis), β-sitosterol, vitexin, isovitexin, kaempferol, luteolin and orientin, have been identified in the cannabis plant. Flavonoids are known for their antioxidant and anti-inflammatory health benefits, as well as for contributing vibrant color to many of the foods we eat (the blue in blueberries or the red in raspberries). Some flavonoids extracted from the cannabis plant have been tested for pharmacological effects. The clinical findings are promising, but further research is needed to fully understand what role flavonoids play in the overall therapeutic effects of cannabis treatment, especially how they interact with cannabinoids, whether by synergistically enhancing them or by reducing their effects.

The Terpene Wheel

Terpenes have been found to be essential building blocks of complex plant hormones and molecules, pigments, sterols and even cannabinoids. Most notably, terpenes are responsible for the pleasant, or not so pleasant, aromas of cannabis and the physiological effects associated with them. Patients will often ask to smell the cannabis when selecting their medicine, the idea being that certain aromas help identify different strains and their effects. As the Casano et al study shows, medical marijuana strains can vary greatly from one source to another, and even from one harvest to another. Strains with relatively high concentrations of specific terpenes are, however, easier to identify by smell than other strains. Most agree that varieties that smell of musk or clove deliver sedative, relaxing effects (a high level of the terpene myrcene); piney smells help promote mental alertness and memory retention (a high level of the terpene pinene); and lemony aromas are favored for a general uplift in mood and attitude (a high level of limonene).

Flavor wheel (source: Green House Seed Co.)
In a spectral analysis, Green House Seed Co. identified the terpenes in each of its strains and developed a "flavor wheel" to help medical marijuana patients choose a strain based on the effects desired. Although one of the primary purposes of the wheel was to market the company's own seeds, the concept and vocabulary it introduced are becoming an invaluable tool for medical marijuana patients, caregivers, and cultivators alike. Since then, several companies have developed their own terpene and weed wheels, albeit for the same reason (to market their own products or services), and that's OK. By mapping out terpene profiles, we are able to predict and even manipulate the effects and medicinal value of varieties, giving breeders endless opportunities for developing new, highly desired cannabis strains by basing breeding decisions on real analytical data (a minimal sketch of this kind of profile lookup appears at the end of this article). The more we are able to communicate using the same language, the easier it is for everyone to understand clearly what medicine they are getting.

Terpenes in Cannabis

Myrcene, specifically β-myrcene, is a monoterpene and the most common terpene produced by cannabis (in some varieties it makes up as much as 60% of the essential oil). Its aroma has been described as musky, earthy and herbal, akin to cloves. A high myrcene level in cannabis (usually above 0.5%) results in the well-known "couch-lock" effect of classic Indica strains. Myrcene is found in oil of hops, citrus fruits, bay leaves, eucalyptus, wild thyme, lemon grass and many other plants. Myrcene has some very special medicinal properties, including lowering resistance across the blood-brain barrier, allowing itself and many other chemicals to cross the barrier more easily and quickly. In the case of cannabinoids like THC, this means their effects set in more quickly. More uniquely still, myrcene has been shown to increase the maximum saturation level of the CB1 receptor, allowing for a greater maximum psychoactive effect. Myrcene is a potent analgesic, anti-inflammatory, antibiotic and antimutagenic; it blocks the mutagenic action of aflatoxin B1 and other pro-mutagenic carcinogens, apparently by inhibiting the cytochrome P450 enzymes that activate them. The Bonamin et al study found that β-myrcene acts as an inhibitor of gastric and duodenal ulcers, suggesting it may be helpful in preventing peptic ulcer disease. Its sedative and relaxing effects also make it well suited to the treatment of insomnia and pain. Since myrcene is naturally present in ripe mangoes, many claim that eating a fresh mango about 45 minutes before consuming cannabis results in a faster onset of psychoactivity and greater intensity; be sure to choose a mango that is ripe, otherwise the myrcene level will be too low to make a difference.

Pinene is a bicyclic monoterpenoid. As its name suggests, pinene has the distinctive aromas of pine and fir. There are two structural isomers of pinene found in nature: α-pinene and β-pinene. Both forms are important components of pine resin, and α-pinene is the most widely encountered terpenoid in nature. Pinene is found in many conifers as well as in non-coniferous plants, mostly in balsamic resin, pine woods and some citrus fruits. The two isomers of pinene constitute the main component of wood turpentine. Pinene is one of the principal monoterpenes, and it is important physiologically in both plants and animals.
It tends to react with other chemicals, forming a variety of other terpenes (like limonene) and other compounds. Pinene is used in medicine as an anti-inflammatory, expectorant, bronchodilator and local antiseptic. α-Pinene is a natural compound isolated from pine needle oil which has shown anti-cancer activity and has been used as an anti-cancer agent in Traditional Chinese Medicine for many years. It is also believed that the effects of THC may be lessened when mixed with pinene.

Limonene is a monocyclic monoterpenoid and one of the two major compounds formed from pinene. As the name suggests, varieties high in limonene have strong citrusy smells reminiscent of oranges, lemons and limes, and they promote a general uplift in mood and attitude. This citrusy terpene is the major constituent in citrus fruit rinds, rosemary, juniper and peppermint, as well as in several pine needle oils. Limonene is readily absorbed by inhalation and quickly appears in the bloodstream. It assists in the absorption of other terpenes through the skin and other body tissue. It is well documented that limonene suppresses the growth of many species of fungi and bacteria, making it an ideal antifungal agent for ailments such as toenail fungus. Limonene may be beneficial in protecting against various cancers, and orally administered limonene is currently undergoing clinical trials in the treatment of breast cancer. Limonene has even been found to help promote weight loss. Plants use limonene as a natural insecticide to ward off predators. Limonene was used primarily in food and perfumes until a couple of decades ago, when it became better known as the main active ingredient in citrus cleaner. It has very low toxicity, and adverse effects are rarely associated with it.

Beta-caryophyllene is a sesquiterpene found in many plants such as Thai basil, cloves, cinnamon leaves and black pepper, and in minor quantities in lavender. Its aroma has been described as peppery, woody and/or spicy. Caryophyllene is the only terpene known to interact with the endocannabinoid system: research shows that β-caryophyllene selectively binds to the CB2 receptor and is a functional CB2 agonist. Further, β-caryophyllene has been identified as a functional, non-psychoactive CB2 receptor ligand in foodstuffs and as a macrocyclic anti-inflammatory cannabinoid in cannabis, and studies suggest it holds promise in cancer treatment plans. The Fine/Rosenfeld pain study suggests that phytocannabinoids in combination, especially cannabidiol (CBD) and β-caryophyllene delivered by the oral route, are promising candidates for the treatment of chronic pain because of their high safety and low adverse-effect profiles. The Horváth et al study suggests β-caryophyllene, through a CB2-receptor-dependent pathway, may be an excellent therapeutic agent to prevent nephrotoxicity (a poisonous effect on the kidneys) caused by anti-cancer chemotherapy drugs such as cisplatin. The Jeena, Liju et al study investigated the chemical composition of essential oil isolated from black pepper, of which caryophyllene is a main constituent, and studied its pharmacological properties; black pepper oil was found to possess antioxidant, anti-inflammatory and antinociceptive properties. This suggests that high-caryophyllene strains may be useful in treating a number of medical issues such as arthritis and neuropathic pain. Beta-caryophyllene is also used in chewing gum, especially when combined with other spicy mixtures or citrus flavorings.
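As noted earlier, many terpenes vaporize near THC's boiling point of about 157°C, while others are more or less volatile. The short Python sketch below illustrates that point with a simple cutoff model; the boiling points are approximate literature values for the pure compounds, the function is our own illustrative construction rather than any published tool, and real vaporization from plant material is far more gradual than a hard threshold.

```python
# Illustrative sketch: which compounds have largely vaporized at a given
# vaporizer temperature? Boiling points are approximate values for the
# pure compounds at atmospheric pressure, not measurements from cannabis.
APPROX_BOILING_POINT_C = {
    "THC": 157,           # figure cited in the article above
    "alpha-pinene": 155,
    "beta-myrcene": 167,
    "limonene": 176,
    "linalool": 198,
}

def vaporized_at(temp_c):
    """Return compounds whose approximate boiling point is at or below temp_c."""
    return sorted(name for name, bp in APPROX_BOILING_POINT_C.items()
                  if bp <= temp_c)

if __name__ == "__main__":
    for setting in (160, 180, 200):
        print(f"{setting} C -> {', '.join(vaporized_at(setting)) or 'none'}")
```

At a 160°C setting, the sketch flags only pinene and THC, which matches the article's observation that some terpenes are markedly more volatile than others.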
Linalool is a non-cyclic monoterpenoid with floral and lavender undertones. Varieties high in linalool promote calming, relaxing effects, and linalool has been used for centuries as a sleep aid. Linalool lessens the anxious emotions provoked by pure THC, making it helpful in the treatment of both psychosis and anxiety. Studies also suggest that linalool boosts the immune system, can significantly reduce lung inflammation, and can restore cognitive and emotional function (making it useful in the treatment of Alzheimer's disease). As shown by the Ma, Xu et al study, linalool may significantly reduce lung inflammation caused by cigarette smoke by blocking the carcinogenesis induced by benz[a]anthracene, a component of the tar generated by the combustion of tobacco; this finding indicates linalool may also help reduce the harm caused by inhaling cannabis smoke. Linalool boosts the immune system by directly activating immune cells through specific receptors and/or pathways. The Sabogal-Guáqueta et al study suggests linalool may reverse the histopathological hallmarks of Alzheimer's disease (histopathology being the microscopic examination of biological tissues to observe diseased cells in fine detail) and could restore cognitive and emotional functions via an anti-inflammatory effect. The Environmental Protection Agency has approved its use as a pesticide, flavor agent and scent. It is used in a wide variety of bath and body products and is commonly listed in their ingredients as beta linalool, linalyl alcohol, linaloyl oxide, p-linalool and alloocimenol. Its vapors have been shown to be an effective insecticide against fruit flies, fleas and cockroaches. Linalool has been isolated in several hundred different plants. The Lamiaceae family, which includes mints and other scented herbs, is a common source, as are the Lauraceae family, which includes laurels, cinnamon and rosewood, and the Rutaceae family, which contains the citrus plants. Birch trees and several plant species found in tropical and boreal climate zones also produce linalool, and although they are technically not plants, some fungi produce it as well. Linalool is also a critical precursor in the formation of vitamin E.

Terpinolene is a common component of sage and rosemary and is found in the oil derived from the Monterey cypress. Its largest use in the United States is in soaps and perfumes, and it is also a great insect repellent. Terpinolene is known to have a piney aroma with slight herbal and floral nuances, and it tends to have a sweet flavor reminiscent of citrus fruits like oranges and lemons. Terpinolene has been found to be a central nervous system depressant, used to induce drowsiness or sleep or to reduce psychological excitement or anxiety. Further, terpinolene was found to markedly reduce the protein expression of AKT1 in K562 cells and to inhibit cell proliferation, a process involved in a variety of human cancers.

Camphene, a plant-derived monoterpene, emits pungent odors of damp woodlands and fir needles. Camphene may play a valuable role against cardiovascular disease: the Vallianou et al study found that camphene reduces plasma cholesterol and triglycerides in hyperlipidemic rats.
Given the importance of controlling hyperlipidemia in heart disease, the results of this study provide insight into how camphene might be used as an alternative to pharmaceutical lipid-lowering agents, which can cause intestinal problems, liver damage and muscle inflammation. This finding alone warrants further investigation. Camphene is a minor component of many essential oils such as turpentine, camphor oil, citronella oil and ginger oil. It is used as a food additive for flavoring and in the preparation of fragrances, and it is produced industrially by catalytic isomerization of the more common α-pinene.

α-Terpineol and terpinen-4-ol (also written 4-terpineol) are closely related monoterpenoids. The aroma of terpineol has been compared to lilacs and flower blossoms. Terpineol is often found in cannabis varieties that have high pinene levels, which unfortunately mask its fragrant aroma. Terpineol, specifically α-terpineol, is known to have calming, relaxing effects. It also exhibits antibiotic, acetylcholinesterase (AChE) inhibitor, antioxidant and antimalarial properties.

Phellandrene is described as pepperminty, with a slight scent of citrus. Phellandrene is believed to have special medicinal value: it has been used in Traditional Chinese Medicine to treat digestive disorders, and it is one of the main compounds in turmeric leaf oil, which is used to prevent and treat systemic fungal infections. Phellandrene is perhaps the easiest terpene to identify in the lab. When a solution of phellandrene in a solvent (or an oil containing phellandrene) is treated with a concentrated solution of sodium nitrite and then with a few drops of glacial acetic acid, very large crystals of phellandrene nitrite speedily form. Phellandrene was first discovered in eucalyptus oil, but it was not until the early 1900s that eucalyptus phellandrene was shown to contain two isomers (now referred to as α-phellandrene and β-phellandrene): oxidation with potassium permanganate gave two distinct acids, from which it was concluded that two different isomeric phellandrenes were present. Before that, phellandrene had been mistaken for pinene or limonene. Today we know of many essential oils in which phellandrene is present. It is, however, a somewhat elusive terpene, as it can be detected in the oils of some species, especially the eucalypts, only at particular times of the year. Phellandrene can be found in a number of herbs and spices, including cinnamon, garlic, dill, ginger and parsley, and a number of plants, including lavender and grand fir, produce β-phellandrene as a constituent of their essential oils. The recognizable odors of some essential oils depend almost entirely on the presence of phellandrene: oil of pepper and dill oil are especially rich in it, and it is the principal constituent in oil of ginger. Phellandrene, particularly α-phellandrene, is absorbed through the skin, making it attractive for use in perfumes; it is also used as a flavoring for food products.

Delta-3-carene is a bicyclic monoterpene with a sweet, pungent odor. It is found naturally in many beneficial essential oils, including cypress oil, juniper berry oil and fir needle essential oils. In higher concentrations, delta-3-carene can be a central nervous system depressant. It is often used to dry out excess body fluids such as tears, mucus and sweat. It is nontoxic, but may cause irritation when inhaled.
High concentrations of delta-3-carene in some strains may be partially responsible for the coughing, itchy throat and eye irritation some people experience when smoking cannabis. Delta-3-carene is also naturally present in pine extract, bell pepper, basil oil, grapefruit and orange juices, and in the citrus peel oils of fruits like lemons, limes, mandarins, tangerines, oranges and kumquats. Carene is a major component of turpentine and is used as a flavoring in many products.

Humulene is a sesquiterpene also known as α-humulene or α-caryophyllene; it is an isomer of β-caryophyllene. Humulene is found in hops, cannabis sativa strains and Vietnamese coriander, among other plants, and it is what gives beer its distinct "hoppy" aroma. Humulene is considered to be anti-tumor, anti-bacterial, anti-inflammatory and anorectic (appetite-suppressing). It has commonly been blended with β-caryophyllene and used as a major remedy for inflammation, and it has been used for generations in Chinese medicine. It aids in weight loss by acting as an appetite suppressant.

Pulegone, a monocyclic monoterpenoid, is a minor component of cannabis; higher concentrations are found in rosemary. Pulegone is reported to inhibit acetylcholinesterase, the enzyme that breaks down acetylcholine in the brain, thereby allowing nerve cells to communicate more effectively with one another. An ethnopharmacology study indicates pulegone may have significant sedative and fever-reducing properties. It may also alleviate the short-term memory loss sometimes associated with higher levels of THC. Pulegone has a pleasant peppermint aroma and is considered a strong insecticide.

Sabinene is a bicyclic monoterpene whose aromas are reminiscent of the holidays (pines, oranges, spices). Results of an ongoing study by Valente et al suggest that sabinene should be explored further as a natural source of new antioxidant and anti-inflammatory drugs for the development of food supplements, nutraceuticals or plant-based medicines. Sabinene occurs in many plants, including Norway spruce, black pepper, basil and Myristica fragrans (an evergreen indigenous to the Moluccas, the Spice Islands of Indonesia); the seeds of Myristica fragrans are the world's main source of nutmeg. Sabinene exists as (+)- and (-)-enantiomers.

Geraniol produces a sweet, delightful smell similar to roses, which makes it a popular choice for many bath and body products. It is also known to be an effective mosquito repellent. Medically, geraniol shows promise in the treatment of neuropathy.
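To make the flavor-wheel idea described above concrete, here is a minimal Python sketch of a terpene-profile lookup. The aroma and effect notes are condensed from the descriptions in this article; the data structure, function name and example lab values are our own illustrative assumptions, not any company's actual wheel or analytical API.

```python
# A minimal sketch of a terpene-profile lookup, using aroma/effect notes
# summarized from this article. The example percentages are hypothetical
# lab values, for illustration only.
TERPENE_NOTES = {
    "myrcene":       ("musky, earthy, clove-like", "sedative, relaxing"),
    "pinene":        ("pine, fir",                 "alertness, memory retention"),
    "limonene":      ("citrus",                    "mood uplift"),
    "caryophyllene": ("peppery, woody, spicy",     "anti-inflammatory (CB2)"),
    "linalool":      ("floral, lavender",          "calming, anti-anxiety"),
}

def describe_profile(profile):
    """Summarize a lab-reported terpene profile (percent by weight)."""
    dominant = max(profile, key=profile.get)
    aroma, effect = TERPENE_NOTES.get(dominant, ("unknown", "unknown"))
    return (f"Dominant terpene: {dominant} ({profile[dominant]:.2f}%); "
            f"expected aroma: {aroma}; commonly reported effects: {effect}")

if __name__ == "__main__":
    # Hypothetical profile of a single harvest sample.
    sample = {"myrcene": 0.62, "pinene": 0.11, "limonene": 0.25}
    print(describe_profile(sample))
```

Because profiles vary from source to source and harvest to harvest, as the Casano et al study notes, such a lookup is only as reliable as the lab data behind it.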
The United States began regulating hunting in 1971 and adopted the Marine Mammal Protection Act in 1972. Chemical communication can also be important: bears leave behind their scent in their tracks, which allows individuals to keep track of one another in the vast Arctic wilderness. The largest polar bear on record, reportedly weighing 1,002 kg (2,209 lb), was a male shot at Kotzebue Sound in northwestern Alaska in 1960. The polar bear tends to frequent areas where sea ice meets water, such as polynyas and leads (temporary stretches of open water in Arctic ice), to hunt the seals that make up most of its diet. A polar bear can live up to 25 years, with wild bears reaching a record of 32 years and some in captivity reaching 43 years of age. With the exception of pregnant females, polar bears are active year-round, although they have a vestigial hibernation induction trigger in their blood. The claws are deeply scooped on the underside to assist in digging in the ice of the natural habitat; in unusually warm conditions, the hollow guard hairs of the coat can provide an excellent home for algae. The polar bear is also remarkably strong, with a bite force of about 1,200 pounds per square inch, and a 1,000 lb bear can eat 200 lbs worth of food in a single meal. Adult males normally weigh from 350 to 700 kilograms (772 to 1,543 pounds) and occasionally exceed 800 kilograms (1,760 pounds); an adult male ranges from 7.9 to 9.8 feet in length. A female polar bear can reach around 450 to 500 kg in weight when pregnant. In two areas where harvest levels have been increased based on increased sightings, science-based studies have indicated declining populations, and a third area is considered data-deficient. Between 1987 and 2004, the Western Hudson Bay population declined by 22%, although the population was listed as "stable" as of 2017. The body condition of polar bears has declined during this period; the average weight of lone (and likely pregnant) female polar bears was approximately 290 kg (640 lb) in 1980 and 230 kg (510 lb) in 2004. The subpopulations display seasonal fidelity to particular areas, but DNA studies show that they are not reproductively isolated. Norway passed a series of increasingly strict regulations from 1965 to 1973, and has completely banned hunting since then. In Quebec, the polar bear is referred to as ours blanc ("white bear") or ours polaire ("polar bear"). Polar bears were once chased from snowmobiles, icebreakers, and airplanes, the latter practice described in a 1965 New York Times editorial as being "about as sporting as machine gunning a cow."
(2004) "The Heat is On,", List of We Bare Bears characters § Ice Bear, International Agreement on the Conservation of Polar Bears, Ministry of Natural Resources and Environment, International Union for Conservation of Nature, 2019 mass invasion of Russian polar bears, "Late Pleistocene fossil find in Svalbard: the oldest remains of a polar bear (, 10.2305/IUCN.UK.2015-4.RLTS.T22823A14871490.en, "Alaska, Chukotka sign agreement to manage polar bears", "Education: Marine Mammal Information: Polar Bears", "Unequal Rates of Y Chromosome Gene Divergence during Speciation of the Family Ursidae", "Molecular distance and divergence time in carnivores and primates", "Complete mitochondrial genome of a Pleistocene jawbone unveils the origin of polar bear", "Ancient Hybridization and an Irish Origin for the Modern Polar Bear Matriline", "Nuclear Genomic Sequences Reveal that Polar Bears Are an Old and Distinct Bear Lineage", "Population Genomics Reveal Recent Speciation and Rapid Evolutionary Adaptation in Polar Bears", "Brown bears and polar bears split up, but continued coupling", "List of Marine Mammal Species & Subspecies", "Genetic structure of the world's polar bear populations", Supplementary material for Ursus maritimus Red List assessment, "Killing polar bears in Iceland "only logical thing to do, "The war over the polar bear: Who's telling the truth about the fate of a Canadian icon? (0.8% in the 1970s, 7.1% in the 1980s, and 14.6% in the 1990s) Nunavut polar bear biologist, Mitchell Taylor, who was formerly responsible for polar bear conservation in the territory, has insisted that bear numbers are being sustained under current hunting limits. The heaviest male polar bear on record weighed 2209 pounds (1002 kgs). Polar bears can breed with brown bears to produce fertile grizzly–polar bear hybrids; rather than indicating that they have only recently diverged, the new evidence suggests more frequent mating has continued over a longer period of time, and thus the two bears remain genetically similar. Polar bear adults. The family remains in the den until mid-February to mid-April, with the mother maintaining her fast while nursing her cubs on a fat-rich milk. The pair wrestled harmlessly together each afternoon for 10 days in a row for no apparent reason, although the bear may have been trying to demonstrate its friendliness in the hope of sharing the kennel's food. PCBs have received the most study, and they have been associated with birth defects and immune system deficiency. A polar bear’s stomach can hold up to 20% of its own mass. The causes of death in wild adult polar bears are poorly understood, as carcasses are rarely found in the species's frigid habitat. A 1000lb bear can 200 lbs worth of food in a single meal. A polar bear can eat 100 pounds of seal blubber in one meal. In 1927, poisoning was outlawed while in 1939, certain denning sights were declared off limits. , Polar bears are currently listed as "Rare", of "Uncertain Status", or "Rehabilitated and rehabilitating" in the Red Data Book of Russia, depending on population. Bacterial leptospirosis and Morbillivirus have been recorded. They are the only animal known to actively hunt humans. The average polar bear studied weighed about 386 pounds. , Due to warming air temperatures, ice-floe breakup in western Hudson Bay is currently occurring three weeks earlier than it did 30 years ago, reducing the duration of the polar bear feeding season. 
The polar bear's claws are short and stocky compared to those of the brown bear, perhaps to serve the former's need to grip heavy prey and ice. Norway is the only country of the five in which all harvest of polar bears is banned. The average weight of a female polar bear is usually between 150 and 250 kg. In 1973, the International Agreement on the Conservation of Polar Bears was signed by all five nations whose territory is inhabited by polar bears: Canada, Denmark, Norway, the Soviet Union, and the United States. The polar bear is the largest land carnivore alive in the world today. It is difficult to estimate a global population of polar bears as much of the range has been poorly studied; however, biologists use a working estimate of about 20-25,000 or 22-31,000 polar bears worldwide. That decision was approved by members of the IUCN and TRAFFIC, who determined that such an uplisting was unlikely to confer a conservation benefit. Such was the intensity of human fascination with this magnificent predator, the only marine bear. A quick comparison of the two largest bears:

Polar bear - average weight of mature male: 900-1,500 pounds; heaviest recorded: 2,210 pounds; average length of mature male: 8-8.4 feet.
Brown bear - average weight of mature male: 500-900 pounds; heaviest recorded: 2,500+ pounds; average length of mature male: 7-10 feet.

Polar bears are called Nanuuq by Eskimos. The polar bear is found in the Arctic Circle and adjacent land masses as far south as Newfoundland. The key danger posed by climate change is malnutrition or starvation due to habitat loss. The Western Hudson Bay subpopulation is unusual in that its female polar bears sometimes wean their cubs at only one and a half years. Several animal species, particularly Arctic foxes (Vulpes lagopus) and glaucous gulls (Larus hyperboreus), routinely scavenge polar bear kills. According to Polar Bear International, the largest known polar bear weighed in at a massive 2,209 pounds (1,000 kg), though this report may well be dubious. In Russian, the polar bear is usually called бе́лый медве́дь (bélyj medvédj, the white bear), though an older word still in use is ошку́й (Oshkúj, which comes from the Komi oski, "bear"). Some regulations of hunting did exist. Polar bears can occasionally drift widely with the sea ice, and there have been anecdotal sightings as far south as Berlevåg on the Norwegian mainland and the Kuril Islands in the Sea of Okhotsk. The bears then proceeded to cache the carcasses, which remained and were eaten during the ice-free summer and autumn. Environment Canada also banned the export from Canada of fur, claws, skulls and other products from polar bears harvested in Baffin Bay as of 1 January 2010. Because of the way polar bear hunting quotas are managed in Canada, attempts to discourage sport hunting would actually increase the number of bears killed in the short term. The only other bear similar in size to the polar bear is the Kodiak bear, a subspecies of brown bear. The range includes the territory of five nations: Denmark (Greenland), Norway (Svalbard), Russia, the United States (Alaska) and Canada.
When sprinting, polar bears can reach up to 40 km/h (25 mph). The dump in Churchill, Manitoba was closed in 2006 to protect bears, and waste is now recycled or transported to Thompson, Manitoba. Controls of harvesting were also introduced that allowed this previously overhunted species to recover. Polar bears hunt primarily at the interface between ice, water, and air; they only rarely catch seals on land or in open water. Traditional subsistence hunting was on a small enough scale not to significantly affect polar bear populations, mostly because of the sparseness of the human population in polar bear habitat. Almost all parts of captured animals had a use. When the cubs are 90 to 100 days old, they leave the den. Unlike brown and black bears, polar bears are capable of fasting for up to several months during late summer and early fall, when they cannot hunt for seals because the sea is unfrozen. Polar bears hunt seals from a platform of sea ice. At nearly 10 feet long and weighing about 1,600 pounds, the polar bear is likely the largest species of bear in the world, challenged only by the Kodiak bear, a subspecies of the brown bear. Males are significantly larger than females, which normally weigh no more than 1,000 pounds; the average weight for a male polar bear is between 660 and 1,760 pounds, and most fall between 900 and 1,600 pounds. Vehicle licence plates in the Northwest Territories in Canada are in the shape of a polar bear, as was the case in Nunavut until 2012; these now display polar bear artwork instead. The greatest weight recorded for a polar bear in the wild is an incredible 1,002 kg (2,209 pounds), for a white colossus shot by Arthur Dubs of Medford, Oregon, U.S.A., at the polar entrance to Kotzebue Sound, NW Alaska, in 1960. At birth, a cub weighs only about 1.3 pounds (roughly half a kilogram), but cubs grow very quickly. The late spring hunting season ends for polar bears when the ice begins to melt and break up, and they fast or eat little during the summer until the sea freezes again. Inuit writers sometimes call the polar bear Pihoqahiak, "the ever-wandering one." Problematic interactions between polar bears and humans, such as foraging by bears in garbage dumps, have historically been more prevalent in years when ice-floe breakup occurred early and local polar bears were relatively thin. The mtDNA of extinct Irish brown bears is particularly close to that of polar bears. Adult male bearded seals, at 350 to 500 kg (770 to 1,100 lb), are too large for a female bear to overtake, and so are potential prey only for mature male bears. After a bear was killed, its head and skin were removed and cleaned and brought into the home, and a feast was held in the hunting camp in its honor. A pregnant female must store excess fat in order to spend days in the den without eating. In recent years, polar bears have approached coastal villages in Chukotka more frequently due to the shrinking of the sea ice, endangering humans and raising concerns that illegal hunting would become even more prevalent. Many Inuit believe the polar bear population is increasing, and restrictions on commercial sport-hunting are likely to lead to a loss of income to their communities.
Other names the polar bear is known by include ice bear and isbjørn. Now let us look at how much polar bears weigh. Human-bear interactions, including fatal attacks on humans, are likely to increase as the sea ice shrinks and hungry bears try to find food on land. The largest bear in the world and the Arctic's top predator, the polar bear is a powerful symbol of the strength and endurance of the Arctic. The Yupik refer to the bear as nanuuk in Siberian Yupik. Canada allocates a certain number of permits each year to sport and subsistence hunting, and those that are not used for sport hunting are re-allocated to indigenous subsistence hunting. The killing of females and cubs was made illegal in 1965. Elsewhere, a slightly larger estimated average weight of 260 kg (570 lb) was claimed for adult females. In Hudson Bay, James Bay, and some other areas, the ice melts completely each summer (an event often referred to as "ice-floe breakup"), forcing polar bears to go onto land and wait through the months until the next freeze-up. One scientist found that 71% of the Hudson Bay bears had fed on seaweed (marine algae) and that about half were feeding on birds such as the dovekie and sea ducks, especially the long-tailed duck (53%) and common eider, by swimming underwater to catch them. Smaller bear species that do not need to be large and strong typically weigh well under 300 pounds, and some weigh less than 100 pounds. Females weigh around 150-250 kg (331-551 lb); the maximum reported weight in females is 260 kg (573 lb), but that is extremely rare. Because of their dependence on the sea ice, polar bears are classified as marine mammals. Brown bears tend to dominate polar bears in disputes over carcasses, and dead polar bear cubs have been found in brown bear dens. Modern methods of tracking polar bear populations have been implemented only since the mid-1980s, and are expensive to perform consistently over a large area. Satiated polar bears rarely attack humans unless severely provoked. The polar bear's scientific name, Ursus maritimus, means "sea bear" in Latin. A boar (adult male) weighs around 350-700 kg (770-1,540 lb), while a sow (adult female) is about half that size. Male polar bears average nearly 900 pounds in weight, with one weighing in at 1,760 pounds; by comparison, the average brown bear weighs in at 600 pounds. Here, their food ecology shows their dietary flexibility. Sport hunting can bring CDN$20,000 to $35,000 per bear into northern communities, until recently mostly from American hunters. Many attacks by brown bears are the result of surprising the animal, which is not the case with the polar bear. Seals are the polar bear's favorite food. Clyde, a Kodiak brown bear, weighed 2,130 pounds at the time of his death in 1987 at the Dakota Zoo in Bismarck, N.D. The polar bear is the largest land carnivore on earth. The female comes out of the den in the spring. Their southernmost range is near the boundary between the subarctic and humid continental climate zones. An adult male, also known as a boar, weighs between 775 and 1,400 pounds (351 to 635 kg). Inuit legends reveal a deep respect for the polar bear, which is portrayed as both spiritually powerful and closely akin to humans. In general, adult polar bears live solitary lives. Cubs may hum while nursing.
Polar bears accumulate high levels of persistent organic pollutants such as polychlorinated biphenyls (PCBs) and chlorinated pesticides. Adult males are around 550 to 1,700 pounds. Canada began imposing hunting quotas in 1968. Polar bears are stealth hunters, and the victim is often unaware of the bear's presence until the attack is underway. Most terrestrial animals in the Arctic can outrun the polar bear on land, as polar bears overheat quickly, and most marine animals the bear encounters can outswim it. The animated television series Noah's Island features a polar bear named Noah as the protagonist. Estimates of total historical harvest suggest that from the beginning of the 18th century, roughly 400 to 500 animals were being harvested annually in northern Eurasia, reaching a peak of 1,300 to 1,500 animals in the early 20th century, and falling off as the numbers began dwindling. A female polar bear may be only one-half to one-third the size of a male; females are generally 6 to 8 feet in length and are referred to as sows. In 1980 the average weight of adult females in western Hudson Bay was 650 pounds. Polar bears weigh 1 to 1.5 pounds when born but grow to more than 20 times their birth weight in just a few months. The Soviet Union banned all hunting in 1956. The treaty allows hunting "by local people using traditional methods". The average polar bear studied weighed about 386 pounds (175 kilograms). Adult males normally weigh 350 to more than 600 kilograms (775 to more than 1,300 pounds). Polar bears have also been seen to prey on beluga whales (Delphinapterus leucas) and narwhals (Monodon monoceros) by swiping at them at breathing holes. The polar bear's scientific name means "maritime bear" and derives from this fact. The U.S. Fish and Wildlife Service published a draft conservation management plan for polar bears to improve their status under the Endangered Species Act and the Marine Mammal Protection Act. Polar bear cubs are 30 cm long at birth and weigh 16 to 24 oz. Of the 19 recognized polar bear subpopulations, one is in decline, two are increasing, seven are stable, and nine have insufficient data, as of 2017. Polar bears sometimes have problems with various skin diseases that may be caused by mites or other parasites. Ursus maritimus is strongly sexually dimorphic, in fact one of the most dimorphic mammals in the marine world. By the time the cubs leave the den they weigh 22 to 33 pounds. Female polar bears have been known to adopt other cubs. Male polar bears also hunt larger bearded seals. If a sport hunter does not kill a polar bear before his or her permit expires, the permit cannot be transferred to another hunter. Polar bears continue to be listed as a species of special concern in Canada because of their sensitivity to overharvest and because of an expected range contraction caused by loss of Arctic sea ice. One bear lost 51 pounds (23 kilograms) in just nine days.
In the wild, old polar bears eventually become too weak to catch food, and gradually starve to death. Polar bears aren't just huge creatures; they are massive. Member countries of the conservation agreement agreed to place restrictions on recreational and commercial hunting, ban hunting from aircraft and icebreakers, and conduct further research. The record specimen, when mounted, stood 3.39 m (11 ft 1 in) tall on its hind legs; polar bears can stand up on their hind legs. Polar bear fur consists of a layer of dense underfur and an outer layer of guard hairs, which appear white to tan but are actually transparent; the coat usually yellows with age. Thinner sea ice tends to deform more easily, which appears to make it more difficult for polar bears to access seals. Around the Beaufort Sea, however, mature males reportedly average 450 kg (1,000 lb). The polar bear is an excellent swimmer and often will swim for days; it swims in a dog-paddle fashion, using its large forepaws for propulsion, and may swim underwater for up to three minutes to approach seals on shore or on ice floes. In one documented case, a radio-collared female swam for an entire week and then travelled another 1,800 km (1,100 mi); her yearling cub did not survive the journey. Bears have also been observed diving to feed on blue mussels and other underwater food sources like the green sea urchin, and another hunting method is to raid the birth lairs that female seals create in the snow. In traditional butchering, almost every part of the bear was used; only the liver was avoided, as its high concentration of vitamin A is poisonous. Only once the spirit was appeased was the skull separated from the skin, taken beyond the bounds of the homestead, and placed in the ground, facing north. According to the World Wildlife Fund, the polar bear is important as an indicator of Arctic ecosystem health. The polar bear is among the most sexually dimorphic of mammals, surpassed only by the pinnipeds such as elephant seals; it was previously considered to belong to its own genus, Thalarctos, though this distinction has since been invalidated. According to one study, 20,000 polar bears could eat 128,469 seals annually, each seal weighing around 121 pounds. Most male polar bears, also known as boars, attain their full weight by the time they are ten years old, while females approach their smaller full size at about age six. A typical litter has two cubs, and the mating ritual induces ovulation in the female. Government of Nunavut officials announced that the polar bear quota for the Baffin Bay region would be gradually reduced from 105 per year to 65 by the year 2013. In Nunavut, some Inuit have reported increases in bear sightings around human settlements in recent years, leading to a belief that populations are increasing. In one encounter, a bear reached the photographer Hoshino's truck and tore one of the doors off before he was able to drive away. The polar bear is the only living marine mammal with powerful, large limbs and feet that allow it to cover kilometres on foot and to run on land. Polar bears were hunted heavily in Svalbard, Norway, throughout the 19th century and as recently as 1973, when the conservation treaty was signed; in the winter of 1784/1785 Russian Pomors wintering on Spitsbergen took many bears, and Norwegian hunters were at one time harvesting 300 bears per year. Polar bears are usually quiet but do communicate with various sounds and vocalizations: mothers communicate with their young with moans and chuffs, and the vocalizations of cubs and subadults include bleats. Rather than fight, polar bears generally prefer to avoid one another.
The Prophet of Islam - His Biography

In the annals of men, individuals have not been lacking who conspicuously devoted their lives to the socio-religious reform of their peoples. We find them in every epoch and in all lands. In India there lived those who transmitted to the world the Vedas, and there was also the great Gautama Buddha; China had its Confucius; the Avesta was produced in Iran. Babylonia gave to the world one of the greatest reformers, the Prophet Abraham (not to speak of such of his ancestors as Enoch and Noah, about whom we have very scanty information). The Jewish people may rightly be proud of a long series of reformers: Moses, Samuel, David, Solomon, and Jesus among others.

2. Two points are to be noted: firstly, these reformers generally claimed to be bearers of a Divine mission, and they left behind them sacred books incorporating codes of life for the guidance of their peoples. Secondly, there followed fratricidal wars, and massacres and genocides became the order of the day, causing a more or less complete loss of these Divine messages. As to the books of Abraham, we know them only by name; and as for the books of Moses, records tell us how they were repeatedly destroyed and only partly restored.

Concept of God

3. If one should judge from the relics of homo sapiens already brought to light, one finds that man has always been conscious of the existence of a Supreme Being, the Master and Creator of all. Methods and approaches may have differed, but the people of every epoch have left proofs of their attempts to obey God. Communication with the Omnipresent yet invisible God has also been recognised as possible for a small fraction of men of noble and exalted spirit. Whether this communication assumed the nature of an incarnation of the Divinity or simply resolved itself into a medium for the reception of Divine messages (through inspiration or revelation), the purpose in each case was the guidance of the people. It was but natural that the interpretations and explanations of certain systems should have proved more vital and convincing than others.

3/a. Every system of metaphysical thought develops its own terminology. In the course of time terms acquire a significance hardly contained in the word itself, and translations fall short of their purpose. Yet there is no other method to make the people of one group understand the thoughts of another. Non-Muslim readers in particular are requested to bear in mind this aspect, which is a real yet unavoidable handicap.

4. By the end of the 6th century after the birth of Jesus Christ, men had already made great progress in diverse walks of life. At that time there were some religions which openly proclaimed that they were reserved for definite races and groups of men only; of course, they bore no remedy for the ills of humanity at large. There were also a few which claimed universality but declared that the salvation of man lay in the renunciation of the world. These were religions for the elite, and catered to an extremely limited number of men. We need not speak of regions where there existed no religion at all, where atheism and materialism reigned supreme, and where the thought was solely of occupying oneself with one's own pleasures, without any regard or consideration for the rights of others.
5. A perusal of the map of the major hemisphere (from the point of view of the proportion of land to sea) shows the Arabian Peninsula lying at the confluence of the three great continents of Asia, Africa and Europe. At the time in question, this extensive Arabian subcontinent, composed mostly of desert areas, was inhabited by people of settled habitations as well as nomads. Often members of the same tribe were divided into these two groups, preserving a relationship although following different modes of life. The means of subsistence in Arabia were meagre. The desert had its handicaps, and trade caravans were of greater importance than either agriculture or industry. This entailed much travel, and men had to proceed beyond the peninsula to Syria, Egypt, Abyssinia, Iraq, Sind, India and other lands.

6. We do not know much about the Lihyanites of Central Arabia, but Yemen was rightly called Arabia Felix. Having once been the seat of the flourishing civilizations of Sheba and Ma'in, even before the foundation of the city of Rome had been laid, and having later snatched several provinces from the Byzantines and Persians, greater Yemen, which had passed through the heyday of its existence, was at this time broken up into innumerable principalities and even occupied in part by foreign invaders. The Sassanians of Iran, who had penetrated into Yemen, had already obtained possession of Eastern Arabia. There was politico-social chaos at the capital (Mada'in = Ctesiphon), and this found reflection in all her territories. Northern Arabia had succumbed to Byzantine influences and was faced with its own particular problems. Only Central Arabia remained immune from the demoralising effects of foreign occupation.

7. In this limited area of Central Arabia, the existence of the triangle of Mecca-Ta'if-Madinah seemed something providential. Mecca, desertic, deprived of water and the amenities of agriculture, in its physical features represented Africa and the burning Sahara. Scarcely fifty miles from there, Ta'if presented a picture of Europe and its frost. Madinah in the North was no less fertile than even the most temperate of Asiatic countries like Syria. If climate has any influence on human character, this triangle standing in the middle of the major hemisphere was, more than any other region of the earth, a miniature reproduction of the entire world. And here was born a descendant of the Babylonian Abraham and the Egyptian Hagar: Muhammad, the Prophet of Islam, a Meccan by origin and yet with stock related both to Madinah and Ta'if.

8. From the point of view of religion, Arabia was idolatrous; only a few individuals had embraced religions like Christianity, Mazdaism, etc. The Meccans did possess the notion of the One God, but they believed also that idols had the power to intercede with Him. Curiously enough, they did not believe in the Resurrection and the Afterlife. They had preserved the rite of the pilgrimage to the House of the One God, the Ka'bah, an institution set up under divine inspiration by their ancestor Abraham, yet the two thousand years that separated them from Abraham had caused this pilgrimage to degenerate into the spectacle of a commercial fair and an occasion of senseless idolatry which, far from producing any good, only served to ruin their individual behaviour, both social and spiritual.

9. In spite of its comparative poverty in natural resources, Mecca was the most developed of the three points of the triangle.
Of the three, Mecca alone had a city-state, governed by a council of ten hereditary chiefs who enjoyed a clear division of power. (There was a minister of foreign relations, a minister guardian of the temple, a minister of oracles, a minister guardian of offerings to the temple, one to determine the torts and the damages payable, and another in charge of the municipal council or parliament to enforce the decisions of the ministries. There were also ministers in charge of military affairs, such as custodianship of the flag and leadership of the cavalry.) As well-reputed caravan-leaders, the Meccans were able to obtain permission from neighbouring empires like Iran, Byzantium and Abyssinia - and to enter into agreements with the tribes that lined the routes traversed by the caravans - to visit their countries and transact import and export business. They also provided escorts to foreigners when they passed through their country, as well as the territory of allied tribes in Arabia (cf. Ibn Habib, Muhabbar). Although not much interested in the preservation of ideas and records in writing, they passionately cultivated arts and letters like poetry, oratorical discourses and folk tales. Women were generally well treated: they enjoyed the privilege of possessing property in their own right; they gave their consent to marriage contracts, in which they could even add the condition of reserving their right to divorce their husbands; and they could remarry when widowed or divorced. Burying girls alive did exist in certain classes, but it was rare.

Birth of the Prophet

10. It was in the midst of such conditions and environments that Muhammad was born in 569 after Christ. His father, 'Abdullah, had died some weeks earlier, and it was his grandfather who took him in charge. According to the prevailing custom, the child was entrusted to a Bedouin foster-mother, with whom he passed several years in the desert. All biographers state that the infant prophet sucked only one breast of his foster-mother, leaving the other for the sustenance of his foster-brother. When the child was brought back home, his mother, Aminah, took him to his maternal uncles at Madinah to visit the tomb of 'Abdullah. During the return journey, his mother died a sudden death. At Mecca, another bereavement awaited him, in the death of his affectionate grandfather. Subjected to such privations, he was, at the age of eight, consigned at last to the care of his uncle, Abu-Talib, a man who was generous of nature but always short of resources and hardly able to provide for his family.

11. Young Muhammad had therefore to start immediately to earn his livelihood; he served as a shepherd boy to some neighbours. At the age of ten he accompanied his uncle to Syria when the latter was leading a caravan there. No other travels of Abu-Talib are mentioned, but there are references to his having set up a shop in Mecca (Ibn Qutaibah, Ma'arif). It is possible that Muhammad helped him in this enterprise also.

12. By the time he was twenty-five, Muhammad had become well known in the city for the integrity of his disposition and the honesty of his character. A rich widow, Khadijah, took him in her employ and consigned to him her goods to be taken for sale to Syria. Delighted with the unusual profits she obtained, as also with the personal charms of her agent, she offered him her hand. According to divergent reports, she was either 28 or 40 years of age at that time (medical considerations favour the age of 28, since she gave birth to five more children).
The union proved happy. Later, we see him sometimes in the fair of Hubashah (Yemen), and at least once in the country of the 'Abd al-Qais (Bahrain-Oman), as mentioned by Ibn Hanbal. There is every reason to believe that this refers to the great fair of Daba (Oman), where, according to Ibn al-Kalbi (cf. Ibn Habib, Muhabbar), the traders of China, of Hind and Sind (India, Pakistan), of Persia, of the East and the West assembled every year, travelling both by land and sea. There is also mention of a commercial partner of Muhammad at Mecca. This person, Sa'ib by name, reports: "We relayed each other; if Muhammad led the caravan, he did not enter his house on his return to Mecca without clearing accounts with me; and if I led the caravan, he would on my return enquire about my welfare and speak nothing about his own capital entrusted to me."

An Order of Chivalry

13. Foreign traders often brought their goods to Mecca for sale. One day a certain Yemenite (of the tribe of Zubaid) improvised a satirical poem against some Meccans who had refused to pay him the price of what he had sold, and against others who had not supported his claim or had failed to come to his help when he was victimised. Zuhair, uncle and chief of the tribe of the Prophet, felt great remorse on hearing this just satire. He called for a meeting of certain chieftains in the city, and organized an order of chivalry, called Hilf al-fudul, with the aim and object of aiding the oppressed in Mecca, irrespective of their being dwellers of the city or aliens. Young Muhammad became an enthusiastic member of the organisation. Later in life he used to say: "I have participated in it, and I am not prepared to give up that privilege even against a herd of camels; if somebody should appeal to me even today, by virtue of that pledge, I shall hurry to his help."

Beginning of Religious Consciousness

14. Not much is known about the religious practices of Muhammad until he was thirty-five years old, except that he had never worshipped idols. This is substantiated by all his biographers. It may be stated that there were a few others in Mecca who had likewise revolted against the senseless practice of paganism, while conserving their fidelity to the Ka'bah as the house dedicated to the One God by its builder Abraham.

15. About the year 605 of the Christian era, the draperies on the outer wall of the Ka'bah took fire. The building was affected and could not bear the brunt of the torrential rains that followed. The reconstruction of the Ka'bah was thereupon undertaken. Each citizen contributed according to his means, and only the gifts of honest gains were accepted. Everybody participated in the work of construction, and Muhammad's shoulders were injured in the course of transporting stones. To identify the place whence the ritual of circumambulation began, there had been set a black stone in the wall of the Ka'bah, dating probably from the time of Abraham himself. There was rivalry among the citizens for the honour of transposing this stone to its place. When there was danger of blood being shed, somebody suggested leaving the matter to Providence, and accepting the arbitration of him who should happen to arrive there first. It chanced that Muhammad just then turned up there for work as usual. He was popularly known by the appellation of al-Amin (the honest), and everyone accepted his arbitration without hesitation.
Muhammad placed a sheet of cloth on the ground, put the stone on it, and asked the chiefs of all the tribes in the city to lift the cloth together. Then he himself placed the stone in its proper place, in one of the angles of the building, and everybody was satisfied.

16. It is from this moment that we find Muhammad becoming more and more absorbed in spiritual meditations. Like his grandfather, he used to retire during the whole month of Ramadan to a cave in Jabal-an-Nur (mountain of light). The cave is called 'Ghar-i-Hira', or the cave of research. There he prayed, meditated, and shared his meagre provisions with the travellers who happened to pass by.

17. He was forty years old, and it was the fifth consecutive year of his annual retreats, when one night towards the end of the month of Ramadan an angel came to visit him, and announced that God had chosen him as His messenger to all mankind. The angel taught him the mode of ablutions, the way of worshipping God and the conduct of prayer. He communicated to him the following Divine message:

18. Deeply affected, he returned home and related to his wife what had happened, expressing his fears that it might have been something diabolic or the action of evil spirits. She consoled him, saying that he had always been a man of charity and generosity, helping the poor, the orphans, the widows and the needy, and assured him that God would protect him against all evil.

19. Then came a pause in revelation, extending over three years. The Prophet must have felt at first a shock, then a calm, an ardent desire, and after a period of waiting, a growing impatience or nostalgia. The news of the first vision had spread, and at the pause the sceptics in the city had begun to mock at him and make bitter jokes. They went so far as to say that God had forsaken him.

20. During the three years of waiting, the Prophet had given himself up more and more to prayers and to spiritual practices. The revelations were then resumed, and God assured him that He had not at all forsaken him; on the contrary, it was He Who had guided him to the right path; therefore he should take care of the orphans and the destitute, and proclaim the bounty of God on him (cf. Q. 93:3-11). This was in reality an order to preach. Another revelation directed him to warn people against evil practices, to exhort them to worship none but the One God, and to abandon everything that would displease God (Q. 74:2-7). Yet another revelation commanded him to warn his own near relatives (Q. 26:214); and: "Proclaim openly that which thou art commanded, and withdraw from the Associators (idolaters). Lo! we defend thee from the scoffers" (Q. 15:94-95). According to Ibn Ishaq, the first revelation (n. 17) had come to the Prophet during his sleep, evidently to reduce the shock. Later revelations came in full wakefulness.

21. The Prophet began by preaching his mission secretly, first among his intimate friends, then among the members of his own tribe, and thereafter publicly in the city and suburbs. He insisted on belief in One Transcendent God, in Resurrection and the Last Judgement. He invited men to charity and beneficence. He took the necessary steps to preserve in writing the revelations he was receiving, and ordered his adherents also to learn them by heart. This continued all through his life, since the Quran was not revealed all at once, but in fragments as occasions arose.

22.
The number of his adherents increased gradually, but with the denunciation of paganism, the opposition also grew more intense on the part of those who were firmly attached to their ancestral beliefs. This opposition degenerated in the course of time into physical torture of the Prophet and of those who had embraced his religion. They were stretched on burning sands, cauterized with red-hot iron, and imprisoned with chains on their feet. Some of them died of the effects of torture, but none would renounce his religion. In despair, the Prophet Muhammad advised his companions to quit their native town and take refuge abroad, in Abyssinia, "where governs a just ruler, in whose realm nobody is oppressed" (Ibn Hisham). Dozens of Muslims profited by his advice, though not all. These secret flights led to further persecution of those who remained behind.

23. The Prophet Muhammad [was instructed to call this] religion "Islam," i.e. submission to the will of God. Its distinctive features are two:

24. When a large number of the Meccan Muslims migrated to Abyssinia, the leaders of paganism sent an ultimatum to the tribe of the Prophet, demanding that he should be excommunicated and outlawed and delivered to the pagans to be put to death. Every member of the tribe, Muslim and non-Muslim, rejected the demand (cf. Ibn Hisham). Thereupon the city decided on a complete boycott of the tribe: nobody was to talk to them or have commercial or matrimonial relations with them. The group of Arab tribes called Ahabish, inhabiting the suburbs, who were allies of the Meccans, also joined in the boycott, causing stark misery among the innocent victims: children, men and women, the old, the sick and the feeble. Some of them succumbed, yet nobody would hand over the Prophet to his persecutors. One uncle of the Prophet, Abu Lahab, however, left his tribesmen and participated in the boycott along with the pagans. After three dire years, during which the victims were obliged to devour even crushed hides, four or five non-Muslims, more humane than the rest and belonging to different clans, proclaimed publicly their denunciation of the unjust boycott. At the same time, the document promulgating the pact of boycott, which had been hung in the temple, was found, as Muhammad had predicted, eaten by white ants, which had spared nothing but the words God and Muhammad. The boycott was lifted, yet owing to the privations that had been undergone, the Prophet's wife Khadijah and Abu Talib, the chief of the tribe and uncle of the Prophet, died soon after. This same Abu Lahab, an inveterate enemy of Islam, now succeeded to the headship of the tribe (cf. Ibn Hisham, Sirah).

25. It was at this time that the Prophet Muhammad was granted the mi'raj (ascension): he saw in a vision that he was received in heaven by God, and witnessed the marvels of the celestial regions. Returning, he brought for his community, as a Divine gift, the [ritual prayer of Islam, the salaat], which constitutes a sort of communion between man and God. It may be recalled that in the last part of the Muslim service of worship, the faithful employ, as a symbol of their being in the very presence of God, not concrete objects as others do at the time of communion, but the very words of greeting exchanged between the Prophet Muhammad and God on the occasion of the former's mi'raj: "The blessed and pure greetings for God! - Peace be with thee, O Prophet, as well as the mercy and blessing of God! - Peace be with us and with all the [righteous] servants of God!"
The Christian term "communion" implies participation in the Divinity. Finding it pretentious, Muslims use the term "ascension" towards God and reception in His presence, God remaining God and man remaining man, with no confusion between the twain.

26. The news of this celestial meeting led to an increase in the hostility of the pagans of Mecca, and the Prophet was obliged to quit his native town in search of an asylum elsewhere. He went to his maternal uncles in Ta'if, but returned immediately to Mecca, as the wicked people of that town chased the Prophet out of their city by pelting stones at him and wounding him.

Migration to Madinah

27. The annual pilgrimage to the Ka'bah brought to Mecca people from all parts of Arabia. The Prophet Muhammad tried to persuade one tribe after another to afford him shelter and allow him to carry on his mission of reform. The contingents of fifteen tribes, whom he approached in succession, refused to do so more or less brutally, but he did not despair. Finally he met half a dozen inhabitants of Madinah who, being neighbours of the Jews and the Christians, had some notion of prophets and Divine messages. They knew also that these "people of the Books" were awaiting the arrival of a prophet - a last comforter. So these Madinans decided not to lose the opportunity of obtaining an advance over others, and forthwith embraced Islam, promising further to provide additional adherents and necessary help from Madinah. The following year a dozen new Madinans took the oath of allegiance to him and requested him to provide them with a missionary teacher. The work of the missionary, Mus'ab, proved very successful, and he led a contingent of seventy-three new converts to Mecca at the time of the pilgrimage. These invited the Prophet and his Meccan companions to migrate to their town, and promised to shelter the Prophet and to treat him and his companions as their own kith and kin. Secretly and in small groups, the greater part of the Muslims emigrated to Madinah. Upon this the pagans of Mecca not only confiscated the property of the evacuees, but also devised a plot to assassinate the Prophet. It became now impossible for him to remain at home. It is worthy of mention that, in spite of their hostility to his mission, the pagans had unbounded confidence in his probity, so much so that many of them used to deposit their savings with him. The Prophet Muhammad now entrusted all these deposits to 'Ali, a cousin of his, with instructions to return them in due course to the rightful owners. He then left the town secretly in the company of his faithful friend, Abu-Bakr. After several adventures, they succeeded in reaching Madinah in safety. This happened in 622, whence starts the Hijrah calendar.

Reorganization of the Community

28. For the better rehabilitation of the displaced immigrants, the Prophet created a fraternization between them and an equal number of well-to-do Madinans. The families of each pair of contractual brothers worked together to earn their livelihood, and aided one another in the business of life.

29. Further, he thought that the development of man as a whole would be better achieved if he co-ordinated religion and politics as two constituent parts of one whole. To this end he invited the representatives of the Muslims as well as the non-Muslim inhabitants of the region - Arabs, Jews, Christians and others - and suggested the establishment of a City-State in Madinah.
With their assent, he endowed the city with a written constitution - the first of its kind in the world - in which he defined the duties and rights both of the citizens and of the head of the State (the Prophet Muhammad was unanimously hailed as such), and abolished the customary private justice. The administration of justice became henceforward the concern of the central organisation of the community of citizens. The document laid down principles of defence and foreign policy; it organized a system of social insurance, called ma'aqil, for cases of excessively heavy obligations. It recognized that the Prophet Muhammad would have the final word in all differences, and that there was no limit to his power of legislation. It recognized also, explicitly, liberty of religion, particularly for the Jews, to whom the constitutional act afforded equality with Muslims in all that concerned life in this world (cf. infra n. 303).

30. Muhammad journeyed several times with a view to winning over the neighbouring tribes and concluding with them treaties of alliance and mutual help. With their help, he decided to bring economic pressure to bear on the Meccan pagans, who had confiscated the property of the Muslim evacuees and also caused incalculable damage. Obstruction of the Meccan caravans and their passage through the Madinan region exasperated the pagans, and a bloody struggle ensued.

31. Amid this concern for the material interests of the community, the spiritual aspect was never neglected. Hardly a year had passed after the migration to Madinah when the most rigorous of spiritual disciplines, fasting for the whole month of Ramadan every year, was imposed on every adult Muslim, man and woman.

Struggle Against Intolerance and Unbelief

32. Not content with the expulsion of their Muslim compatriots, the Meccans sent an ultimatum to the Madinans, demanding the surrender or at least the expulsion of Muhammad and his companions, but evidently all such efforts proved in vain. A few months later, in the year 2 H., they sent a powerful army against the Prophet, who opposed them at Badr; and the pagans, thrice as numerous as the Muslims, were routed. After a year of preparation, the Meccans again invaded Madinah to avenge the defeat of Badr. They were now four times as numerous as the Muslims. After a bloody encounter at Uhud the enemy retired, the issue being indecisive. The mercenaries in the Meccan army did not want to take too much risk or endanger their safety.

33. In the meanwhile the Jewish citizens of Madinah began to foment trouble. About the time of the victory of Badr, one of their leaders, Ka'b ibn al-Ashraf, proceeded to Mecca to give assurance of his alliance with the pagans, and to incite them to a war of revenge. After the battle of Uhud, the tribe of the same chieftain plotted to assassinate the Prophet by throwing a mill-stone on him from the top of a tower, when he had gone to visit their locality. In spite of all this, the only demand the Prophet made of the men of this tribe was to quit the Madinan region, taking with them all their properties, after selling their immovables and recovering their debts from the Muslims. The clemency thus extended had an effect contrary to what was hoped: the exiles not only contacted the Meccans, but also the tribes of the North, South and East of Madinah, mobilized military aid, and planned from Khaibar an invasion of Madinah, with forces four times more numerous than those employed at Uhud.
The Muslims prepared for a siege, and dug a ditch to defend themselves against this hardest of all trials. Although the defection, at a later stage, of the Jews still remaining inside Madinah upset all strategy, the Prophet succeeded, by sagacious diplomacy, in breaking up the alliance, and the different enemy groups retired one after the other.

34. Alcoholic drinks, gambling and games of chance were at this time declared forbidden for the Muslims.

35. The Prophet tried once more to reconcile the Meccans, and proceeded to Mecca. The barring of the route of their Northern caravans had ruined their economy. The Prophet promised them transit security, extradition of their fugitives and the fulfillment of every condition they desired, agreeing even to return to Madinah without accomplishing the pilgrimage to the Ka'bah. Thereupon the two contracting parties promised at Hudaibiyah, in the suburbs of Mecca, not only the maintenance of peace, but also the observance of neutrality in their conflicts with third parties.

36. Profiting by the peace, the Prophet launched an intensive programme for the propagation of his religion. He addressed missionary letters to the foreign rulers of Byzantium, Iran, Abyssinia and other lands. The Byzantine priest Dughatur of the Arabs embraced Islam, but for this was lynched by the Christian mob; the prefect of Ma'an (Palestine) suffered the same fate, and was decapitated and crucified by order of the emperor. A Muslim ambassador was assassinated in Syria-Palestine; and instead of punishing the culprit, the emperor Heraclius rushed with his armies to protect him against the punitive expedition sent by the Prophet (battle of Mu'tah).

37. The pagans of Mecca, hoping to profit by the Muslim difficulties, violated the terms of their treaty. Upon this, the Prophet himself led an army, ten thousand strong, and surprised Mecca, which he occupied in a bloodless manner. As a benevolent conqueror, he caused the vanquished people to assemble and reminded them of their ill deeds: their religious persecution, the unjust confiscation of the evacuees' property, and their ceaseless invasions and senseless hostilities over twenty years. He asked them: "Now what do you expect of me?" When everybody lowered his head with shame, the Prophet proclaimed: "May God pardon you; go in peace; there shall be no responsibility on you today; you are free!" He even renounced the claim to the Muslim property confiscated by the pagans. This instantly produced a great psychological change of heart. When a Meccan chief, after hearing this general amnesty, advanced with an overflowing heart towards the Prophet in order to declare his acceptance of Islam, the Prophet told him: "And in my turn, I appoint you the governor of Mecca!" Without leaving a single soldier in the conquered city, the Prophet retired to Madinah. The Islamization of Mecca, accomplished in a few hours, was complete.

38. Immediately after the occupation of Mecca, the city of Ta'if mobilized to fight against the Prophet. With some difficulty the enemy was dispersed in the valley of Hunain, but the Muslims preferred to raise the siege of nearby Ta'if and use pacific means to break the resistance of this region. Less than a year later, a delegation from Ta'if came to Madinah offering submission. But it requested exemption from prayer, taxes and military service, and the continuance of liberty in regard to adultery, fornication and alcoholic drinks.
It demanded even the conservation of the temple of the idol al-Lat at Ta'if. But Islam was not a materialist, immoral movement, and soon the delegation itself felt ashamed of its demands regarding prayer, adultery and wine. The Prophet consented to concede exemption from the payment of taxes and the rendering of military service, and added: "You need not demolish the temple with your own hands; we shall send agents from here to do the job, and if there should be any consequences, which you are afraid of on account of your superstitions, it will be they who will suffer." This act of the Prophet shows what concessions could be given to new converts. The conversion of the Ta'ifites was so wholehearted that in a short while they themselves renounced the contracted exemptions, and we find the Prophet nominating a tax collector in their locality as in other Islamic regions.

39. In all these "wars," extending over a period of ten years, the non-Muslims lost on the battlefield only about 250 persons killed, and the Muslim losses were even less. With these few incisions, the whole continent of Arabia, with its million and more square miles, was cured of the abscess of anarchy and immorality. During these ten years of disinterested struggle, all the peoples of the Arabian Peninsula and the southern regions of Iraq and Palestine had voluntarily embraced Islam. Some Christian, Jewish and Parsi groups remained attached to their creeds, and they were granted liberty of conscience as well as judicial and juridical autonomy.

40. In the year 10 H., when the Prophet went to Mecca for the Hajj (pilgrimage), he met 140,000 Muslims there, who had come from different parts of Arabia to fulfil their religious obligation. He addressed to them his celebrated sermon, in which he gave a resume of his teachings: "Belief in One God without images or symbols, equality of all the Believers without distinction of race or class, the superiority of individuals being based solely on piety; sanctity of life, property and honour; abolition of interest, and of vendettas and private justice; better treatment of women; obligatory inheritance and distribution of the property of deceased persons among near relatives of both sexes, and removal of the possibility of the accumulation of wealth in the hands of the few." The Quran and the conduct of the Prophet were to serve as the bases of law and a healthy criterion in every aspect of human life.

41. On his return to Madinah he fell ill, and a few weeks later, when he breathed his last, he had the satisfaction that he had well accomplished the task which he had undertaken: to preach to the world the Divine message.

42. He bequeathed to posterity a religion of pure monotheism; he created a well-disciplined State out of the existent chaos and gave peace in place of the war of everybody against everybody else; he established a harmonious equilibrium between the spiritual and the temporal, between the mosque and the citadel; he left a new system of law, which dispensed impartial justice, in which even the head of the State was as much subject to it as any commoner, and in which religious tolerance was so great that non-Muslim inhabitants of Muslim countries equally enjoyed complete juridical, judicial and cultural autonomy. In the matter of the revenues of the State, the Quran fixed the principles of budgeting and gave more thought to the poor than to anybody else. The revenues were declared to be in no wise the private property of the head of the State.
Above all, the Prophet Muhammad set a noble example and fully practised all that he taught to others.
Running low on fuel? Just zip to the gas station and fill up your tank. The only trouble is, you won't be able to do that forever, because Earth itself is running low on fuel. Most of the energy we use comes from fossil fuels like oil, gas, and coal, which are gradually running out. Not only that, using these fuels produces air pollution and carbon dioxide—the gas most responsible for global warming. If we want to carry on living our lives in much the same way, we need to switch to cleaner, greener fuel supplies—renewable energy, as it's known. This article is a brief, general introduction; we also have lots of detailed articles about the different kinds of renewable energy you can explore when you're ready.

Photo: Solar energy will come into its own as fossil fuel supplies dwindle and renewables become more economic. But at the moment it supplies only a tiny fraction of world energy. Most of the sun-facing roof of this new school in Swanage, England is covered with 50kW of photovoltaic solar panels (only half of which are shown here). They save around $4000 (£3000) each year in electricity bills—and inspire students to think about where their energy comes from.

Broadly speaking, the world's energy resources (all the energy we have available to use) fall into two types, called fossil fuels and renewable energy:

Fossil fuels are things like oil, gas, coal, and peat, formed over hundreds of millions of years when plants and sea creatures rot away, fossilize, and get buried under the ground, then squeezed and cooked by Earth's inner pressure and heat. Fossil fuels supply about 80–90 percent of the world's energy.

Renewable energy means energy made from the wind, ocean waves, solar power, biomass (plants grown especially for energy), and so on. It's called renewable because, in theory, it will never run out. Renewable sources currently supply about 10–20 percent of the world's energy.

What's the difference between fossil fuels and renewable energy?

In theory, fossil fuels exist in limited quantities and renewable energy is limitless. That's not quite the whole story, however. The good news is that fossil fuels are constantly being formed: new oil is being made from old plants and dead creatures every single day. But the bad news is that we're using fossil fuels much faster than they're being created. It took something like 400 million years to form a planet's worth of fossil fuels, yet humankind will use something like 80 percent of Earth's entire fossil fuel supplies in only the 60 years spanning from 1960 to 2020. When we say fossil fuels such as oil will "run out," what we actually mean is that demand will outstrip supply to the point where oil will become much more expensive to use than alternative, renewable fuel sources.
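To get a feel for the mismatch between those two timescales, here is a quick back-of-the-envelope sketch using only the round numbers quoted above (400 million years of formation, 80 percent consumed in 60 years); treat the result as a rough order of magnitude, not a precise figure.

```python
# Rough comparison of how fast we burn fossil fuels vs. how fast they formed,
# using the article's round numbers. Both rates are expressed as the fraction
# of Earth's total fossil fuel endowment per year.
formation_years = 400e6     # years it took to form the whole supply
used_fraction = 0.80        # fraction of the supply burned between 1960 and 2020
consumption_years = 60

burn_rate = used_fraction / consumption_years   # fraction of supply per year
form_rate = 1.0 / formation_years               # fraction of supply per year

print(f"Consumption outpaces formation by roughly {burn_rate / form_rate:,.0f} to 1")
# Prints: Consumption outpaces formation by roughly 5,333,333 to 1
```

In other words, on these numbers we are drawing down the stock several million times faster than nature refills it, which is why "constantly being formed" offers no practical comfort.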
Just as fossil fuel supplies aren't exactly finite, neither is renewable energy completely infinite. One way or another, virtually all forms of renewable energy ultimately come from the Sun, and that massive energy source will, one day, burn itself out. Fortunately, that won't happen for a few billion years, so it's reasonable enough to talk of renewable energy as being unlimited.

Fossil fuels versus renewables

Chart: Percentage of total US energy supplied by different fossil fuels and renewables in 2019. Source: Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration, US Department of Energy. Data published April 2020. Note that figures are individually rounded and may not add to 100%.

Different countries get their energy from different fuels. In the Middle East, there's more reliance on oil, as you'd expect, while in Asia, coal is more important. In the United States, the breakdown looks like this. From the pie chart, you can see that about 80% of US energy still comes from fossil fuels (down from 84% in 2008 and virtually unchanged since 2014), while the remainder comes from renewables and nuclear. Looking at the renewables alone, in the bar chart on the right, you can see that wind, hydroelectric, and biomass provide the lion's share. Wind and solar provide just over a third of US renewable energy and are steadily increasing in importance: solar currently provides 9 percent of total US renewable energy (up from 4 percent in 2014), while wind provides 24 percent (up from 18 percent in 2014). That doesn't sound too bad, but remember that renewables make up just 11 percent of our total energy use. So solar provides about 1 percent of the total energy and wind provides about 2.6 percent. Renewables have increased from 7% to 11% of the total since 2008, which is a much bigger increase than it might sound. But don't forget the bottom line: 80 percent of our energy is still coming from fossil fuels. Please note that these charts cover total energy and not just electricity.
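The percentage juggling in that paragraph trips a lot of readers up, so here is the arithmetic spelled out: a fuel's share of renewable energy has to be multiplied by renewables' share of total energy before the two kinds of percentages can be compared. A minimal sketch using the 2019 figures above:

```python
# A fuel's slice of *renewable* energy, scaled by renewables' slice of *total*
# US energy, using the 2019 figures quoted in the text.
renewables_share_of_total = 0.11    # renewables as a fraction of all US energy
solar_share_of_renewables = 0.09
wind_share_of_renewables = 0.24

solar_total = solar_share_of_renewables * renewables_share_of_total
wind_total = wind_share_of_renewables * renewables_share_of_total

print(f"Solar: {solar_total:.1%} of total US energy")   # about 1.0%
print(f"Wind:  {wind_total:.1%} of total US energy")    # about 2.6%
```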
What are the different types of renewable energy?

Almost every source of energy that isn't a fossil fuel is a form of renewable energy. Here are the main types.

Solar power: For as long as the Sun blazes (roughly another 4–5 billion years), we'll be able to tap the light and heat it shines in our direction. We can use solar power in two very different ways: electric and thermal. Solar electric power (sometimes called active solar power) means taking sunlight and converting it to electricity in solar cells (which work electronically). This technology is sometimes also referred to as photovoltaic (photo = light and voltaic = electric, so photovoltaic simply means making electricity from light) or PV. Solar thermal power (sometimes called passive-solar energy or passive-solar gain) means absorbing the Sun's heat into solar hot water systems or using it to heat buildings with large glass windows.

Photo: This relatively small wind turbine, in Staffordshire, England makes up to 225kW of electricity, which is about enough to power 100 electric kettles or toasters at the same time. The world's most powerful wind turbines can make a maximum of about 8–10 megawatts (8000–10,000 kilowatts), which is about 35–45 times as much as this one.

Wind power: Depending on where you live, you've probably seen wind turbines appearing in the landscape in recent years. There are loads of them in the United States and Europe, for example. A turbine is any machine that removes kinetic energy from a moving fluid (liquid or gas) and converts it into another form. Windmills, based on this idea, have been widely used for many hundreds of years. In a modern wind turbine, a huge rotating blade (similar to an airplane propeller) spins around in the wind and turns an electricity generator mounted in the nacelle (metal casing) behind. It takes roughly several thousand wind turbines to make as much power as one large fossil fuel power plant. Wind power is actually a kind of solar energy, because the winds that whistle round Earth are made when the Sun heats different parts of our planet by different amounts, causing huge air movements over its surface.

Hydroelectric power: "Hydro" means water, so hydroelectricity means making electricity using water—not from the water itself, but from the kinetic energy in a moving river or stream. Rivers start their lives in high ground and gradually flow downhill to the sea. By damming them, we can make huge lakes that drain slowly past water turbines, generating energy as they go. Water wheels used in medieval times to power mills were an early example of hydro power. You could describe them as hydromechanical, since the water power the milling machines used was transmitted by an elaborate system of wheels and gears. Like wind power, hydroelectric power is (indirectly) another kind of solar energy, because it's the Sun's energy that drives the water cycle, endlessly exchanging water between the oceans and rivers on Earth's surface and the atmosphere.

Ocean power: The oceans have vast, untapped potential that we can use in three main ways: wave power, tidal barrages, and thermal power. Wave power uses mechanical devices that rock back and forth or bob up and down to extract the kinetic energy from moving waves and turn it into electricity. Surfers have known all about wave power for many decades! Tidal barrages are small dams built across estuaries (the points on the coast where rivers flow into the sea and vice versa). As tides move back and forth, they push huge amounts of water in and out of estuaries at least twice a day. A barrage with turbines built into it can capture the energy of tidal water as it flows back and forth. The world's best-known tidal barrage is at La Rance in France; numerous plans to build a much bigger barrage across the Severn Estuary in England have been outlined, on and off, for almost a century. Thermal power involves harnessing the temperature difference between warm water at the surface of the oceans and cold water deeper down. In a type of thermal power called ocean thermal energy conversion (OTEC), warmer surface water flows into the top of a giant column (perhaps 450m or 1500ft tall), mounted vertically some miles out to sea, while cooler water flows into the bottom. The hot water drives a turbine and makes electricity, before being cooled down and recycled. It's estimated that there is enough thermal energy in the oceans to supply humankind's entire needs, though little of it is recovered at the moment.

Biomass: Biomass is a fashionable, fancy word that really just means plants (or other once-living things) used as fuel (especially ones grown specifically for that reason). Wood fuel gathered by people in an African country is biomass; biofuels such as ethanol, used to fuel car engines, are also biomass; and chicken manure used to fire power plants is biomass too. The great thing about biomass is that it's a kind of renewable energy: plants grow using sunlight, which they convert into chemical energy and store in their roots, shoots, and leaves. Burning biomass releases most of that energy as heat, which we can use to warm our homes, generate electricity, and fuel our vehicles.

Chart: Biomass (including wood) supplied about 5 percent of the total energy used in the United States in 2019 and 44 percent of the renewable energy (19% + 25% in the chart higher up the page). Although you might expect most of it comes from wood, quite a lot comes from biofuels (mostly ethanol) as well.
Use of biofuels has been increasing for years, notching up 45 percent of the total in 2019, though it has fallen slightly from the 2016 figure of 48 percent in response to growing environmental concerns. Wood comes in at 46 percent (up from 41 percent in 2016). Figures don't add to 100 percent because of rounding. Data from the US Energy Information Administration, 2020.

Biomass is more environmentally friendly and sustainable than fuels such as coal for three main reasons:

- Unlike coal (which takes many millions of years to form from plant remains), biomass can be produced very quickly, and we can easily grow new plants or trees to replace the ones we cut down and burn (in other words, biomass can be genuinely renewable).
- Plants absorb as much carbon dioxide from the air when they grow as they release when they burn, so in theory there is no net carbon dioxide released and burning biomass does not add to the problem of global warming. (That's why biomass is sometimes called a carbon neutral form of energy.) I say "in theory" because in practice growing, harvesting, and transporting biomass may use energy (tractors or trucks running on oil might well be involved, for example), and that reduces the overall environmental benefit. Also, new young trees don't absorb as much carbon dioxide as the older trees that are cut down.
- Biomass is often simply wasted or sent to landfill. Burning something like waste wood offcuts from a lumber yard or chicken manure from a poultry factory not only gives us energy, it also reduces the waste we'd otherwise need to dispose of.

What is a biomass furnace?

Photo: Stoves have evolved quite a bit, but they haven't changed all that much over the years. Modern woodburners produce less indoor pollution than old stoves, but they still throw significant amounts of pollution outside.

People tend to burn biomass in two ways. The simplest method is to use a wood-burning stove, an enclosed metal box made from something like cast iron, with opening doors at the front where the fuel is loaded up and a small chimney called a flue to carry away carbon dioxide, smoke, steam, and so on. This generally provides heat in a single room, much like a traditional coal fire.

A biomass furnace is a more sophisticated option that can heat an entire building. Unlike a wood-burning stove, a biomass furnace does the same job as a central-heating furnace (boiler) powered by natural gas, oil, or electricity: it can provide both your home heating and hot water, and it can even power modern underfloor central heating. It's not like a dirty and labor-intensive coal fire, and it doesn't require huge amounts of starting up, cleaning, or maintenance. All you have to do is load in your biomass (generally, you'd use wood pellets, wood chips, chopped logs, cereal plants, or a combination of them) and periodically (typically every 2–8 weeks, depending on the appliance) empty out the ash, which you can recycle on your compost heap.

Photo: This large biomass generator turns woodchips into electricity. Photo by Jim Yost courtesy of US Department of Energy/NREL.

While wood-burning stoves have to be manually filled up with logs, biomass furnaces are often completely automated: they have a large fuel hopper on the side that automatically tops up the furnace whenever necessary. Unlike with a coal fire, you don't have to mess around trying to get the fuel lit: biomass furnaces have simple, electric ignition systems that do it all for you.
It's perfectly possible to run a system like this all year round, but in summertime, when you don't need home heating, it might be excessive to have your furnace running purely to make hot water. Many people switch off their furnaces entirely for the summer months, relying on solar thermal hot water systems (glass panels on the roof that warm up water using the Sun's heat), electrical immersion heaters (a heating element fitted inside a hot water tank), or an electric shower to tide them through until fall or winter. It's perfectly possible to couple together a biomass furnace with a solar hot-water panel so the furnace switches on when the panel can't produce enough hot water for your needs.

Photo: Biomass furnaces scale up very well: this is a 50-megawatt power plant in Burlington, Vermont that produces electricity for local people using wood fuel. Photo by Dave Parsons courtesy of US Department of Energy/NREL.

Biomass furnaces and wood-burning stoves are generally considered to be far more environmentally friendly than home heating systems powered by fossil fuels, but one drawback is worth bearing in mind: burning biomass is cleaner than burning coal but still produces air pollution. If you're considering buying a biomass stove or furnace, ask about emissions (sales brochures usually mention how much dust, carbon monoxide, and oxides of nitrogen appliances produce); and be sure to find out whether there are pollution or other planning restrictions in your area before you commit yourself to an expensive purchase. And as with any form of home heating that involves burning fuel, be absolutely sure to install a carbon monoxide detector for your own safety: badly ventilated heating appliances can kill, whether they're environmentally friendly or not!

Geothermal energy:

Photo: A geothermal electricity generator in Imperial County, California. Photo by Warren Gretz courtesy of US Department of Energy/National Renewable Energy Laboratory (DOE/NREL).

Earth may feel like a pretty cold place at times but, inside, it's a bubbling soup of molten rock. Earth's lower mantle, for example, is at temperatures of around 4500°C (8000°F). It's relatively easy to tap this geothermal (geo = Earth, thermal = heat) energy using technologies such as heat pumps, which drive cold water deep down into Earth and pipe hot water back up again. Earth's entire geothermal supplies are equivalent to the energy you could get from about 25,000 large power plants!

Nuclear energy: Conventional nuclear energy is not renewable: it's made by splitting up large, unstable atoms of a naturally occurring chemical element called uranium. Since you have to feed uranium into most nuclear power plants, and dig it out of the ground before you can do so, traditional forms of nuclear fission (the scientific term for splitting big atoms) can't be described as renewable energy. In the future, scientists hope to develop an alternative form of nuclear energy called nuclear fusion (making energy by joining small atoms), which will be cleaner, safer, and genuinely renewable.

Fuel cells: If you want to use renewable power in a car, you have to swap the gasoline or diesel engine for an electric motor. Driving an electric car doesn't necessarily make you environmentally friendly: what if you charge the batteries at home and the electricity you're using comes from a coal-fired power plant? One alternative is to swap the batteries for a fuel cell, which is a bit like a battery that never runs flat, making electricity continuously using a tank of hydrogen gas.
Hydrogen is cheap and easy to make from water with an electrolyzer. Fuel cells are quiet, powerful, and make no pollution; probably the worst thing they do is puff steam from their exhausts!

How can New York City go renewable?

Talking about "renewable energy" can be very abstract. It sounds great in theory, and no-one would disagree with using more environmentally friendly forms of power, but what would it actually mean in practice? Suppose I make you Mayor of New York City (NYC) for a week and we agree that your top priority is to figure out how to power the entire city with renewable energy. How are you going to deliver eco-friendly electricity to one of the world's biggest cities?

How much energy do we need?

First off, you'll need to know how much energy the city uses. The amount is going to go up and down, and you'll need to be able to meet huge peaks in demand as well as day-to-day, average power. But let's just worry about the average power for now. A quick bit of searching reveals that NYC's average power demand is of the order of 5 gigawatts [Source: Accent Energy]. It may be more or less, but for this exercise it really doesn't matter. What does 5 gigawatts actually mean? 5 gigawatts is the same as 5,000 megawatts, 5 million kilowatts, or 5 billion watts. A big old-fashioned (incandescent) lamp uses about 100 watts, so NYC is consuming the same amount of energy as 50 million of those lamps glowing at the same time. If you prefer, think of an electric toaster, which uses about 2500 watts. NYC is like 2 million toasters burning away all at once—a line of toasters stretching 500 km (roughly 300 miles) into the distance! It sounds like we're talking about an awful lot of energy!

How do we make that much energy right now?

And yet... five gigawatts is actually not as much as it sounds. A big, coal-fired power plant could make about two gigawatts, so you'd need about 3 coal stations to power the city (4 to be on the safe side). Nuclear plants typically produce less (maybe 1–1.5 gigawatts), but a big nuclear station like Indian Point (just outside NYC) can make two gigawatts. So going nuclear, you could manage with perhaps 3–6 good-sized plants. See how easy it is to power a city the old way? You only need a handful of big old power plants.

Artwork: It takes about 1000 wind turbines (1000 small blue dots), working at full capacity, to make as much power as a single coal-fired power plant (one big black dot).

How could we make that much energy with renewables?

This is where it starts to get tricky. Let's say you're keen on wind turbines. Great! How are you going to power NYC with wind? We need 5 gigawatts of power, and a modern turbine will deliver about 1–2 megawatts when it's working at full capacity. So you'll need a minimum of 2500–5000 wind turbines—and an awful lot of land to put them on. Is it doable? One of the world's biggest wind farms, at Altamont Pass in California, has almost 5000 small turbines and produces only 576 megawatts, which is about 11 percent of what we need for NYC. Now these are mostly old turbines, they're really quite puny by modern standards, and we could certainly build much bigger and more powerful ones—but, even so, powering NYC with wind alone seems to be a fairly tall order.
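Here is the same turbine arithmetic as a tiny script, so you can swap in your own assumptions; the demand and per-turbine figures are the ones quoted above, and the capacity-factor note at the end is an extra caveat of mine, not a number from this article.

```python
# Wind turbines needed to meet NYC's ~5 GW average demand, using the
# article's figure of 1-2 MW per modern turbine at full capacity.
nyc_demand_w = 5e9                    # 5 gigawatts

for turbine_w in (1e6, 2e6):          # 1 MW and 2 MW machines
    count = nyc_demand_w / turbine_w
    print(f"{turbine_w / 1e6:.0f} MW turbines: {count:,.0f} needed")
# Prints 5,000 and 2,500. Real turbines average well below full capacity
# (a typical capacity factor might be 25-40%), which would multiply these
# counts by roughly 2.5-4x.
```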
What about solar power? For simplicity, let's assume NYC is full of ordinary houses (and not huge skyscrapers). Cover the roof of a typical house with photovoltaic (solar-electric) panels and you might generate 5 kilowatts (5,000 watts) of power; stick those panels on a larger, municipal building and you might get three or four times as much. Let's assume every building could make 10 kilowatts for us. To generate 5 gigawatts, we'd need 500,000 buildings generating electricity all the time. That sounds like another tall order.

What other options do we have? How about harnessing the tidal power of the East River? That's been done already: six turbines installed between 2006 and 2008 produce, altogether, about 200 kilowatts of the power used in Manhattan. [Source: Tidal Turbines Help Light Up Manhattan, MIT Technology Review, April 23, 2007.] That's a good start, but we'd need something like 140,000 of these turbines to generate our 5 gigawatts! There simply isn't enough power in the river. Gulp.
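The same quick check works for the solar and tidal options. This sketch just replays the numbers quoted above (an assumed 10 kW per building, and six East River turbines sharing about 200 kW), so the per-turbine output is an inferred average rather than a published specification.

```python
# Buildings (at an assumed 10 kW each) and East River-style tidal turbines
# needed to cover NYC's ~5 GW average demand, using the figures quoted above.
nyc_demand_w = 5e9

building_w = 10e3                     # assumed 10 kW per building
per_turbine_w = 200e3 / 6             # six turbines share ~200 kW (~33 kW each)

print(f"Solar buildings needed: {nyc_demand_w / building_w:,.0f}")    # 500,000
print(f"Tidal turbines needed:  {nyc_demand_w / per_turbine_w:,.0f}")
# Prints 150,000; the article rounds this to "something like 140,000".
```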
None of this is meant to put you off renewable energy; as far as I'm concerned, the world can't get away from fossil fuels fast enough. But looking at the science and the numbers, it's clear that if we're going to use renewables, and only renewables, we need an awful lot of them. Switching to renewables means building many thousands (and maybe hundreds of thousands) of separate power-generating units.

If you want to make a difference to the planet by making more use of renewable energy, what's the best way to do it? Given that you spend quite a lot of the money you earn on energy, try to direct that money where it will have the biggest effect. Here are some simple tips.

Switching supplier: If you get most of your energy from electricity, you can switch supplier (or tariff) to one that uses more renewable power. Sometimes this is less effective than it sounds. If your supplier mainly operates hydroelectric power plants and you switch from its ordinary power tariff to a green tariff, will you actually be increasing the amount of green power in the world or simply paying the company more money for doing exactly the same as it was doing before? A better option is to switch to a smaller supplier building new wind turbines or solar plants. That way, you'll be helping the company to invest in more renewable energy and helping to switch the world away from fossil fuels.

Making your own power: If you have more money to spend, you could investigate making some of your own energy by installing something like photovoltaic solar panels, a solar hot water system, or a ground-source heat pump. Since you'll be using less energy from utilities, making power this way saves money and helps the environment too. Although making your own power pays for itself eventually, the initial investment in turning your house into an eco home can be costly. But do your homework, and you have the reassurance of knowing that the money you're spending is helping to reduce your own use of fossil fuels.

Using more by using less: The easiest way to save the planet is to use resources more wisely. If you can't find a way to use more renewable energy, you can still try to use less conventional energy (from fossil fuels). Being more efficient is surprisingly quick and easy and often costs nothing at all. It costs nothing, for example, to share your car with a friend, and getting a bus or a train often saves you money as well as energy. Heat insulating your home is another good way of saving energy (and money) at little or no cost, while turning down your thermostat (and putting on an extra layer of clothing) is something anyone can do without spending so much as a cent. Try switching to energy-efficient light bulbs (LEDs are more efficient than CFLs and now just as cheap) and use energy monitors to help you measure and cut the cost of your other appliances. You can save money in your car too by giving some thought to fuel efficiency.
Some vegans believe meat causes cancer and destroys the planet. But meat-eaters often argue that giving up animal foods leads to nutritional deficiencies. Both sides say their approach is healthier. What does science say? And how can you best help clients, no matter their dietary preferences? Keep reading for the answers.

Put a group of vegans and Paleo enthusiasts in the same social media thread, and one thing is nearly 99 percent certain: they'll start arguing about food. "Meat causes cancer!" "You need meat for B12!" "But meat production leads to climate change!" "Meat-free processed food is just as bad!" And on it will go. Let's just say that, when it comes to the vegan vs. meat-eater debate, people have thoughts, and they feel strongly about them.

So which approach is right for you? And what should you tell your clients? As it turns out, the answers to those questions are nuanced. In this article, you'll find our take on the vegetarian vs. meat-eater debate, which you may find surprising—potentially even shocking—depending on your personal beliefs. You'll learn:

- The real reasons plant-based diets may lower risk for disease.
- Whether eating red and processed meat raises risk for certain diseases.
- How to eat for a better planet.
- Why some vegetarians feel better when they start eating meat—and, conversely, why some meat-eaters feel better when they go vegetarian.
- How to help your clients (or yourself) weigh the true pros and cons of each eating approach.

Vegan vs. vegetarian vs. plant-based vs. omnivore: What does it all mean?

Different people use plant-based, vegetarian, vegan, and other terms in different ways. For the purposes of this article, here are the definitions we use at Precision Nutrition.

Plant-based diet: Some define this as "plants only." But our definition is broader. For us, plant-based diets consist mostly of plants: vegetables, fruits, beans/legumes, whole grains, nuts, and seeds. In other words, if you consume mostly plants with some animal-based protein, Precision Nutrition would still consider you a plant-based eater.

Whole-food plant-based diet: A type of plant-based diet that emphasizes whole, minimally processed foods.

Fully plant-based / plant-only diet: These eating patterns include only foods from the plant/fungi kingdom, without any animal products. Fully plant-based eaters don't consume meat or meat products, dairy, or eggs. Some consume no animal byproducts at all—including honey.

Vegan diet: A type of strict, fully plant-based diet that tends to include broader lifestyle choices, such as not wearing fur or leather. Vegans often attempt to avoid actions that bring harm or suffering to animals.

Vegetarian diet: "Vegetarian" is an umbrella term that includes plant-only diets (fully plant-based / plant-only / vegan) as well as several other plant-based eating patterns:

- Lacto-ovo vegetarians consume dairy and eggs.
- Pesco-pollo vegetarians eat fish, shellfish, and chicken.
- Pescatarians eat fish and shellfish.
- Flexitarians eat mostly plant foods as well as occasional, small servings of meat. A self-described flexitarian seeks to decrease meat consumption without eliminating it entirely.

Omnivore: Someone who consumes a mix of animals and plants.

Now that we know what the terms mean, let's turn to the controversy at hand.

The Health Benefits of Vegetarian vs. Omnivore Diets

Many people assume that one of the big benefits of plant-only diets is this: they reduce risk for disease. And a number of studies support this.
For example, when researchers in Belgium asked nearly 1,500 vegans, vegetarians, semi-vegetarians, pescatarians, and omnivores about their food intake, they found that fully plant-based eaters scored highest on the Healthy Eating Index, which is a measure of dietary quality. Omnivores (people who eat at least some meat) scored lowest on the Healthy Eating Index, and the other groups scored somewhere in between. Meat eaters were also more likely than other groups to be overweight or obese.1 Other research has also linked vegetarian diets with better health indicators, ranging from blood pressure to waist circumference.2

So, is the case closed? Should we all stop eating steaks, drinking lattes, and making omelets? Not necessarily. That's because your overall dietary pattern matters a lot more than any one food does. Eat a diet rich in the following foods and food groups, and it likely doesn't matter all that much whether you include or exclude animal products:

- minimally-processed whole foods
- fruits and vegetables
- protein-rich foods (from plants or animals)
- whole grains, beans and legumes, and/or starchy tubers (for people who eat starchy carbs)
- nuts, seeds, avocados, extra virgin olive oil, and other healthy fats (for people who eat added fats)

Of the foods we just mentioned, most people—and we're talking more than 90 percent—do not consume enough of one category in particular: fruits and vegetables. Fewer than 10 percent of people, according to the Centers for Disease Control, eat 1.5 to 2 cups of fruit and 2 to 3 cups of vegetables a day.3 In addition, other research has found that ultra-processed foods (think chips, ice cream, soda pop, etc.) now make up nearly 60 percent of all calories consumed in the US.4

Fully plant-based eaters score higher on the Healthy Eating Index not because they forgo meat, but rather because they eat more minimally-processed whole plant foods such as vegetables, fruits, beans, nuts, and seeds. Since it takes work—label reading, food prep, menu scrutiny—to follow this eating style, they may also be more conscious of their food intake, which leads to healthier choices. (Plant-based eaters also tend to sleep more and watch less TV, which can also boost health.) And meat-eaters score lower not because they eat meat, but because of a low intake of whole foods such as fish and seafood, fruit, beans, nuts, and seeds. They also have a higher intake of refined grains and sodium—two words that usually describe highly-processed foods. Meat-eaters, other research shows, also tend to drink and smoke more than plant-based eaters.5

In other words, meat may not be the problem. A diet loaded with highly-processed "foods" and virtually devoid of whole, plant foods, on the other hand, is a problem, regardless of whether the person following that diet eats no meat, a little meat, or a lot of meat.6

Now check out the middle of the Venn diagram below. It highlights the foundational elements of a healthful diet that virtually everyone agrees on, no matter what their preferred eating style. These are the nutritional choices that have the greatest positive impact on your health.

Does meat cause cancer?

For years, we've heard that meat-eating raises risk for cancer, especially when it comes to red and processed meat. And research suggests that red and processed meat can be problematic for some people.
Processed meat—lunch meat, canned meat, and jerky—as well as heavily grilled, charred, or blackened red meat can introduce a host of potentially carcinogenic compounds to our bodies.7,8 (This article offers a deeper dive into these compounds.)

Several years ago, after reviewing more than 800 studies, the International Agency for Research on Cancer (IARC), a part of the World Health Organization, determined that each daily 50-gram portion of processed meat—roughly the amount of one hot dog or six slices of cooked bacon—increased risk of colon cancer by 18 percent. They listed red meat as "probably carcinogenic" and processed meat as "carcinogenic," putting it in the same category as smoking and alcohol.9 So no more bacon, baloney, salami, or hot dogs, right? Again, maybe not.

First, we want to be clear: We don't consider processed meat a health food. In our Precision Nutrition food spectrum, we put it in the "eat less" category. But "eat less" is not the same as "eat never." Why? Several reasons.

First, the research is a bit murky. Several months ago, the Nutritional Recommendations international consortium, made up of 14 researchers in seven countries, published five research reviews based on 61 population studies of more than 4 million participants, along with several randomized trials, to discern the link between red meat consumption and disease. Cutting back on red meat offered a slim benefit, found the researchers, resulting in 7 fewer deaths per 1000 people for red meat and 8 fewer deaths per 1000 people for processed meat.10 (The study's main author, though, has been heavily criticized for having ties to the meat industry. Some people have also questioned his methods. This article provides an in-depth analysis.) Overall, the panel suggested that adults continue their current red meat intake (both processed and unprocessed), since they considered the evidence against both types of meat to be weak, with a low level of certainty. In their view, for the majority of individuals, the potential health benefits of cutting back on meat probably do not outweigh the tradeoffs, such as:

- impact on quality of life
- the burden of modifying cultural and personal meal preparation and eating habits
- challenging personal values and preferences

Second, the IARC does list processed meat in the same category as cigarettes—because both do contain known carcinogens—but the degree to which they increase risk isn't even close. To fully explain this point, we want to offer a quick refresher on two statistical terms—"relative risk" and "absolute risk"—that many people tend to confuse.

Relative risk vs. absolute risk: What's the difference?

In the media, you often hear that eating X or doing Y increases your risk for cancer by 20, 30, even 50 percent or more. Which sounds terrifying, of course. But the truth? It depends on what kind of risk they're talking about: relative risk or absolute risk. (Hint: It's usually relative risk.) Let's look at what each term means and how they relate to each other.

Relative risk: How much more likely something (such as cancer) becomes in a group exposed to a variable (such as red meat), compared to a group that isn't exposed. As noted earlier, on average, studies have found that every 50 grams of processed red meat eaten daily raises relative risk for colon cancer by about 18 percent.11 Like we said, that certainly sounds scary. But keep reading, because it's not as dire as it seems.
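Before we define the second term, it helps to run the numbers. Here's a minimal sketch of the arithmetic, using the 18 percent figure above and the roughly 5 percent baseline lifetime risk of colon cancer discussed next (both are rounded, illustrative figures, not precise epidemiology):

```python
# A quick sketch of the relative-vs-absolute risk arithmetic.
# Inputs: a ~5% baseline lifetime risk of colon cancer and the
# IARC's ~18% relative increase per daily 50 g serving of
# processed meat. Both are round, illustrative figures.

baseline_risk = 0.05        # lifetime absolute risk without the exposure
relative_increase = 0.18    # +18% relative risk

new_risk = baseline_risk * (1 + relative_increase)

print(f"Baseline lifetime risk:       {baseline_risk:.1%}")
print(f"Risk with daily 50 g serving: {new_risk:.1%}")                  # ~5.9%
print(f"Absolute increase:            {new_risk - baseline_risk:.1%}")  # ~0.9 points
```

An 18 percent relative bump on a small baseline works out to less than 1 percentage point of absolute risk. That distinction is exactly what the next definition captures.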
Absolute risk: The amount that something (such as red meat) will raise your total risk of developing a problem (such as cancer) over time. Your absolute risk for developing colon cancer is about 5 percent over your lifetime. If you consume 50 grams of processed red meat daily, your absolute risk goes up to about 6 percent (5 × 1.18 ≈ 5.9, rounded up). This is roughly a 1 percentage point rise in absolute risk. (And going from 5 percent to 5.9 percent is, you guessed it, an 18 percent relative increase.)

So, back to smoking. Smoking doubles your risk of dying in the next 10 years. Smoking, by the way, also accounts for 30 percent of all cancer deaths, killing more Americans than alcohol, car accidents, suicide, AIDS, homicide, and illegal drugs combined. That's a lot more extreme than the roughly 1 percentage point increase in lifetime risk you'd have by eating a daily hot dog.

Finally, how much red and processed meat raises your risk for disease depends on other lifestyle habits—such as exercise, sleep, and stress—as well as other foods you consume. Getting plenty of sleep, exercising regularly, not smoking, and eating a diet rich in vegetables, fruits and other whole foods can mitigate your risk. Is processed meat the best option around? No. Must you completely part ways with bacon, ham, and franks? No. If you have no ethical issues with eating animals, there's no need to ban red and processed meat from your dinner plate. Just avoid displacing other healthy foods with meat. And keep intake moderate.

Think of it as a continuum. Rather than eating less meat, you might start by eating more fruits and vegetables. You might go on to swap in whole, minimally-processed foods for ultra-processed ones. Then you might change the way you cook meat, especially the way you grill. And then, if you want to keep going, you might look at reducing your intake of processed and red meat.

Okay, but at least plants are better for the planet. Right?

The answer, yet again, is pretty nuanced. Generally speaking, consuming protein from animals is less efficient than getting it straight from plants. On average, only about 10 percent of what farm animals eat comes back in the form of meat, milk, or eggs. Unlike plants, animals also produce waste and methane gases that contribute to climate change. "Raising animals for slaughter requires a lot of resources and creates a lot of waste," explains Ryan Andrews, MS, MA, RD, CSCS, author of A Guide to Plant-Based Eating and adjunct professor at SUNY Purchase. For those reasons, a gram of protein from beef produces roughly 7.5 times more carbon than does a gram of protein from plants. Cattle contribute about 70 percent of all agricultural greenhouse gas emissions, while all plants combined contribute just 4 percent.12

But that doesn't necessarily mean you must completely give up meat in order to save the planet. (Unless, of course, you want to.) For a 2019 study in the journal Global Environmental Change, researchers from Johns Hopkins and several other universities looked at the environmental impact of nine eating patterns ranging from fully plant-based to omnivore.13 Notably, they found:

- Reducing meat intake to just one meal a day cuts your environmental impact more than a lacto-ovo vegetarian diet does.
- An eating pattern that includes small creatures low on the food chain—think fish, mollusks, insects, and worms—has an environmental impact similar to that of a 100 percent plant-only diet.

In other words, if reducing your environmental impact is important to you, you don't necessarily need to go fully plant-based to do it.
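Before we get to the strategies, here's a back-of-envelope sketch of why partial swaps already capture much of the benefit. The only figure taken from the article is the roughly 7.5x beef-vs-plant ratio per gram of protein; the plant emission factor in the code is an assumed placeholder, so read the outputs as illustrative, not as measured values.

```python
# Back-of-envelope sketch of protein-source emissions. The only number
# taken from the article is the ~7.5x beef-vs-plant ratio per gram of
# protein; the plant emission factor itself is an assumed placeholder.

PLANT_KG_CO2E_PER_G_PROTEIN = 0.02                       # assumed placeholder
BEEF_KG_CO2E_PER_G_PROTEIN = 7.5 * PLANT_KG_CO2E_PER_G_PROTEIN

def yearly_emissions(grams_per_day: float, factor: float) -> float:
    """Rough kg CO2e per year for a given daily protein intake."""
    return grams_per_day * factor * 365

protein = 50  # grams of protein per day from one source
all_beef = yearly_emissions(protein, BEEF_KG_CO2E_PER_G_PROTEIN)
all_plant = yearly_emissions(protein, PLANT_KG_CO2E_PER_G_PROTEIN)
half_swap = (all_beef + all_plant) / 2

print(f"All beef:  {all_beef:,.0f} kg CO2e/yr")
print(f"All plant: {all_plant:,.0f} kg CO2e/yr")
print(f"Half swap: {half_swap:,.0f} kg CO2e/yr")
```

Whatever the exact factors, the structure of the arithmetic is the point: because beef carries most of the footprint, replacing even half of it closes nearly half the gap between the two diets.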
You could instead try any of the strategies below. (And if you're not interested in taking environmental actions right now, that's totally okay, too. Ultimately, that's a personal choice.)

5 ways to reduce the environmental impact of your diet

1. Limit your meat intake. Consider capping your consumption at 1 to 3 ounces of meat or poultry a day and your consumption of all animal products at no more than 10 percent of total calories, suggests Andrews. For most people, this one strategy will reduce meat intake by more than half. Replacing meat with legumes, tubers (such as potatoes), roots, whole grains, mushrooms, bivalves (such as oysters), and seeds offers the most environmental bang for your buck.

2. Choose sustainably raised meat, if possible. Feedlot animals are often fed corn and soy, which are generally grown as heavily-fertilized monocrops. (Monocropping means growing the same crop on the same soil, year after year.) These sorts of heavily fertilized crops lead to emissions of nitrous oxide, a greenhouse gas, but crop rotation (changing the crops that are planted from season to season) can reduce these greenhouse gases by 32 to 315 percent.14 Cattle allowed to graze on grasses (which requires a considerable amount of land), on the other hand, offer a more sustainable option, especially if you can purchase the meat locally.

3. Eat more meals at home. Homemade meals require less packaging than commercially-prepared ones, and they also tend to result in less food waste.

4. Purchase locally-grown foods. In addition to reducing transportation miles, local crops tend to be smaller and more diversified. Veggies grown in soil also produce fewer emissions than veggies grown in greenhouses that use artificial lights and heating sources.

5. Slash your food waste. As food rots in landfills, it emits greenhouse gases. "Wasted food is a double environmental whammy," explains Andrews. "When we waste food, we waste all of the resources that went into producing the food. When we send food to the landfill, it generates a lot of greenhouse gases."

What about meat substitutes like the Impossible Burger?

Made from plant proteins (usually wheat, pea, lentil, or soy)—plus, in the Impossible Burger's case, heme (the iron-containing compound that makes meat red)—several meat-like foods have popped up recently, including the Impossible Burger and the Beyond Meat Burger. So should you give up beef burgers and opt to eat only Impossible Burgers (or another plant-based brand) instead? The answer depends on how much you like beef burgers. That's because the Impossible Burger is not healthier than a beef burger. It's just another option. It contains roughly the same number of calories and the same amount of saturated fat as a beef burger. It also has more sodium and less protein. And, much like breakfast cereal, it's fortified with some vitamins, minerals, and fiber. Rather than a health food, think of the Impossible Burger as a meat substitute that doesn't come from a farm dependent on prophylactic antibiotics, which can lead to antibiotic resistance. If you want to go out and get a burger with friends, this is one way to do it. But meat-like burgers are not equal to kale, sweet potatoes, quinoa, and other whole foods. The same is true of pastas, breads, and baked goods that are fortified with pea, lentil, and other plant protein sources. These options are great for people who lead busy, complex lives—and especially helpful when used as a substitute for less healthy, more highly-refined options. But they're not a substitute for real, whole foods like broccoli.
Whether the Impossible Burger is right for your clients depends a lot on their values and where they are in their nutritional journey. If clients want to give up meat for ethical or spiritual reasons (for example, they can't stand the thought of killing an animal), but aren't ready to embrace a diet rich in tofu, beans, lentils, and greens, protein-enriched meat-free substitutes may be a good way to help them align their eating choices with their values.

Isn't meat the best source of iron—not to mention a lot of other nutrients?

Meat eaters sometimes argue that one of the cons of a vegetarian diet is this: Without meat, it's harder to consume enough protein and certain minerals. And there's some truth to that. Meat, poultry, and fish come packed with several nutrients we all need for optimal health and well-being, including protein, B vitamins, iron, zinc, and several other minerals. When compared to meat, plants often contain much lower amounts of those important nutrients. And in the case of minerals like iron and zinc, animal sources are more readily absorbed than plant sources.

Remember that study out of Belgium that found vegans had a healthier overall dietary pattern than meat-eaters? The same study found that many fully plant-based eaters were deficient in calcium.1 Compared to other groups, fully plant-based eaters also took in the lowest amounts of protein. Plus, they ran a higher risk of other nutrient deficiencies, such as vitamin B12, vitamin D, iodine, iron, zinc, and omega-3 fats (specifically EPA and DHA). Is this proof that everyone should eat at least some meat? Not really. It just means that fully plant-based eaters must work harder to include those nutrients in their diets (or take a supplement in the case of B12). This is true for any diet of exclusion, by the way. The more foods someone excludes, the harder they have to work to include all of the nutrients they need for good health.

Vegetarian diets make it easier to reduce disease risk as well as carbon emissions, but harder to consume enough protein, along with a host of other nutrients. This is especially true if someone is fully plant-based or vegan. If your client is fully plant-based, work with them to make sure they're getting these nutrients.

Protein: Seitan, tempeh, tofu, edamame, lentils, and beans. You might also consider adding a plant-based protein powder.
Calcium: Dark leafy greens, beans, nuts, seeds, calcium-set tofu, fortified plant milks
Vitamin B12: A B12 supplement
Omega-3 fats: Flax seeds, chia seeds, hemp seeds, walnuts, dark leafy greens, cruciferous vegetables, and/or algae supplements
Iodine: Kelp, sea vegetables, asparagus, dark leafy greens, and/or iodized salt
Iron: Beans, lentils, dark leafy greens, seeds, nuts, and fortified foods
Vitamin D: Mushrooms exposed to ultraviolet light, fortified plant milks, and sun exposure
Zinc: Tofu, tempeh, beans, lentils, whole grains, nuts, and seeds

To help you consume enough of these nutrients each day, as part of your overall intake, aim for at least:

- 3 palm-sized portions of protein-rich plant foods
- 1 fist-sized portion of dark leafy greens
- 1-2 cupped handfuls of beans*
- 1-2 thumb-sized portions of nuts and/or seeds

* If beans also serve as your daily protein source, count just 1 portion of them toward your carb intake.

But this plant-based influencer started eating meat—and she says she feels great. Doesn't that prove something?
Maybe you've read about Alyce Parker, a formerly fully plant-based video blogger who tried the carnivore diet (which includes only meat, dairy, fish, and eggs) for one month. She says she ended the month leaner, stronger, and more mentally focused. Here's the thing. You don't have to search the Internet too long to find a story in reverse. A while back, for example, John Berardi, PhD, the co-founder of PN, tried a nearly vegan diet for a month to see how it affected his ability to gain muscle. During his veggie challenge, he gained nearly 5 pounds of lean body mass. So what's going on? How could one person reach their goals by switching to a meat-heavy diet and another do so by giving up meat? One or more of the following may be going on:

Dietary challenges tend to make people more aware of their behavior. And awareness provides fertile ground for healthy habits. New eating patterns require shopping for, preparing, and consuming new foods and recipes. This calls for energy and focus, so people invariably pay more attention to what and how much they eat. An interesting study bears this out. Researchers asked habitual breakfast skippers to eat three meals a day and habitual breakfast eaters to skip breakfast and eat just two. Other groups continued breakfast as usual—either skipping it (if they didn't eat it to begin with) or eating it (if they were already breakfast enthusiasts). After 12 weeks, the study participants who changed their breakfast habits—going from eating it to skipping it, or from skipping it to eating it—lost 2 to 6 more pounds than people who didn't change their morning habits. Whether or not people ate breakfast mattered less than whether they'd recently changed their behavior and become more aware of their intake.15

Dietary changes may fix mild deficiencies. People who follow restrictive eating patterns, whether they're fully plant-based or carnivore, run the risk of nutritional deficiencies. By switching to a different, equally restrictive eating pattern, people may fix one deficiency—but eventually cause another.

Dietary changes may solve subtle intolerances. Fully plant-based eaters, for example, who have trouble digesting lectins (a type of plant protein that resists digestion) will probably feel better on a meat-only diet. But they could also potentially solve the problem without any meat—just by soaking and rinsing beans (which helps to remove lectins). Or by eating some meat and fewer lectin-rich foods.

Finally, the placebo effect is powerful. When we believe in a treatment, our brains can trigger healing—even if the treatment is fake or a sham (such as a sugar pill). For this reason, as long as someone believes in a dietary change, that change has the potential to help them feel more energized and focused.

Bottom line: Any eating pattern can be healthy or unhealthy. Someone can technically follow a fully plant-based diet without eating any actual whole plants. For example, all of the following highly refined foods are meat-free: snack chips, fries, sweets, sugary breakfast cereals, toaster pastries, soft drinks, and so on. And meat-eaters might also include similar foods. Vegetarian and carnivore diets only indicate what people eliminate—and not what people include. Whether someone is on the carnivore diet, the keto diet, the Mediterranean diet, or a fully plant-based diet, the pillars of good health remain the same.
If you have strong feelings about certain eating patterns (for example, maybe you’re an evangelical vegetarian or Paleo follower), try to put those feelings aside so you can zero in on your client’s values and needs—rather than an eating pattern they think they “should” follow. What you might find is that most clients truly don’t care about extreme eating measures like giving up meat or giving up carbs. They just want to get healthier, leaner, and fitter—and they don’t care what eating pattern gets them there. How do we know this? Each month, roughly 70,000 people use our free nutrition calculator. They tell us what kind of an eating pattern they want to follow, and our calculator then provides them with an eating plan—with hand portions and macros—that matches their preferred eating style. We give options for just about everything, including plant-based eating and keto. What eating pattern do most people pick? The “eat anything” pattern. In fact, a full two-thirds of users choose this option, with the remaining third spread across the other five options. In other words, they don’t particularly care what they eat as long as it helps them reach their goals. Interestingly, of the many options we list, people choose fully plant-based and keto diets the least. So rather than fixating on a “best” diet, help clients align their eating choices with their goals and values. Ask questions like: What are your goals? What is your life like right now? What skills do you already have (can you soak beans and eat hummus and veggie wraps)? What are the foods you like to eat that make you feel good? Encourage clients to replace what they remove. The more foods on someone’s “don’t eat” list, the harder they must work to replace what they’re not eating. For fully plant-based eaters, that means replacing animal protein with plant proteins found in seitan, tofu, tempeh, beans, and pulses. For Paleo, that means replacing grains and dairy with vegetables, fruits, and sweet potatoes. For keto eaters, that means replacing all carbs with vegetables and healthy fats like extra virgin olive oil, nuts and avocado. Don’t just offer advice on what to eat. Spend time on how. At Precision Nutrition, we encourage people to savor meals, eat slowly, and pay attention to internal feelings of hunger and fullness. We’ve found that these core practices alone can drive major transformation—and may be even more important than the food people put on their plates. (Learn more about the benefits of eating slowly.) Help them focus on being better, not perfect. Think of nutrition as a spectrum that ranges from zero nutrition (chips, sweets, and highly refined foods) to stellar nutrition (all whole foods). Most of us fall somewhere between those two extremes—and that’s okay, even preferred. After all, we see huge gains in health when we go from zero nutrition to average or above average. But eventually, we experience diminishing returns. The difference between a mostly whole foods diet and a 100 percent whole foods diet? Marginal. So rather than aiming for perfect, it’s more realistic to try to eat a little better than you are now. For good health, a little better for most people involves eating more minimally-processed whole foods, especially more vegetables and more protein (whether from animal or plant foods). 
If your clients eat carbs, they'll want to shift toward higher-quality options like:

- whole grains
- starchy tubers (such as yams and potatoes)

If they consume added fats, they can challenge themselves to showcase healthier choices such as:

- olives and olive oil
- nuts, seeds, and avocado

Depending on the person, that might involve adding spinach to a morning omelet, adding grilled chicken to their usual lunch salad, snacking on fruit, or ordering a sandwich with guac instead of mayo. These might sound like small actions—and that's precisely the point. Unlike huge dietary overhauls, it's these small, accessible, and sustainable actions that truly lead to lasting change. More than 100,000 clients have taught us: Consistent small actions, repeated over time, add up to big results. And here's the beautiful part: When you zero in on these smaller, more accessible practices, you'll stop locking horns with clients whose beliefs fall on the opposite side of the meat vs. meat-free debate from your own. Instead, you can work together to build universal skills and actions that everyone needs—more sleep, eating slowly, more veggies—whether they eat meat or not.

If you're a coach, or you want to be…

Learning how to coach clients, patients, friends, or family members through healthy eating and lifestyle changes—in a way that's personalized for their unique body, preferences, and circumstances—is both an art and a science. If you'd like to learn more about both, consider the Precision Nutrition Level 1 Certification. The next group kicks off shortly.
Mount Pulaski Courthouse at center of life in town

Mount Pulaski Courthouse, where Abraham Lincoln once practiced law, has always been at the center of life in the town. This historic site on the city square is one of only two surviving Eighth Judicial Circuit courthouses in Illinois. As most of the residents of Mount Pulaski can tell you, in its second-floor courtroom you can stand on the same floorboards where Lincoln once stood. The citizens are proud of their courthouse and willing to do whatever it takes to keep it open to the public. In 1992, when state budget problems caused a cut in staff and the building was closed, the community decided to take action.

[Mount Pulaski courthouse]

[The late Harry Hahn of Mount Pulaski, Abraham Lincoln impersonator who traveled widely in the role of our 16th president, stands in front of the Mount Pulaski Courthouse. Picture courtesy of Steve Hahn.]

Waneta Stephens remembers when former mayor Larry Montgomery said he couldn't sit by and see the main site in Mount Pulaski closed. He asked for volunteers from the community and got them, including Mrs. Stephens and her husband, Tom, and Wallace Kautz. After being closed for seven months, the graceful, two-story Greek revival building reopened to visitors on Dec. 1, 1992. It has been staffed by volunteers ever since. Thirty-five volunteers, who put in about 100 hours a month, keep the historic Lincoln site open from noon to 5 p.m. Tuesdays through Saturdays. They guide visitors through the six first-floor offices of the elected county officials, explaining what kind of business was conducted in each place. Citizens could register deeds or register to vote in the county clerk's office, check on property lines at the surveyor's office, pay taxes at the treasurer's office, conduct legal matters with the circuit court clerk, learn what was happening in education at the school commissioner's office, or check on stray cattle. The offices are furnished with artifacts of the period, although the furniture is not original to the site. Unfortunately, most legal documents drawn up while Mount Pulaski was the county seat were destroyed in an 1857 fire.

The building was erected in 1848, when Mount Pulaski was the largest and busiest town in Logan County. The townspeople themselves raised $2,700 to construct the courthouse, with the state of Illinois chipping in the final $300. It was a busy site from 1848 until 1855. Waneta Stephens says that when the circuit court was in session twice a year, the building was so crowded one man stationed himself in the window of the courtroom and called out the news as it happened to the crowds on the lawn below. In the 1850s the railroad came to the county, but not to Mount Pulaski. Lincoln was founded on the railroad line in 1853, and within another two years county and circuit court business was being conducted in a new courthouse in the bustling new city. The citizens of Mount Pulaski converted their building into a schoolhouse. A new school was built in 1878, and the old courthouse was adapted again, this time for use as city offices. The basement became a jail. About 1889 it was altered once more to house the town's post office, with the second floor serving as a library and civic center. Not until the 1930s was the building recognized as a historic part of the Lincoln tradition. The town deeded the building to the state of Illinois, and it was restored to the way it was when Lincoln practiced law in the second-floor courtroom.
The only completely original part of the building today is the floor in that second-floor courtroom.

The courthouse site itself conducts three special programs during the year. An 1890s Open House is set for October 21, with costumed volunteers, lighted candles, and period music in the afternoon and evening. On the first Saturday in December, there is old-time Christmas music, with cider and coffee. On Lincoln's birthday, there is always a speaker. When community events occur, the courthouse also plays a part. At the Mount Pulaski Fall Festival, scheduled the weekend after Labor Day, the Women's Club displays 35 to 50 quilts, both old and new, in the courthouse. At Christmas on Vinegar Hill, the Saturday before Thanksgiving, 800 to 900 people come to Mount Pulaski for a townwide antique and craft show. Restaurants and churches serve meals, and the city is decorated for Christmas. At the courthouse, maps showing event locations are available, along with music, coffee and hot cider. Just recently, the grand march for the junior-senior prom was held on the grounds at the courthouse, the lawn crowded with parents and friends.

The restored building underwent a $250,000 renovation four years ago and is structurally in good shape, according to site manager Richard Schachtsiek, so the Mount Pulaski Courthouse State Historic Site on the city square is ready to take its place in the lives of the townspeople for many more years.

Chamber announces e-commerce workshop

The number of American households on the Internet grows each day. Many American businesses, large and small, are learning that if they want to communicate with and market to these newly connected customers, then they must also have an online presence. Several Logan County businesses have recognized the potential of e-commerce and are willing to share their stories.

The Lincoln/Logan County Chamber of Commerce is sponsoring an e-commerce workshop on Wednesday, May 24. It will be from 7:30 to 10:30 a.m. in the second-floor conference room of the Union Planters Bank building, located at the corner of South Kickapoo and Clinton streets. Registration is $10 and payable in advance to the Chamber. Mark Smith, economic development director for Lincoln and Logan County, says the purpose of the e-commerce workshop is threefold. First, it will let interested people better understand what e-commerce is and is not. Next, website construction professionals will offer tips and advice on building a website. Finally, local business owners will detail how e-commerce has benefited, and will continue to benefit, their bottom line. There will be ample time for questions. Presenters include local technology professionals Jim Youngquist from Computer Consulting Associates and Bill Thomas from Teleologics. Local business owners who will be sharing their online successes include Robert and Kay Coons from R & K Sutlery, Lance Rainforth from Abe's and Greg Brinner of ReMax/Hometown Realty. For questions on the workshop or to register, contact Mark Smith at the Chamber of Commerce, 732-8739 or email@example.com.

Mount Pulaski Historical Society preserves the past

The people of Mount Pulaski are proud of their heritage. They have recruited volunteers to keep their historic courthouse site open, and they have also established a Historical Museum and Research Center to tell the story of their town and the rest of southeastern Logan County. The museum, founded by the Mount Pulaski Township Historical Society, opened in April of 1997 on the west side of the square. It moved to its new home at 102 Cooke St., on the south side of the square, in December of 1998.
The new home is really two buildings: the old Romer building, which was once a saloon (the town once had 29 of them), and the Danner building, former home of the First National Bank of Mount Pulaski. Both buildings were donated by attorneys Thomas and Homer Harris of Lincoln, who had a law office in the old bank building for 22 years. With a matching grant from the Looking for Lincoln project and a great deal of volunteer help, the two buildings are being restored.

The museum holds memorabilia of all sorts, including a land grant signed by President Andrew Jackson in 1829, a top hat brought from England by the Capps family in the 1820s, and a sword used in the Black Hawk War. Uniforms from the Civil War, World War I and World War II will soon be on display. "Lots of things are coming in all the time from old families in the area," said Romelda Johnson, the staff member who keeps the museum open Tuesdays through Saturdays from noon to 4 p.m.

Items from two of the town's best-known people are in the museum. Vaughn De Leath, a nationally known singer in the early days of radio and the composer of hundreds of songs, was born Leonore Vonderlieth in Mount Pulaski in 1894. She became known as "The First Lady of Radio" and sang frequently on NBC.

[Steve Hahn, son of the late Lincoln impersonator Harry Hahn, has donated pictures and other Lincoln memorabilia to the Mount Pulaski Historical Museum.]

Materials about Lincoln impersonator Harry Hahn, who died in February of this year, have been contributed by Hahn's son Steve. Hahn spent 39 years acting the part of Lincoln, traveling all over the United States and visiting the White House at least twice, according to his son. The museum exhibits a quilt made by the grade school children of Mount Pulaski as a memorial to Hahn.

Museum members like to tell visitors why Mount Pulaski used to be known as Vinegar Hill. At one time all the towns in the area were "dry" except Mount Pulaski. It was mainly a German town, with folks who knew how to hold their liquor and also how to brew it. People used to come by train from all over – Springfield, Lincoln, Decatur – to buy liquor. When the conductor asked them what they were going to put in the jugs they carried, they told him, "vinegar." So the conductor stopped calling out "Mount Pulaski" and instead announced they'd come to "Vinegar Hill."

Library announces summer reading program

The Lincoln Public Library's summer reading program is again kicking off with a bang. Children who are planning to read a specified number of books this summer are encouraged to bring their parents to the kickoff on Saturday, June 3, from 9 a.m. to noon at the Lincoln Park District Recreation Center. There will be games, snacks and the summer reading sign-up until noon. At 9:30 a.m., The Timestep Players will present the program "Just for the Fun of It." For more information about all the great children's programs being offered by the library this summer, call 732-5732 or stop by the Lincoln Public Library at 725 Pekin St., across from Latham Park.

Herb guild finds thyme to get together

The last Tuesday of each month, a group of women gather in the Jefferson Street Christian Church to indulge their senses, greet friends, further their education and share their passion. The source of all their interest is herbs. Since 1994, the Logan County Herb Guild has been meeting and sharing a love of the same multi-purpose plants their ancestors relied on hundreds of years ago. They each have a favorite herb – whether it's bee balm or basil, sage or thyme, rosemary, tansy, parsley or maybe mint.
They discuss newly discovered plants that are considered herbs, like dianthus and hawthorn. They pass pots of freshly picked herbs around the room, taking in the heady scents and discussing their names and uses. They indulge their taste buds with new recipes using herbs, like rosemary lemonade, pesto pizza and lavender cookies.

Lowery, a lifelong gardener, founded the guild in 1994. She placed an ad in the local newspaper, inviting anyone interested in forming such a group to attend a meeting. To her surprise, about 30 women showed up. The informal group has been meeting ever since, first at each other's homes and now at the church. "We've had a lot of neat programs over the years. It's been fun, that's the main thing. If you can make a club fun, with little work, that's the key," she said.

Lowery, who is also a member of the newly formed Mount Pulaski Herb Guild, said the local interest in herb gardening is really increasing. "I think there's a real interest in gardening. Interest is always picking up. We have a lot of members that just come about every third meeting. I think once people get hooked on herb gardening, they find out how relaxing it is and how they feel when they use herbs instead of salt or sugar. It's good for the mind and body both," she said.

At a recent guild meeting, Lowery, dressed in period costume as a pioneer woman, gave a rousing talk about the history of herbs, the plants our ancestors brought from their homelands, and how they used them. Even though she has been gardening all her life, her enthusiasm and interest in herb gardening remain refreshing, and the guild is not only an educational experience but something all ages can enjoy. "We have members in their 20s and 90s, a wide span of ages, and when we have our meetings, there's no generation gap. That's the neat thing about plants and herbs; they bridge all generations," Lowery said. "I like the historical aspect of herbs. Our forefathers were pretty savvy about plants. We got away from our roots, but we're starting to get back to it. Like coneflowers, which people use and grow now, the Native Americans used – good benefits," she added.

Lowery, who confesses to planting "tons of things," has about 60 different varieties of herbs, including an abundance of the sweet-smelling though very invasive lemon balm, but has a special affection for bee balm, or monarda, and English thyme. Her herbs are contained in six 10-by-3-foot plots, a large 100-square-foot garden, along the house, in the pasture and just about any place there is a bare patch of dirt around her rural Beason home. Wildflowers are tossed into the gardens to give an added boost of color to the herbs.

The herb guild, which now has 22 official members, meets the last Tuesday of each month at the Jefferson Street Christian Church. Dues are $12 a year, which cover the cost of a monthly newsletter. The public is invited, and usually an average of 30 people attend meetings. Guest speakers often give programs on topics such as cooking with herbs, making herbal vinegars, crafts, dried flower arranging and beneficial bugs for the garden. The group also travels to Clark's Greenhouse in rural San Jose for its May meeting, which includes a plant swap and tour of the business. Some years the group holds a summer garden walk and tour of several members' gardens. The guild also sponsors a couple of trophies at the Logan County Fair, has given the Herb Companion magazine to the local library and demonstrates uses for herbs at the annual Railsplitter Festival.
"We try to get out a little bit in the community," she said. Lowery, who said her grandmother sparked her interest in gardening when she was a small child, holds a zoology degree and teaches biology and four environmental science classes at Bloomington High School. In between teaching and chauffeuring her 13-year-old son to sports events and practices, she tends to her wildflowers, herbs, perennials and a few vegetables at her rural home and also gives lectures and programs at local nurseries and other events.

"I use herbs for culinary purposes and also use a few medicinally. I've always had an interest in herbs. It started with my interest in cooking. I also love the way they smell," she said. She favors thyme, which can be used as a ground cover, for its tiny flowering habit and scent, and basil. Her other favorites include chives, parsley and cilantro. "I also use them for ornamental purposes. Even now, I have a small bouquet of chive flowers on my dining room table," she said. "I've enjoyed meeting other people who share my interests in herbs and gardening. We've all learned together. We talk about how at our first plant exchange, we didn't have much stuff and how different it is now, how much more we all have to exchange and how much we've learned. It's fun to see how we've grown together," she said.

The herb guild will sponsor its first annual plant sale, which is its yearly fund-raiser, from 8 a.m. to noon, May 20, at Cooper's home, at 140 Campus View Drive. More information about the guild or its programs may be obtained by calling her at 732-9788.

Spring bird count logs beautiful birds

"Is that a beautiful bird, or what?" The beautiful bird, an indigo bunting, continued catching insects in the grass, giving the 18 bird-watchers a chance to focus their binoculars and see its feathers glimmering iridescent blue in the sun. The small, brown bird feeding nearby was not so cooperative. He flew away before anyone could get a positive identification. "I think it was a Savannah sparrow, but I can't be sure," leader Steve Coogan said. "We'll have to log it as a 'question-mark' sparrow."

The birders who met in Kickapoo Creek Park at 7 a.m. Saturday were able to positively identify another 31 species, along with brief sightings of a "question-mark" thrush and a couple of "question-mark" warblers. Coogan, an ardent naturalist who lives in Latham, added the 32 species to the five other migrating warblers he had seen earlier at Skunk Hollow. These birds, and the ones he would see later that day, would be reported to the Illinois Department of Natural Resources, Division of Natural Heritage, as part of the annual spring bird count.

The bird count helps state naturalists keep track of the species moving through on migration as well as those birds coming back to this area to nest. This year both the number of participants and the number of species identified were lower than usual. "We usually see about 50 species and have about 35 people present," Coogan said. He thought the decrease in the number of species was because of the early warm weather. Many migrating birds are insectivores, and if flowers and trees bloom early, insects and the birds who eat them arrive – and move on – early, too. In addition, Coogan said, trees have already leafed out, making birds harder to spot. With this in mind, Coogan set next year's official bird count day as the last Saturday in April. The day was not a disappointment to the birders, though, who ranged in age from 9-year-old Benjamin Conrady to senior citizens.
A handsome gray catbird sat on a low limb and serenaded the group with a series of tweets, whistles and warbles, ending with a raucous screech that some people think sounds like a cat. The bird is a mimic, like its cousin the mockingbird, which is now occasionally seen in the Lincoln area. A phoebe was seen building a nest, mostly of mud, on a rafter under a shelter. Canada geese protested the birders' approach to the creek where the geese were swimming, perhaps looking for a nest site.

Something small and twittery caught the attention of a sharp-eyed birder, and half a dozen others thought it was worth wading through poison ivy to see the black-throated green warbler he'd spotted. The warbler sat in a small tree preening its feathers, providing an excellent view of a bird that would not be back until the fall migration. "I've never seen a warbler so cooperative," Coogan said. "This is a gift."

In the open area of the park, meadowlarks sang and an eastern kingbird sat on a small tree, ignoring the birders and occasionally diving down into the grass or swooping through the air to catch an insect. "This bird is one of the tyrant flycatchers," Coogan said. "They can be mean birds. I've seen them mob hawks." Mrs. Hellman, wife of park ranger Don Hellman, said she knew all about those mean birds. "Last year we had a pair nesting at the edge of our property. When I mowed I had to wear a hard hat because they would dive at me."

The birder everyone agreed had the sharpest eyes, Mark Tebrugge, spotted a medium-sized, bright yellow bird in the top of a tree. Everyone agreed it was an oriole, but the question was what kind? Field guides came out of pockets and backpacks. The bird was yellow, not orange, so it ought to be a female. But it was singing, which made it more likely to be a male. Then the bird turned, displayed its black bib, and the puzzle was solved. It was an immature male orchard oriole, which will turn russet red as it matures – a beautiful bird.

Red Cross classes offered in May

American Red Cross classes will be offered at the Logan County office at 125 S. Kickapoo St. in Lincoln. A Community First Aid and Safety class will be Wednesday, May 17, from 6 to 10 p.m. and Thursday, May 18, from 5 to 10 p.m. The class will cover adult CPR, child and infant CPR and first aid. A Challenge class will be Saturday, May 20, from 9 a.m. to noon. People who have previously been certified in the above classes may demonstrate their skills and be recertified. Preregistration is required. For further information, call 732-2134 between noon and 4 p.m. any weekday.

ALMH accepts applications for summer teen volunteers

Applications are currently being accepted for this summer's teen volunteer program at Abraham Lincoln Memorial Hospital. Teen volunteers work throughout the hospital, performing a variety of duties in many different departments. To be eligible for the program, teens must be eighth grade graduates and must complete an application form that includes personal references. All teen volunteers must also complete the training session scheduled on Friday, June 9, from 9 a.m. to noon at the hospital. Applications are available at ALMH from Barbara Dahm, director of volunteer and special services. Applications should be filled out and returned in person to the volunteer office as soon as possible. A brief interview will be conducted at that time. For more information, call 217-732-2161, ext. 184.

4-H club invites youth from town to join

The members of the Atlanta Town and Country 4-H club invite eligible youth from town to join. Jeff Jones, the club reporter, says, "4-H isn't just for people who live in the country.
There are lots of things for a guy or a girl from town to do." Activities include cooking, growing flowers, woodworking, small engines, arts, crafts and herb gardening. For more information, people can call 217-648-2973.
Equipment for eye and face protection

Grzegorz Owczarek, CIOP-PIB Poland

Introduction

The most basic protection of the eye against external factors (e.g. radiation, dusts and droplets) is the eye's own natural protective mechanism. The thin layer of slightly oily lachrymal fluid produced by the conjunctiva protects the human eye against pollution and infections. However, this natural protection is often insufficient in both everyday life and the work environment. If you are exposed to dust, acids, molten metals, grinding wheels or hazardous optical radiation, you need to take proper precautions and protect your eyes. If you don't, you risk losing the precious gift of sight, meaning you may never see your loved ones again. Thousands of eye injuries occur in the workplace each year, warranting the need for total eye protection. Wearing the eye protection that your job or location requires is a simple way to keep your eyes safe. This article includes information on types of eye protectors, technical requirements, test methods, and procedures for selecting eye protectors for different workplaces.

Legislation applicable to eye protection

Eye protectors are a part of personal protective equipment (PPE). The principles regulating the choice and the application of PPE in the European Union have been laid down in Directive 89/656/EEC of 30 November 1989 on the minimum health and safety requirements for the use by workers of personal protective equipment at the workplace. Regulation (EU) 2016/425 contains provisions regarding placing PPE on the market. The Regulation lays down essential health and safety requirements (EHSR) which PPE must satisfy in order to ensure the health protection and safety of users. Additional European harmonised standards provide detailed technical information on how to comply with the essential health and safety requirements of the EU Regulation.

Types of eye protectors

Eye protectors are designed to protect against:
- Impact (e.g. fragments of solid bodies),
- Optical radiation (e.g. radiation related to welding processes, sunglare, laser radiation),
- Dusts and gases (e.g. coal dust, welding fumes or aerosols of harmful chemical substances),
- Droplets and splashes of fluids (e.g. splatters appearing while pouring fluids),
- Melted metals and hot solid bodies (e.g. chips of melted metals appearing in metallurgical processes),
- Electric arc (e.g. occurring while conducting high-tension works).

To protect eyes against these harmful or dangerous factors, eye protective equipment comes in four basic categories:
- Protective spectacles;
- Protective goggles;
- Face shields;
- Welder's face shields (this category of eye protection includes hand screens, face screens, goggles and hoods).

The eye protective equipment in these four basic categories is fitted with vision systems, oculars, meshes or filters. Filters include: welding filters, ultraviolet filters, infrared filters, sunglare filters, and filters protecting against laser radiation. The eye protective equipment can also be part of respiratory protective devices (vision systems in masks) or head protection equipment (face shields mounted in industrial protective helmets).
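To make the mapping from hazards to protector categories concrete, here is a minimal selection sketch in Python. The table inside it is an illustrative simplification distilled from the lists above and the selection notes later in this article; it is not a normative rule from EN 166, the hazard and function names are invented for the example, and no sketch like this can replace a workplace risk assessment.

```python
# Simplified, illustrative mapping of the hazards listed above to the
# four basic categories of eye protectors. A sketch of the selection
# idea only -- not a normative EN 166 rule.

HAZARD_TO_PROTECTORS = {
    "impact": {"spectacles", "goggles", "face shield"},
    "optical radiation": {"spectacles or goggles with filter", "welder's face shield"},
    "dusts and gases": {"goggles"},
    "droplets and splashes": {"goggles", "face shield"},
    "molten metals and hot solids": {"goggles", "face shield"},
    "electric arc": {"face shield"},
}

def candidate_protectors(hazards):
    """Collect every protector category worth considering for the hazards."""
    candidates = set()
    for hazard in hazards:
        candidates |= HAZARD_TO_PROTECTORS.get(hazard, {"unknown hazard: reassess"})
    return candidates

print(sorted(candidate_protectors(["impact", "droplets and splashes"])))
```

Returning a set reflects the fact that several categories can cover the same hazard; the final choice then depends on the level of risk, comfort and compatibility with other PPE.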
The eye protective equipment in all of these categories is composed of (1) a transparent part (vision systems, oculars, meshes or filters) and either (2) a frame (spectacles and goggles) or (3) a housing with a harness (shields). All types of eye protectors have to meet the essential safety and health requirements described in Regulation (EU) 2016/425. The intended use of the eye protective equipment determines the needed protective properties and how the equipment should be designed and constructed. When the eye protective equipment is mainly used, for example, for protection against solid bodies, splashes of fluids and droplets of molten metals, the basic characteristics needed are: mechanical resistance (also at low and high temperatures), tightness, and resistance to ignition in contact with items of much higher temperature (up to ca. 1500 °C). If we want to know whether the eye protective equipment protects against an electric arc, we should specify the electrical properties of the materials the eye protector is made of. Testing the characteristics of spectral transmission of optical radiation through translucent elements (i.e. oculars, vision systems, and filters) allows us to determine what range of the radiation spectrum the tested equipment provides protection against.

Requirements

Requirements for all types of eye protectors are specified in the European Standard EN 166. In general, the requirements for eye protective equipment – according to this standard – consist of:
- General rules for designing and constructing the equipment,
- Basic, particular and optional requirements.

Basic rules in designing and constructing each type of eye protective equipment include:
- There shall not be any protruding parts or sharp edges which can cause discomfort or skin irritation;
- Parts of eye protective equipment in contact with the user's skin should not be made of materials known to cause skin irritation;
- Headbands, especially when used as an essential holding element, shall be at least 10 mm wide over any portion which may come into contact with the user's head, and should be adjustable.

Eye protectors are a form of PPE. In the hierarchy of risk control, all items of PPE are considered the final line of defense. This is mainly because they protect only the user and do not address the risk to others in the vicinity. However, because PPE is the "last resort" after other methods of protection have been considered, it is important that users wear it all the time they are exposed to the risk.

Characteristics of eye protectors

Basic requirements must be fulfilled by all eye protective equipment. Furthermore, according to the intended use, each type of eye protective equipment must, if necessary, meet one or more of the particular requirements. Optional requirements are related to additional properties of the eye protective equipment; they cover optional characteristics which can be considered useful for the wearer during use. To meet the requirements of the specified standards, the characteristics of the eye protective equipment must be strictly determined. According to EN 166 there are 26 characteristics that correspond to the requirements for eye protective equipment. A summary of those characteristics, with short descriptions, is shown in Table 1.
Table 1a: General rules for designing and constructing eye protective equipment

| No | Characteristic | Description |
|----|----------------|-------------|
| 1 | General construction | Eye protectors cannot cause discomfort or skin irritation |
| 2 | Materials | Materials that are in contact with the user's skin cannot cause an allergic reaction |
| 3 | Headbands | Headbands shall be at least 10 mm wide |

Table 1b: Basic requirements for eye protective equipment

| No | Characteristic | Description |
|----|----------------|-------------|
| 4 | Field of vision | Characteristics that define the optical properties of the materials that protective oculars are made of: materials used for protective oculars shall not have any defects that may cause refraction |
| 5 | Spherical, astigmatic and prismatic refractive powers | (see row 4) |
| 7 | Diffusion of light | (see row 4) |
| 8 | Quality of material and surface | (see row 4) |
| 9 | Minimum robustness | Oculars withstand the application of a 22 mm nominal diameter steel ball with a force of (100 ± 2) N |
| 10 | Increased robustness | Oculars withstand an impact of a 22 mm nominal diameter steel ball, of 43 g minimum mass, striking the ocular at a speed of approximately 5.1 m/s |
| 11 | Resistance to ageing | Testing whether defects visible to the naked eye appear after conditioning at increased temperature (+55 °C) |
| 12 | Resistance to corrosion | If there are any metal parts, testing whether they show corrosion after conditioning in a brine bath |
| 13 | Resistance to ultraviolet radiation | Testing whether there is any damage (assessed by the light-diffusion test) after exposure to artificial ultraviolet radiation |
| 14 | Resistance to ignition | No part of the eye protector shall catch fire in contact with a steel rod heated to 650 °C |

Table 1c: Particular requirements for eye protective equipment

| No | Characteristic | Description |
|----|----------------|-------------|
| 15 | Protection against optical radiation | Only for optical filters. Depending on the type of filter (protection against infrared, ultraviolet, welding radiation or sunglare), defines the level of attenuation across the ultraviolet, visible and infrared spectrum |
| 16 | Protection against high-speed particles | The eye protective equipment withstands an impact of a steel ball (6 mm diameter, at a speed of 45, 120 or 190 m/s) at room temperature |
| 17 | Protection against molten metals and hot solids | Molten metal cannot adhere to the surface; requirements relate to eye protective equipment used at hot workplaces (mainly ironworks) |
| 18 | Protection against droplets of fluids | May be used to protect against splashes of fluids and dust particles with a size > 5 µm (description shared with row 19) |
| 19 | Protection against large dust particles | (see row 18) |
| 20 | Protection against gases and fine dust particles | Requirements related to eye protectors used to protect against gases and fine dust particles with a size < 5 µm |
| 21 | Protection against a short-circuit electric arc | Only for eye protectors used by electricians (face shields only) |
| 22 | Lateral protection | Provides protection against impact from the side of the eye protective equipment |

Table 1d: Optional requirements for eye protective equipment

| No | Characteristic | Description |
|----|----------------|-------------|
| 23 | Resistance to surface damage by fine particles | Relates mainly to the vision systems of sandblasting helmets and the like |
| 24 | Resistance to fogging of oculars | Oculars with an anti-fog coating |
| 25 | Oculars with enhanced reflectance of infrared light | Only for filters that protect against infrared |
| 26 | Protection against high-speed particles at extreme temperatures | The eye protective equipment withstands an impact of a steel ball (6 mm diameter, at a speed of 45, 120 or 190 m/s) at temperatures from −5 to +55 °C |

EN standards appropriate for each type of eye protector

Table 2: List of EN standards appropriate for each type of eye protector

| Standard | Title |
|----------|-------|
| EN ISO 4007 | Personal protective equipment – Eye and face protection – Vocabulary |
| EN 166 | Personal eye-protection. Specification |
| EN 167 | Personal eye-protection. Optical test methods |
| EN 168 | Personal eye-protection. Non-optical test methods |
| EN 169 | Personal eye-protection. Filters for welding and related techniques. Transmittance requirements and recommended use |
| EN 170 | Personal eye-protection. Ultraviolet filters. Transmittance requirements and recommended use |
| EN 171 | Personal eye-protection. Infrared filters. Transmittance requirements and recommended use |
| EN 172 | Specification for sunglare filters used in personal eye-protectors for industrial use |
| EN 174 | Personal eye-protection. Ski goggles for downhill skiing |
| EN ISO 12312 | Eye and face protection – Sunglasses and related eyewear |
| EN 207 | Personal eye-protection equipment. Filters and eye-protectors against laser radiation (laser eye-protectors) |
| EN 208 | Personal eye-protection. Eye-protectors for adjustment work on lasers and laser systems (laser adjustment eye-protectors) |
| EN 379 | Personal eye-protection. Automatic welding filters |
| EN 14458 | High performance visors intended only for use with protective helmets |

Source: Overview by the author

Test methods

Verifying whether the eye protective equipment actually has the protective features described above is done through laboratory testing. Laboratory tests make it possible to determine whether the tested eye protective equipment complies with the requirements for a given feature as stated in the standard. Test results can be expressed as a specific number (e.g. a light transmission factor or a spherical refractive power) or as an organoleptic evaluation (e.g. damage or lack of damage after testing resistance to high-speed particles). Laboratory tests can be divided into two basic groups: (1) tests evaluating optical parameters and (2) tests evaluating non-optical parameters. Test methods for eye protective equipment are described in detail in the European standards EN 167 (Personal eye protection. Optical test methods) and EN 168 (Personal eye protection. Non-optical test methods).
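To give a feel for the physical severity behind the ball-impact figures in Tables 1c and 1d, here is a small sketch computing the kinetic energy of the test ball at the speeds named above. The 0.86 g mass and the 45/120/190 m/s speeds come from this article's text; the grade letters in the comment reflect how EN 166 commonly designates increasing impact grades, and the script itself is our own illustration, not part of the standards.

```python
# Kinetic energy E = 1/2 * m * v^2 of the steel-ball impact test.
# 0.86 g and the speeds 45/120/190 m/s are quoted in the article;
# the speeds correspond to increasing impact grades in EN 166
# (commonly marked F, B and A, respectively).

def impact_energy_joules(mass_g: float, speed_mps: float) -> float:
    """Kinetic energy in joules for a ball of mass_g grams at speed_mps m/s."""
    return 0.5 * (mass_g / 1000.0) * speed_mps ** 2

for speed in (45, 120, 190):
    print(f"{speed:>3} m/s -> {impact_energy_joules(0.86, speed):5.2f} J")
```

The jump from about 0.9 J at 45 m/s to more than 15 J at 190 m/s shows why the same 6 mm ball can discriminate between spectacle-, goggle- and face-shield-grade protection.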
Figure 1 shows welding goggles after the high-speed particle test described in EN 168 (impact of a 0.86 g steel ball at a velocity of 120 m/s).

Figure 1: Welding goggles after high speed particle test

Procedures for selecting eye protectors for different workplaces

The principles for using any type of eye protector against various types of hazards are the same as for any other group of personal protective equipment. This means that when organizational and other technical solutions (e.g. collective protection measures) are not sufficient, the employer must provide employees with eye protectors appropriate to the type and level of risk.

Spectacles are the most widely used eye protection equipment. It is recommended that they also have forehead protection against dangerous splatters of fluids or fragments of solid bodies. An example of such spectacles is presented in figure 3.

Figure 3: Spectacles

If a higher degree of eye protection is required, protective goggles should be used. Their construction ensures tight adhesion to the user's face, which also provides protection against biological agents. Note that goggle ventilation systems often differ considerably from one model to another, and they are a key feature to consider when selecting goggles appropriate to the situation. Figure 4 presents goggles with so-called direct and indirect ventilation systems; goggles with an indirect ventilation system give better protection against droplets and splashes of harmful substances. The majority of goggles can be worn over corrective glasses; however, before choosing and purchasing the equipment it is advisable to verify that this is the case for the particular model.

Figure 4: Protective goggles with direct (a) and indirect (b) ventilation systems

If the expected hazards require protection of the entire face, face shields should be used. Face shields (Fig. 5) protect the entire face, and their large protective surface minimises the probability of penetration by dangerous fluid splatters. Face shields may be used with spectacles, corrective glasses, goggles and some respiratory protective devices.

Figure 5: Face shield

The last of the basic categories of eye protective equipment are welder's face shields, i.e. devices protecting the user against harmful optical radiation and other specific hazards arising during welding and related techniques. Welder's face shields include face screens, hand screens, goggles, spectacles and hoods. Figure 6 presents a welder's face shield.

Figure 6: Welder's face shield
Field of use of eye protectors

For each purpose of eye protective equipment a symbol is assigned (the digits 3, 4, 5, 8 or 9); this symbol should appear as part of the equipment marking. The purposes, the relevant symbols and a short description of the field of use are presented in table 3.

Table 3: Purpose, symbols and description of the eye protective equipment field of use

|Symbol|Designation|Description of the field of use|
|(no symbol)|basic application|Unspecified mechanical hazards and hazards arising from ultraviolet, visible, infrared and solar radiation|
|3|liquids|Liquids (droplets or splashes)|
|4|large dust particles|Dust with a particle size > 5 µm|
|5|gas and fine dust particles|Gases, vapours, aerosols, smoke and dust particles < 5 µm|
|8|short-circuit electric arc|Electric arc due to a short circuit in electrical equipment|
|9|molten metals and hot solids|Splashes of molten metals and penetration of hot solids|

Independently of the material the filter is made of (e.g. polycarbonate, inorganic glass, polymethacrylate, cellulose acetate), its basic function is to protect the eyes against dangerous optical radiation. In industrial applications this means welding radiation, ultraviolet and infrared radiation, visible radiation causing sunglare, and laser radiation. The basic parameter classifying filter protection, independently of its purpose, is the protection level, a parameter determined from the light transmittance factor. Twenty-three protection levels are defined (1.2; 1.4; 1.7; 2; 2.5; 3; 4; 4a; 5; 5a; 6; 6a; 7; 7a; 8; 9; 10; 11; 12; 13; 14; 15; 16). Code numbers are attributed to the filters according to their purpose, and the complete marking of a filter is composed of the code number and the protection level. Welding filters are an exception, as they are marked only with one of the aforementioned protection levels. The list of code numbers and markings for different types of filters is presented in table 4.

Table 4: List of markings (code numbers and protection levels) for different filters

|Welding filters (no code number)|Ultraviolet filters (code number 2)|Infrared filters (code number 4)|Sunglare filters (code number 5)|Sunglare filters (code number 6)|
|from 10 to 16|2 – 1.2, 2 – 1.4|4 – 1.2, 4 – 1.4, 4 – 1.7, 4 – 2, 4 – 2.5, 4 – 3, 4 – 4, 4 – 5, 4 – 6, 4 – 7, 4 – 8, 4 – 9, 4 – 10|5 – 1.1, 5 – 1.4, 5 – 1.7, 5 – 2, 5 – 2.5, 5 – 3.1, 5 – 4.1|6 – 1.1, 6 – 1.4, 6 – 1.7, 6 – 2, 6 – 2.5, 6 – 3.1, 6 – 4.1|

Code number key: 2 – ultraviolet filters; 4 – infrared filters; 5 – sunglare filters without infrared specification; 6 – sunglare filters with infrared specification.

Table 4 does not include filters used for protection against laser radiation. Because laser radiation is characterised by a high degree of coherence, monochromaticity and directionality, and because the beam divergence angle usually does not exceed a few milliradians, eye protection against this type of radiation is tailor-made for a given type of laser. Laser radiation filters have to ensure effective protection against radiation at the wavelength emitted by the given laser. Moreover, the housing and filters have to withstand the laser radiation itself, which, at high power or energy density, may damage the protector. For individual eye protection against laser radiation at wavelengths from 180 nm to 1000 µm, spectacles, goggles and face shields are used. Laser radiation filters are marked with codes from L1 to L10. These markings are defined on the basis of the optical density of the filter at the wavelength for which protection is to be ensured, and on the basis of resistance to laser radiation.
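As a small illustration of how the non-welding markings from table 4 decompose, the sketch below interprets a marking string of the form "code number - protection level". The code and function names are hypothetical, written for this article rather than taken from any standard or library; welding filters, which carry only a protection level, are deliberately excluded.

```python
# Interpret a non-welding filter marking such as "4-2.5",
# using the code-number key from Table 4 above.

FILTER_TYPES = {
    2: "ultraviolet filter",
    4: "infrared filter",
    5: "sunglare filter without infrared specification",
    6: "sunglare filter with infrared specification",
}

def describe_marking(marking: str) -> str:
    """Return a human-readable description of a filter marking, e.g. '4-2.5'."""
    code_str, level = (part.strip() for part in marking.split("-", 1))
    filter_type = FILTER_TYPES.get(int(code_str))
    if filter_type is None:
        raise ValueError(f"unknown code number in marking: {marking!r}")
    return f"{filter_type}, protection level {level}"

print(describe_marking("4-2.5"))  # infrared filter, protection level 2.5
print(describe_marking("2-1.2"))  # ultraviolet filter, protection level 1.2
```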
If the dangerous laser radiation lies in the visible spectrum (from 400 nm to 700 nm), and the eye protective equipment attenuates this radiation to the values defined for class 2 lasers (radiation power P ≤ 1 mW for CW lasers, at which physiological defence reactions, including the blink reflex, contribute to eye protection), then such equipment is called protective equipment for laser adjustment.

Advanced technological solutions are available and commonly used in the construction of oculars and optical filters. Modifying the spectral characteristics of optical radiation transmittance makes it possible to adjust the light transmittance factor to given lighting conditions, and thus to the user's requirements for protection against visible radiation (sunglare). The same techniques offer wide possibilities for designing optical filters for the radiation ranges that may constitute a real hazard at work (ultraviolet, infrared, laser radiation). The spectral characteristics are modified by in-mass tinting of the material, by surface tinting, or by coating the material with reflective or special interference layers. In-mass tinting (pigments are added during production of the material the filter is made of) is used mainly to produce ultraviolet, visible and infrared radiation filters. Surface tinting (immersing the substrate material in a tinting bath) is used mainly for lenses that, once they fulfil the requirements of EN 166, may be treated as oculars or filters.

Coating a filter with an additional reflective layer (e.g. one reflecting infrared radiation) means that a relatively high proportion of the dangerous radiation is reflected from its surface; the protective effect is then achieved by reflection as well as by absorption. Radiation absorbed by a filter naturally heats it. Under intensive infrared radiation, filters that only absorb (without a reflective coating) may reach relatively high temperatures, so that the filter itself becomes a source of thermal radiation affecting the eyes. It is also increasingly common to cover the surfaces of lenses and interference filters with anti-reflection coatings. Such coatings effectively eliminate reflections, e.g. of light from the headlights of an oncoming car (in drivers' glasses), or attenuate laser radiation of a given wavelength (in laser radiation filters).

To change the transmission of optical radiation through a filter, welding filter designers also use the reorientation of the director in a liquid-crystal layer under an applied electric field, which may be triggered by light impulses, as well as the photochromic effect. The photochromic effect allows the light transmittance factor to change with the intensity of the external radiation, usually the accompanying ultraviolet radiation that provokes the effect. Moreover, the back surfaces of oculars and filters may be covered with an anti-fog coating. Ageing resistance and the overall weight of the material are as important as fogging resistance, mechanical resistance and filtration properties adapted to the given application.
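To see why anti-reflection coatings are worth the effort, consider the normal-incidence Fresnel reflectance of an uncoated surface. This is standard optics background rather than something stated in the article:

\[
R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2},
\qquad
R_{\text{glass/air}} = \left(\frac{1.5 - 1.0}{1.5 + 1.0}\right)^{2} = 0.04
\]

An uncoated lens surface with refractive index n ≈ 1.5 therefore reflects about 4% of incident light per surface; this is the reflection that anti-reflection coatings suppress.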
The parameters presented in this article dictate the quality and durability of the product, and thus define the requirements set for contemporary eye protective equipment. New materials and technologies are also applied in the construction of frames, housings and harnesses of eye protective equipment. These elements are made mainly of high-quality plastics that do not cause any allergic reaction in direct contact with the user's skin. When designing elements mounted on the user's head, particular importance should be attached to comfort of use: adjustment and regulation, appropriate ventilation, and sweat absorption by the materials directly adhering to the forehead.

References

- Council Directive 89/656/EEC of 30 November 1989 on the minimum health and safety requirements for the use by workers of personal protective equipment at the workplace
- Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (with effect from 21 April 2018)
- Council Directive 89/686/EEC of 21 December 1989 on the approximation of the laws of the Member States relating to personal protective equipment; OJ L 399, 30.12.1989, p. 18–38
- EN 166 Personal eye-protection. Specification
- EN 167 Personal eye-protection. Optical test methods
- EN 168 Personal eye-protection. Non-optical test methods
- EN 169 Personal eye-protection. Filters for welding and related techniques. Transmittance requirements and recommended use
- EN 170 Personal eye-protection. Ultraviolet filters. Transmittance requirements and recommended use
- EN 171 Personal eye-protection. Infrared filters. Transmittance requirements and recommended use
- EN 207 Personal eye-protection equipment. Filters and eye-protectors against laser radiation (laser eye-protectors)
- EN 208 Personal eye-protection. Eye-protectors for adjustment work on lasers and laser systems (laser adjustment eye-protectors)
- EN 379 Personal eye-protection. Automatic welding filters
- EN 14458 Personal eye-equipment. Faceshields and visors for use with firefighters' and high performance industrial safety helmets used by firefighters, ambulance and emergency services
- Koradecka, D. (ed.), Handbook of Occupational Safety and Health, 2009, "Eye and Face Protection", pp. 531–538

Links for further reading

- HSE – Health and Safety Executive, Guidance on Regulations: Personal protective equipment at work
- CCOHS – Canadian Centre for Occupational Health and Safety, "Designing an Effective PPE Program"
- EU-OSHA – European Agency for Safety and Health at Work, Risk assessment essentials
- EU-OSHA – European Agency for Safety and Health at Work, Risk assessment, the key to healthy workplaces, Factsheet
- EU Commission, Personal protective equipment
2017 marks the 150th anniversary of Canadian Confederation, a time span which parallels the history of non-First Nations settlement in what is now Richmond. Shown in this post are images from the City of Richmond Archives from each of the 15 decades from the 1860s to the present.

1867 to 1877

Hugh McRoberts is generally acknowledged to have been the first European settler in what is now Richmond. This image, from an original pencil sketch done by "R.P.M." for McRoberts' daughter Jenny and enhanced by Vancouver Archivist Maj. J.S. Matthews, shows a representation of the McRoberts farm on Sea Island in 1862. The album with the sketch contains the earliest known use of the name Richmond. Hugh McRoberts lived in the house until 1873, expanding his farm to cover nearly half of Sea Island. (City of Richmond Archives photograph 1977 3 4)

Starting with Hugh McRoberts there began a slow but steady migration of farmers to Lulu and Sea Islands. The settlement of Lulu Island started on the outside of the island and spread towards the interior because of the low-lying, marshy land and peat bogs. Early settlers used the network of sloughs as transportation routes. In 1871 British Columbia entered Confederation and became a province of Canada.

1877 to 1887

The Municipality of Richmond was incorporated on November 10, 1879. The first council meetings were held in the house of Hugh Boyd, but by 1881 our fledgling municipality's first town hall had opened. Shown here ca. 1888, the building was erected at a cost of $488. It was also used as a school, as shown in this image. Posing for the photo are William Garratt, Leo Carscallen, Peter Carscallen, James Sexsmith, Mr. McKinney, Jack Smith, George Sexsmith, William Mellis, Frances Sexsmith, Anna Sexsmith, Pearl Robinson, Kate Smith, Grace Sweet, Mae Vermilyea and Anna Noble. (City of Richmond Archives photograph 1984 17 77)

Richmond continued to grow over the next decade as more people acquired land and homesteaded. Many pioneer families arrived during this time period, and in 1879 a group of them petitioned the BC Government to incorporate the area as the Municipality of Richmond. On November 10, 1879 the Municipality was incorporated and began organizing road construction, dyking and drainage, now paid for by the collection of taxes. A new Town Hall was built on land which now forms the corner of Cambie and River Roads, and the first school district was formed, with the Town Hall acting as the schoolhouse. In 1882 the first cannery was built in Steveston, beginning our long fishing industry heritage. In 1885 the Letters Patent from 1879 were revoked and new ones issued to incorporate the Corporation of the Township of Richmond, redrawing the municipal boundaries to include all the islands in the North and South Arms of the Fraser River and ceding Queensborough to New Westminster.

1887 to 1897

The first bridge to Richmond was built in 1889. The Marpole Bridge was actually two spans, one across the North Arm between Marpole and Sea Island and the other from Sea Island to Lulu Island. This image shows a crew of bridge builders and painters posing on the North Arm section, ca. 1888. (City of Richmond Archives photograph 1977 2 1)

By 1887 Richmond's population had grown to 200-300 people. In 1889 the first North Arm bridge was built to Richmond, from Eburne on the Vancouver side of the river to Sea Island, and then a second span to Lulu Island.
For the first time there was a route to and from Richmond that did not involve getting in a boat, at least while the bridge was in service and not closed for shipping traffic or suffering damage from a collision with shipping or ice. Communities developed in Steveston, London's Landing and Eburne. Japanese immigration was underway, filling labour needs in the fishing industry. The first police constable was employed by the Municipality.

1897 to 1907

Steveston was booming in the 1890s when this image was taken (either 1891 or 1895). Stores, hotels and other services catering to workers in the fishing industry made for a vibrant business district and encouraged more people to settle in the area. The sign displayed on the left in this photo advertises town lots for sale by auction at the opera house at 2 PM. (City of Richmond Archives photograph 1984 17 75)

By 1897 there were 23 canneries operating on the Fraser River in Richmond. The agricultural industry was performing well too, with Richmond acting as the Lower Mainland's breadbasket, providing vegetables, produce, dairy and beef products to the growing cities across the river. Into this successful mix came the BC Electric Railway Co. in 1905, providing fast and efficient freight and passenger service from Vancouver to Steveston.

The B.C. Electric Railway Company Interurban Tram provided an efficient, regular service to Vancouver for freight and passengers. Eventually there were 20 stations on Lulu Island servicing residents and businesses. The tram ran for more than 50 years. (City of Richmond Archives photograph 1978 12 8)

1907 to 1917

In 1909 Minoru Park Racetrack was opened, making Richmond a destination for horse racing fans. The track had its own railroad siding, and special trams operated on race days, bringing in thousands of people for the events. (City of Richmond Archives photograph 2001 9 24)

In 1909, the opening of Minoru Park Racetrack made Richmond a popular destination for race fans. Named for King Edward VII's Epsom Derby-winning horse, the track had its own siding on the BC Electric Railway's Interurban Tram line, with thousands of people travelling to Richmond for races and creating a new income stream for the city and entrepreneurs. The track also became a centre for aviation in the Lower Mainland: it was the location of the first flight of an airplane in Western Canada and the starting point of the first flight over the Rockies, among other milestones.

Richmond's population continued to grow, and by 1914 the Bridgeport area was home to a flour mill, a shingle mill, an iron bar mill, the Dominion Safe Works, a sawmill and many residents. The advent of World War I in 1914 put the nation and Richmond on a war footing, and while industries important to the war effort grew, Minoru Park was closed until after the war. Many young men left Richmond to join the battle; 25 were never to return.

On March 25, 1910 Charles K. Hamilton made the first airplane flight in Western Canada at Minoru Park Racetrack, starting Richmond's long association with flight. (City of Richmond Archives photograph 1978 15 18)

1917 to 1927

The 1918 Steveston fire devastated the fishing town. Shown here is some of the destruction, with the burned-out shell of the Hepworth Block at centre. Buildings on the north side of Moncton Street were saved. (City of Richmond Archives photograph 1978 5 2)

On May 14, 1918 Steveston burned. There had been fires before, but the 1918 fire resulted in the loss of most of the buildings between No. 1 Road and 3rd Ave.
south of Moncton St., including three canneries, three hotels and numerous businesses. Approximately 600 Japanese, Chinese and First Nations workers were made homeless. Total damages amounted to $500,000.

After the end of World War I, life returned to normal in Richmond. In 1920 a new Town Hall was built at the corner of Granville Avenue and No. 3 Road, replacing the original one, which had burned in 1912. The racetrack also reopened in 1920 with a new name. Now known as Brighouse Park Racetrack, it was joined by Lansdowne Park Racetrack in 1924. The opening of the second racetrack in Richmond allowed twice as many races to be held while still staying within the restrictions placed on the racing industry by the BC Government.

Richmond's new Town Hall opened in 1920 on property next to Brighouse Park Racetrack, which reopened the same year after the end of the war. (City of Richmond Archives photograph 1987 97 1)

1927 to 1937

The program for the official opening of the Vancouver Airport on Sea Island in July of 1931. (City of Richmond reference files)

In 1929, in a farmer's field just north of Lansdowne Park Racetrack, BC's second licensed airfield opened. The Vancouver Airport was a temporary construction consisting of a grass field with some structures, hangars and a terminal building close to the Alexandra Road Interurban station. It was replaced in 1931 by the modern new Vancouver Airport on Sea Island.

The Richmond Review published its first newspaper on April 1, 1932. The paper would continue to publish "in the interests of Richmond and community" until its demise in 2015.

The Great Depression was well underway when Reeve Rudy Grauer came up with a plan to help people who could not keep up with their property taxes. When back taxes or water bills could not be paid, the land could be sold to the Municipality. As long as the property owner could pay something toward the debt each year, the land could not be sold to another owner, with the result that not one property in Richmond was lost to unpaid taxes during the Depression.

1937 to 1947

Members of the Steveston Air Raid Protection unit pose here on the fire engine they built in 1943. The unit was the first in Canada. The men have been identified as: (front) Chief William Simpson, (left to right) George Milne, Gul Gollner, Allie McKinney, unidentified, Austin Harris, Bill Glass, Jack Gollner, Milt Yorke and Harry Hing. (City of Richmond Archives photograph 1978 31 57)

This decade was dominated by World War II. The airport on Sea Island was designated for direct military use, including elementary flight training for air force pilots as well as operational air force use. Boeing Canada erected a plant there for the construction of patrol bombers for the war effort.

The Boeing Canada plant on Sea Island produced 362 Consolidated PBY long-range patrol bombers, known as Catalinas or Cansos, during the war. (City of Richmond Archives photograph 1985 199 1)

The internment of Japanese Canadians and their removal from the coast in early 1942 changed the face of Richmond, especially in Steveston, which lost 80 percent of its population. On Sea Island the community of Burkeville was built to provide housing for workers employed at the Boeing Canada aircraft plant and their families. Once again young Richmond men signed up for the armed forces, and 36 did not come home.

1947 to 1957

Post-war development in Richmond resulted in the growth of commercial buildings in the Brighouse area.
Shown here in 1948, the corner of No. 3 Road and Granville Avenue shows commercial buildings on the east side of No. 3 Road near the Municipal Hall. The BC Electric Railway's Brighouse Station made access to the area convenient, and before long the east side of the street was lined with stores and services. (City of Richmond Archives photograph 1997 16 1)

Post-war development saw Richmond's population grow. The Brighouse area developed into a commercial hub, and subdivisions were built to house families moving to the area. Burkeville became part of Richmond, no longer a workers' housing complex. To serve the rising population, theatres, bowling alleys, swimming pools and other entertainment services were built. In 1948 one of the worst floods in memory occurred in the Fraser Valley. While serious damage was done in many areas, Richmond came out well, with only one breach of the dyke, 100 yards east of the rice mill.

The Broadmoor Subdivision, looking west from No. 3 Road in 1953, is only one of many residential areas that came under construction in the 1950s. New building to house Richmond's rapid population growth boomed through this time, whether in Veterans' Land Act areas or by commercial developers. (City of Richmond Archives photograph 1977 1 59)

1957 to 1967

The construction of the Oak Street Bridge in 1957, and later the Deas Island Tunnel, had a greater effect on the growth of Richmond's population than any other event to that date. Now easily accessible from Vancouver and with a direct route to the United States, more people and more businesses moved to Richmond. (City of Richmond Archives photograph 2008 36 2 23)

In 1957 the Oak Street Bridge was built, giving fast and easy road access to Richmond from Vancouver and making the Municipality even more desirable as a place to live and to start a business. A new City Hall was opened the same year in the same location as the old one. With the new ease of access and bus service expanding all around the region, the BC Electric trams were made redundant, and the Marpole to Steveston line saw its last run in 1958.

The last run of the Marpole to Steveston tram, shown here at Brighouse Station, happened on February 28, 1958. It was the last Interurban Tram operating in BC. (City of Richmond Archives photograph 1839 Brighouse)

The Municipality purchased the Brighouse Estates in 1962, a deal that provided land for Minoru Park, the Richmond Hospital and industrial development. Richmond's retail options increased in 1964 with the opening of Richmond Square Shopping Centre, built on part of the old Brighouse/Minoru Racetrack. In 1966 the Hudson's Bay Company announced plans to build a store in Richmond which, in later years, would be joined to Richmond Square and become known as Richmond Centre Mall. The Richmond General Hospital opened on February 26, 1966, providing much-needed local care for residents.

The new Richmond Municipal Hall, under construction in the background of this photo, was opened in 1957. The old hall was demolished once the new one was ready to be occupied. (City of Richmond Archives photograph 1997 42 3 47)

1967 to 1977

The Richmond Arts Centre opened in 1967, one of several projects to mark Canada's 100th anniversary. (City of Richmond Archives photograph 2004 11)

Canada's 100th birthday was in 1967, and like most communities around the country Richmond marked the occasion with commemorative projects. The Richmond Arts Centre was one of these, along with the placement of Minoru Chapel in Minoru Park and a Pioneers Luncheon.
In 1968 the Vancouver International Airport's new $32 million terminal opened. In 1972 the first two towers of Richmond's first high-rise development were ready for occupation; the third tower opened in 1973.

Mayor Gil Blair speaks at the groundbreaking ceremony for the new Lansdowne Park Mall. Built on the site of the horse racing track, the mall would open in September 1977. (City of Richmond Archives photograph 2006 7 12)

After much controversy, a new shopping mall project was started on the grounds of the old Lansdowne Park Racetrack. Woodward's would be the anchor store for the new Lansdowne Park mall. While the newest of Richmond's retail outlets was under construction, its oldest was lost in 1976 when Grauer's Store shut down after 63 years of service to the community, a victim of airport expansion and bureaucracy.

Richmond's oldest retail outlet, Grauer's Store on Sea Island, closed its doors forever on May 31, 1976. (City of Richmond Archives photograph 1996 13 5)

1977 to 1987

1979 was the 100th anniversary of the incorporation of Richmond, and several projects and celebrations were planned to mark the event. The Corporation of the Township of Richmond adopted the "Child of the Fraser" Coat of Arms as its official symbol. (City of Richmond Archives image)

On January 1, 1977 a new street address system was introduced in Richmond, with all residents and businesses adding a zero to the end of even-numbered addresses and a one to the end of odd-numbered addresses. In 1979 Richmond's 100th anniversary was marked by celebrations and commemorative projects, including hosting the BC Summer Games, a history book, "Richmond: Child of the Fraser", and the adoption of a new coat of arms and official seal.

Through this decade Richmond continued its expansion with the construction of hotels, businesses, temples, churches and community buildings such as the Gateway Theatre and Minoru Seniors' Centre. Improvements to other community buildings were made, such as a roof for the Minoru swimming pool and a second ice rink. In 1986, after 20 years of planning, the Alex Fraser Bridge was opened, connecting Richmond to Surrey and Delta.

The Gateway Theatre is a mainstay of Richmond's arts and culture community. It opened on September 19, 1984. (City of Richmond Archives accession 1988 121)

The Municipality purchased the land at Garry Point from the Bell-Irving family in 1981, with the intention of making it a park and preventing development of the site. Richmond's demographics began to change in the 1980s with an influx of immigrants from Hong Kong, many of whom made the Municipality their home.

1987 to 1997

Fantasy Garden World opened in Richmond on March 5, 1987. Owned by BC Premier Bill Vander Zalm, the facility operated for many years as a tourist attraction. Work began on a $55 million project to renovate the Richmond Square and Richmond Centre malls; the project would result in the joining of the two malls as a new Richmond Centre Mall.

Opened on March 5, 1987, Fantasy Garden World was a Richmond tourist destination, and a catalyst for political controversy. (City of Richmond Archives photograph 2009 16)

In May 1990 Richmond asked the Provincial Government to grant the Municipality status as a city. New Letters Patent were received designating Richmond Municipality, known as "The Corporation of the Township of Richmond", to be called the "City of Richmond". Richmond continued to grow.
Ground was broken on Richmond's new Library and Cultural Centre in 1991, and the Riverport area was developed with the construction of the Riverport ice rink complex and the Watermania aquatic centre. The Ironwood Mall project was approved. Several Asian-style malls were built to serve the rising numbers of immigrants from Hong Kong and mainland China; the Aberdeen Centre, Yaohan Centre, Parker Place, President's Plaza and Fairchild Square marketed themselves under the name "Asia West".

Richmond's new Minoru Park Plaza and Library and Cultural Centre opened on January 16, 1993. (City of Richmond Archives photograph 2008 39 6 685)

1997 to 2007

Richmond marked the new millennium with the opening of the new City Hall on May 20, 2000.

Richmond City Hall opened on May 20, 2000. It is believed to be the first municipal building in BC to use a feng shui consultant in its design. (City of Richmond Archives photograph 2007 7)

In 2002 the Tall Ships came to Steveston, evoking the waterfront of 100 years earlier, when sailing ships loaded canned salmon. The city continued its growth, cranes becoming a normal sight on the skyline as more and taller buildings were erected. The River Rock Casino opened on land once occupied by the failed Bridgepoint Market; the facility, with its resort hotel, opened on June 24, 2004. Construction on the largest project to date in Richmond, the Olympic speed skating oval, began in November of 2006.

Construction began on the Olympic Oval in November of 2006. (City of Richmond Archives – B. Phillips photograph)

2007 to 2017

On August 17, 2009 the first passenger rail system since the demise of the BC Electric Interurban line began service in Richmond. The Canada Line rapid transit line connected Richmond City Centre and YVR to downtown Vancouver.

The big story of 2010 was the Winter Olympic Games. Richmond's Olympic Oval was a venue for the speed skating events, and the community celebration site at Minoru Park, known as the O Zone, was crowded with spectators for concerts and events, and for a visit, a drink or a meal at the Holland House in Minoru Arenas.

Crowds watch the big screen at the O Zone as Sidney Crosby is interviewed after the winning goal for Canada in the gold medal hockey game. (City of Richmond Archives – W. Borrowman photo)

Richmond's growth continued through this decade; building increased, with high-rise construction changing the city skyline dramatically. More shopping centres opened: McArthurGlen Outlet Mall brought retail back to Sea Island, and Central at Garden City had space for a Walmart Supercentre as well as many other merchants. The Railway Greenway was opened, creating a biking and walking corridor along the old Interurban line to Steveston. The Garden City Lands, formerly held by the Department of National Defence, have been purchased by the City and are being transformed into an urban farming area and natural bog parkland. Work has begun on a bridge to replace the Massey Tunnel, now nearing its 60th anniversary, a structure that will increase traffic flow through Lulu Island and may bring more people to live here. The City's population has exceeded 200,000 and is growing still.

The pioneers who made a living from the boggy soil and running waters of Richmond would have had little concept of the city that has grown in the past fifteen decades. Who knows what the next fifteen will bring?
Does photosystem I (PS I) split water to get electrons? No. The splitting of water is associated with photosystem II (PS II): during the light reactions, water is split into H+, [O] and electrons. The electrons replace those lost from the PS II reaction center, the H+ ions contribute to powering ATP synthesis, and the oxygen is released as a by-product, one of the net products of photosynthesis and essential to most forms of life. PS I never splits water; it replenishes its lost electrons with the energized electrons passed down the electron transport chain from PS II. Despite its name, PS I is "first" only in the order of discovery; in the noncyclic pathway it is the second photosystem the electrons reach.

This is true in all water-splitting photosynthetic organisms, including higher plants. Cyanobacteria, for example, use the energy of sunlight to drive photosynthesis, splitting water molecules into oxygen, protons and electrons. As prokaryotes, cyanobacteria have no nuclei or internal membranes, but many species have folds on their external membranes that function in photosynthesis. The two protein complexes are embedded in thylakoid membranes, which are like those found inside the oxygen-creating chloroplasts in plants.

A. Photosystems

A photosystem is a photosynthetic unit comprised of a pigment complex and an electron acceptor; solar energy is absorbed and high-energy electrons are generated. Each photosystem has a pigment complex composed of green chlorophyll a and chlorophyll b molecules and orange and yellow accessory pigments (e.g., carotenoid pigments); this light-gathering part is the antenna complex. Absorbed energy is passed from one pigment molecule to another until it is concentrated in the reaction-center chlorophyll a, where it "excites" one of the electrons enough to leave the molecule and be transferred to a nearby primary electron acceptor.

Water splitting at PS II

PS II captures photons and uses the energy to extract electrons from water molecules; its reaction center contains a special, oxidizable chlorophyll. To replace the electron lost from the reaction center, a molecule of water is split. Water is oxidized on the inner side of the thylakoid membrane, donating electrons to the oxidized PS II reaction center. The water-splitting site is a cluster of four Mn ions and one Ca ion surrounded by amino-acid side chains, of which seven provide ligands to the metals; three of the Mn ions and the Ca2+ ion form a cubane-like structure linked by oxo bonds. This oxygen-evolving complex (OEC) works as a four-step clock: after four electrons have been donated by the OEC to PS II, the OEC extracts four electrons from two water molecules, liberating O2 and dumping four protons into the thylakoid space, where they contribute to the H+ gradient used as the energy source for ATP synthesis. Each oxygen atom immediately combines with an oxygen atom generated by the splitting of another water molecule, making O2, which bubbles away. The brilliant description of this four-photon "oxygen clock" by Joliot and Kok remains the only biologically known reaction that evolves molecular oxygen from water and light in plants and cyanobacteria.
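The four-photon oxygen clock is conventionally written in Kok's S-state notation (the S-state labels are standard textbook convention rather than something spelled out in the passage above). Each absorbed photon advances the oxygen-evolving complex one oxidation state, and O2 is released on the final step:

\[
S_0 \xrightarrow{\;h\nu\;} S_1 \xrightarrow{\;h\nu\;} S_2 \xrightarrow{\;h\nu\;} S_3 \xrightarrow{\;h\nu\;} (S_4) \longrightarrow S_0 + \mathrm{O_2},
\qquad
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\]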
Two electron pathways operate in the thylakoid membrane: the noncyclic pathway and the cyclic pathway. Both produce ATP, but only the noncyclic pathway also produces NADPH. Because ATP production during photosynthesis is sometimes called photophosphorylation, these pathways are also known as noncyclic and cyclic photophosphorylation.

B. Noncyclic Electron Pathway (splits water; produces NADPH and ATP)

1. This pathway occurs in the thylakoid membranes and requires the participation of two light-gathering units: photosystem I (PS I) and photosystem II (PS II).
2. The pathway begins with PS II; electrons move from H2O through PS II to PS I and then on to NADP+.
3. The PS II pigment complex absorbs solar energy, and high-energy electrons (e−) leave the reaction-center chlorophyll a molecule. Each excited electron must be passed to the next carrier very quickly, lest it decay back to its original state.
4. PS II takes replacement electrons from H2O, which splits, releasing O2 and H+ ions.
5. The electrons move from carrier to carrier down an electron transport chain, giving up energy that is used to pump H+ from the stroma into the thylakoid space.
6. When the PS I pigment complex absorbs solar energy, high-energy electrons leave its reaction-center chlorophyll a molecule and are captured by an electron acceptor.
7. The low-energy electrons leaving the electron transport system enter PS I, replacing the ones it lost.
8. The electron acceptor of PS I passes the electrons on to NADP+.
9. NADP+ takes on an H+ to become NADPH: NADP+ + 2 e− + H+ → NADPH.
10. The NADPH and ATP produced by the noncyclic flow of electrons in the thylakoid membrane are used by enzymes in the stroma during the light-independent reactions; the Calvin cycle begins with carbon fixation, in which CO2 is reduced using this NADPH and ATP.

Named carriers and the Z-scheme: electrons from PS II are carried by plastoquinol to the cytochrome b6f complex, where they are removed in a stepwise fashion (reforming plastoquinone) and transferred to a water-soluble electron carrier called plastocyanin. Plastocyanin delivers them to PS I, which passes them to ferredoxin (Fd) and finally to NADP+. A Z-scheme diagram illustrates this flow of electrons between all of the electron carriers named above, with the two photosystems providing the two energy boosts.
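As an illustrative sketch only (the carrier order comes from the text above; the code itself is not from any source), the noncyclic path can be written out programmatically:

```python
# A minimal sketch of the noncyclic (Z-scheme) electron path.
# Carrier order follows the text above; the comments mark where
# water is split and where NADPH is formed.

Z_SCHEME = [
    "H2O",             # electron donor; split by the OEC at PS II
    "PS II",           # first photon boosts the electron's energy
    "plastoquinone",   # reduced to plastoquinol (PQH2)
    "cytochrome b6f",  # pumps H+ into the thylakoid space
    "plastocyanin",    # water-soluble carrier to PS I
    "PS I",            # second photon boosts the electron again
    "ferredoxin",      # soluble carrier on the stromal side
    "NADP+",           # terminal acceptor; becomes NADPH
]

def trace_electron(path):
    """Print each hop an electron makes along the chain."""
    for donor, acceptor in zip(path, path[1:]):
        print(f"{donor:>14} -> {acceptor}")

if __name__ == "__main__":
    trace_electron(Z_SCHEME)
```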
C. Cyclic Electron Pathway

1. The cyclic electron pathway begins when the PS I antenna complex absorbs solar energy.
2. High-energy electrons leave the PS I reaction-center chlorophyll a molecule.
3. Before they return, the electrons enter and travel down an electron transport system.
4. The electrons pass from a higher to a lower energy level; the energy released is stored in the form of a hydrogen ion (H+) gradient.
5. Because the electrons return to PS I rather than moving on to NADP+, the pathway is called cyclic, and this is also why no NADPH is produced.

D. ATP Production (chemiosmosis)

1. The thylakoid space acts as a reservoir for H+ ions; each time H2O is split, two H+ remain there.
2. Electrons moving from carrier to carrier give up energy, which is used to pump H+ from the stroma into the thylakoid space; in the cytochrome b6f complex this redox process is coupled to the pumping of four protons across the membrane.
3. The flow of H+ from high to low concentration across the thylakoid membrane, through ATP synthase complexes, provides the energy to produce ATP from ADP + P. This use of the H+ gradient, the proton motive force, is called chemiosmosis.

Is PS I therefore dispensable? The physiological significance of PS I in higher-plant photosynthesis is sometimes said to be obscure, but it is not: PS I is vital to photosynthesis because it produces the "reducing power" needed for the dark reactions, i.e. the NADPH that, together with ATP, is needed to reduce CO2 in the Calvin cycle. Without PS I no oxygenic photosynthesis could exist, and therefore neither would animals, including man; the redox span of a single photosystem is not enough to carry electrons all the way from water to NADP+. Apparent exceptions prove the rule. Molecular oxygen production can sometimes be measured in animals such as Foraminifera kept in the light, but it originates from their photosynthetic endosymbionts, which evolve oxygen from water, not from the animals themselves. Likewise, several artificial electron acceptors with negative electrode potentials (e.g. methyl viologen) form H2O2, which can bring back just a fraction of the consumed oxygen; oxygen measured in such situations originates from H2O2, not from water splitting, so there is never a net production of molecular oxygen.
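Summing the noncyclic light reactions over two water molecules makes this balance explicit. The 3-ATP figure below is a commonly quoted textbook stoichiometry, not a number from the text above, and in practice it varies with the contribution of the cyclic pathway:

\[
2\,\mathrm{H_2O} + 2\,\mathrm{NADP^+} + 3\,\mathrm{ADP} + 3\,\mathrm{P_i}
\;\xrightarrow{\;\text{light}\;}\;
\mathrm{O_2} + 2\,\mathrm{NADPH} + 2\,\mathrm{H^+} + 3\,\mathrm{ATP}
\]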
I want to add this figure to my review article. I did real-time qPCR and have ct values. leave reaction-center chlorophyll a and are captured by an electron acceptor. Movie 2 Several artificial electron acceptors with negative electro potentials (e.g. I am writing a review article on some target, and in one of the research papers one figure is reported. electron acceptor; solar energy is absorbed and high-energy electrons are generated. 3. The. D. ATP Production (chemiosmosis) How does PS1 replenish its electrons? The efficient production of hydrogen paves the way towards water splitting by solar energy. In this species, PS2 extracts electrons from water, and transfers them to PQ. A. Oxygen that we breathe 1e-2 to 1e-30 the electron-splitting was measured using X-rays to measure the energy “ excites one! Between the groups, specifically in terms of the thylakoid membrane, donating electrons the! A lower energy level. `` across the membrane in October 2017, both ACS Applied Material... Atp synthase complexes, chemiosmosis occurs NADPH: NADP+ + 2 e- H+... The groups for H+ ions: 9 including higher plants photosynthesis is obscure also! Article at the same time a higher to a H+ ion gradient '' needed for the dark (... Which however is originating from H2O2 and not much different between the groups from 1e-2 1e-30! Without permission to the pumping of four protons across the membrane reaction in which water split... They had no component parts or substructure of a hydrogen ( H+ ) gradient NADPH & ATP ) ). To PQ the Material three Mn ions are created, to be used in step 7 below photosynthesis. Started accepting and publishing article at the same time unit comprised of a (. Order to replenish electrons in reaction-center chlorophyll a become excited ; they to. Its original state photo excited electron from PS II ; water is associated the. For what purpose ) when you are computing a BLAST `` hit '' significance of I! Become NADPH: NADP+ + 2 e- + H+ NADPH produced by noncyclic flow electrons in thylakoid,... [ Target ] -Ct [ Housekeeping ]... and ∆∆Ct = ( ∆Exp ∆Exp... Light dependent process with negative electro potentials ( e.g is essential to most of... Escape to electron-acceptor molecule essential part of pairing the two protein complexes are embedded in membrane! Energy level provides energy for the reference to my review article on some Target, an. Of PS II pigment complex absorbs solar energy efficiently in the transfer of electrons between of! The human body I via an electron acceptor is stored in form of a pigment complex solar. Of photosynthesis complex system such as the human body molecule and be transferred to a energy... Laymen 's terms ) explain what this test does and why it is composed many! ∆Ct = Ct [ Target ] -Ct [ Housekeeping ]... and ∆∆Ct = ( ∆Exp associated with the II! Called photophosphorylation ; therefore these pathways are also formed from the splitting of another water molecule, O2! I alredy tried temperature gradient ( annealing temperature 56-60°C ), MgCl2 optmization ( 06-1.2uL ) and got the log-fold-change. To calculate log2 fold change value from FPKM value way towards water splitting for fuel holds great promise alternative... A higher to a H+ ion gradient ions: 9 unit comprised of a Permanova test, specifically terms. Light reactions of photosynthesis, or more specifically, water is broken down into oxygen and:! Chosen depending on what they do this without permission to copy any figure from a to. 
The hydrogen ions and electrons are used to generate ATP and NADPH. At the end of the chain, electrons and a hydrogen ion are used by an enzyme in the stroma (ferredoxin-NADP+ reductase) to reduce NADP+ to NADPH:

NADP+ + 2 e- + H+ -> NADPH

ATP production occurs by chemiosmosis. Photolysis of water and electron transport through the cytochrome b6f complex are coupled to the pumping of protons across the membrane, so H+ ions temporarily accumulate within the thylakoid space. The resulting H+ gradient, also known as the proton motive force, stores energy; as hydrogen ions flow back down their electrochemical gradient through ATP synthase complexes embedded in the thylakoid membrane, the ATP synthase enzyme uses that energy to produce ATP from ADP + P. Because light drives the process, this synthesis of ATP is called photophosphorylation, and the route described here is the noncyclic pathway. The net products of the light reactions, NADPH and ATP, supply the "reducing power" needed for the dark reactions: the Calvin cycle, which begins with the fixation of CO2 and uses NADPH and ATP to reduce it to carbohydrate. Without photosystem I there would be no NADPH, and without photosystem II no oxygenic photosynthesis could exist, and therefore neither could animals, including man.
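Those two equations fix the bookkeeping of the whole pathway. A quick sanity check in Python, assuming the textbook minimum of one photon per electron per photosystem (real leaves only approach this):

```python
ELECTRONS_PER_O2 = 4       # 2 H2O -> O2 + 4 H+ + 4 e-
PHOTOSYSTEMS = 2           # each electron is excited at PS II and again at PS I
ELECTRONS_PER_NADPH = 2    # NADP+ + 2 e- + H+ -> NADPH

photons_per_o2 = ELECTRONS_PER_O2 * PHOTOSYSTEMS        # 8 photons minimum
nadph_per_o2 = ELECTRONS_PER_O2 // ELECTRONS_PER_NADPH  # 2 NADPH

print(f"Minimum photons per O2 evolved: {photons_per_o2}")
print(f"NADPH formed per O2 evolved:    {nadph_per_o2}")
```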
Efficient and economical water splitting would be a technological breakthrough that could underpin a hydrogen economy; the reverse of water splitting is the basis of the hydrogen fuel cell. A version of water splitting occurs in photosynthesis, but hydrogen is not produced, so splitting water molecules to produce hydrogen for fuel holds promise for alternative energy. Splitting water with an electric current, called water electrolysis, is easier to do, but it does not happen spontaneously; energy must be supplied. Artificial electron acceptors with negative electropotentials (e.g. methylviologen) form H2O2, which can bring back just a fraction of the consumed oxygen; Prof. Ron Naaman and an international team have found a way to control the spin of the electrons, resulting in hydrogen-peroxide-free water splitting. Purely chemical routes have also been proposed: water splitting is suggested to occur via two-electron reduction of the water adduct [H2O][B(C6F5)3] mediated by Cp2V, involving the intermediacy of a V(IV) species [Cp2VH]+[HOB(C6F5)3]- that further evolves, through disproportionation with Cp2V, into the vanadium(III) complexes 181 and 182. In every case photosynthesis remains the model: PS II extracts electrons from water and transfers them, through PQ and the rest of the chain, to PS I.
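For comparison with the artificial route, the thermodynamic floor of electrolysis is a one-line calculation. The constants below are standard textbook values, not figures taken from this page:

```python
F = 96485.0        # Faraday constant, C per mol of electrons
DG_STD = 237.1e3   # standard Gibbs energy to split 1 mol of liquid water, J/mol
N = 2              # electrons transferred per molecule of H2 produced

E_min = DG_STD / (N * F)   # minimum reversible cell voltage
print(f"Minimum electrolysis voltage: {E_min:.2f} V")  # ~1.23 V; real cells need more
```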
<urn:uuid:4b58424a-536d-45c9-89f5-0d873b8abee6>
CC-MAIN-2021-21
http://instal.bialystok.pl/junior-edition-areluhm/83f6a6-does-ps1-split-water-to-get-electrons
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00378.warc.gz
en
0.910708
6,656
3.25
3
One pound (British coin)

Value: 1 pound sterling
Edge: Alternately milled and plain
Composition: nickel-brass (76% Cu, 20% Zn, and 4% Ni)
Obverse design: Queen Elizabeth II
Reverse design: Rose, leek, thistle, and shamrock encircled by a coronet

The British one pound (£1) coin is a denomination of the pound sterling. Its obverse bears the Latin engraving ELIZABETH II D G REG ("Dei Gratia Regina") F D ("Fidei Defensor"), meaning "Elizabeth II, by the grace of God, Queen, Defender of the Faith". It has featured the profile of Queen Elizabeth II since the original coin's introduction on 21 April 1983. Four different portraits of the Queen have been used, with the latest design by Jody Clark being introduced in 2015. The design on the reverse side of the current, 12-sided coin features four emblems representing the nations of the United Kingdom (the English rose, the leek for Wales, the Scottish thistle, and the shamrock for Northern Ireland), together with two or three oak leaves, all emerging from a single 5-branched stem within a crown.

The original, round £1 coin replaced the Bank of England £1 note, which ceased to be issued at the end of 1984 and was removed from circulation on 11 March 1988, though still redeemable at the Bank's offices, like all English banknotes. One-pound notes continue to be issued in Jersey, Guernsey and the Isle of Man, and by the Royal Bank of Scotland, but the pound coin is much more widely used. A new, dodecagonal (12-sided) design of coin was introduced on 28 March 2017, and both new and old versions of the one pound coin circulated together until the older design was withdrawn from circulation on 15 October 2017. After that date, the older coin could only be redeemed at banks, although some retailers announced they would continue to accept it for a limited time, and the coins remained in use in the Isle of Man.

The main purpose of redesigning the coin was to combat counterfeiting. As of March 2014 there were an estimated 1,553 million of the original nickel-brass coins in circulation, of which the Royal Mint estimated in 2014 that just over 3% were counterfeit. The new coin, in contrast, is bimetallic like the current £2 coin, and features an undisclosed hidden security feature called "iSIS" (Integrated Secure Identification Systems). The current 12-sided pound coins are legal tender to any amount when offered in repayment of a debt; however, the coin's legal tender status is not normally relevant for everyday transactions.

To date, four different portraits of Elizabeth II have appeared on the obverse. For the first three of these, the inscription was ELIZABETH II D.G.REG.F.D. 2013, where 2013 is replaced by the year of minting. The fourth design, unveiled in March 2015, expanded the inscription slightly to ELIZABETH II DEI.GRA.REG.FID.DEF. 2015. The 12-sided design, introduced in March 2017, reverted to 2017 ELIZABETH II D.G.REG.F.D.
- In 1983 and 1984 the portrait of Queen Elizabeth II by Arnold Machin appeared on the obverse, in which the Queen wears the "Girls of Great Britain and Ireland" Tiara.
- Between 1985 and 1997 the portrait by Raphael Maklouf was used, in which the Queen wears the George IV State Diadem.
- Between 1998 and 2015 the portrait by Ian Rank-Broadley was used, again featuring the tiara, with a signature-mark IRB below the portrait.
- In 2015 the portrait by Jody Clark was introduced, in which the Queen wears the George IV State Diadem, with a signature-mark JC below the portrait.
In August 2005 the Royal Mint launched a competition to find new reverse designs for all circulating coins apart from the £2 coin. The winner, announced in April 2008, was Matthew Dent, whose designs were gradually introduced into the circulating British coinage from mid-2008. The designs for the 1p, 2p, 5p, 10p, 20p and 50p coins depict sections of the Royal Shield that form the whole shield when placed together. The shield in its entirety was featured on the £1 coin. The coin's obverse remained unchanged.

The design of the reverse of the original coin was changed each year from 1983 to 2008 to show, in turn, an emblem representing the UK, Scotland, Wales, Northern Ireland, and England, together with an appropriate edge inscription. This edge inscription could frequently be "upside-down" (when the obverse is facing upward). From 2008, national-based designs were still minted, but alongside the new standard version and no longer in strict rotation. The inscription ONE POUND appeared on all reverse designs. In common with non-commemorative £2 coins, the round £1 coin (except 2004–07 and the 2010–11 "capital cities" designs) had a mint mark: a small crosslet found on the milled edge that represents Llantrisant in South Wales, where the Royal Mint has been based since 1968.

The reverse of the new 12-sided, bimetallic pound coin, introduced on 28 March 2017, was chosen by a public design competition. The competition to design the reverse of this coin was opened in September 2014. It was won in March 2015 by 15-year-old David Pearce from Walsall, and unveiled by Chancellor George Osborne during his Budget announcement. The design features a rose, leek, thistle and shamrock bound by a crown.

Status as legal tender

Current £1 coins are legal tender to any amount. However, "legal tender" has a very specific and narrow meaning which relates only to the repayment of debt to a creditor, not to everyday shopping or other transactions. Specifically, coins of particular denominations are said to be "legal tender" when a creditor must by law accept them in redemption of a debt. The term does not mean, as is often thought, that a shopkeeper has to accept a particular type of currency in payment. A shopkeeper is under no obligation to accept any specific type of payment, whether legal tender or not; conversely, they have the discretion to accept any payment type they wish.

Mintage figures below represent the number of coins of each date released for circulation. Mint Sets have been produced since 1982; where mintages on or after that date indicate "none", there are examples contained within those sets.
| Year | Name | Design | Nation represented | Edge inscription | Translation | Mintage |
| --- | --- | --- | --- | --- | --- | --- |
| 1983 | Royal Arms | Ornamental royal arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 443,053,510 |
| 1984 | Thistle | Thistle and royal diadem | Scotland | NEMO ME IMPUNE LACESSIT | No one attacks me with impunity | 146,256,501 |
| 1985 | Leek | Leek and royal diadem | Wales | PLEIDIOL WYF I'M GWLAD | True am I to my country | 228,430,749 |
| 1986 | Flax Plant | Flax plant and royal diadem | Northern Ireland | DECUS ET TUTAMEN | An ornament and a safeguard | 10,409,501 |
| 1987 | Oak Tree | Oak tree and royal diadem | England | DECUS ET TUTAMEN | An ornament and a safeguard | 39,298,502 |
| 1988 | Shield of the Royal Arms | Crown over the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 7,118,825 |
| 1989 | Thistle | Thistle and royal diadem | Scotland | NEMO ME IMPUNE LACESSIT | No one attacks me with impunity | 70,580,501 |
| 1990 | Leek | Leek and royal diadem | Wales | PLEIDIOL WYF I'M GWLAD | True am I to my country | 97,269,302 |
| 1991 | Flax Plant | Flax plant and royal diadem | Northern Ireland | DECUS ET TUTAMEN | An ornament and a safeguard | 38,443,575 |
| 1992 | Oak Tree | Oak tree and royal diadem | England | DECUS ET TUTAMEN | An ornament and a safeguard | 36,320,487 |
| 1993 | Royal Arms | Ornamental royal arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 114,744,500 |
| 1994 | Lion Rampant | Lion rampant within a double tressure flory counter-flory | Scotland | NEMO ME IMPUNE LACESSIT | No one attacks me with impunity | 29,752,525 |
| 1995 | Dragon | Dragon passant | Wales | PLEIDIOL WYF I'M GWLAD | True am I to my country | 34,503,501 |
| 1996 | Celtic Cross and Torc | Celtic cross, Broighter collar and pimpernel | Northern Ireland | DECUS ET TUTAMEN | An ornament and a safeguard | 89,886,000 |
| 1997 | Three Lions | Three lions passant guardant | England | DECUS ET TUTAMEN | An ornament and a safeguard | 57,117,450 |
| 1998 | Royal Arms | Ornamental royal arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | not circulated |
| 1999 | Lion Rampant | Lion rampant within a double tressure flory counter-flory | Scotland | NEMO ME IMPUNE LACESSIT | No one attacks me with impunity | not circulated |
| 2000 | Dragon | Dragon passant | Wales | PLEIDIOL WYF I'M GWLAD | True am I to my country | 109,496,500 |
| 2001 | Celtic Cross and Torc | Celtic cross, Broighter collar and pimpernel | Northern Ireland | DECUS ET TUTAMEN | An ornament and a safeguard | 63,968,065 |
| 2002 | Three Lions | Three lions passant guardant | England | DECUS ET TUTAMEN | An ornament and a safeguard | 77,818,000 |
| 2003 | Royal Arms | Ornamental royal arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 61,596,500 |
| 2004 | Forth Railway Bridge | Forth Railway Bridge surrounded by railway tracks | Scotland | An incuse decorative feature symbolising bridges and pathways | N/A | 39,162,000 |
| 2005 | Menai Straits Bridge | Menai Suspension Bridge surrounded by railing and stanchions | Wales | An incuse decorative feature symbolising bridges and pathways | N/A | 99,429,500 |
| 2006 | Egyptian Arch Railway Bridge | Egyptian Arch Railway Bridge surrounded by railway station canopy dags | Northern Ireland | An incuse decorative feature symbolising bridges and pathways | N/A | 38,938,000 |
| 2007 | Millennium Bridge | Gateshead Millennium Bridge surrounded by struts | England | An incuse decorative feature symbolising bridges and pathways | N/A | 26,180,160 |
| 2008 | Royal Arms | Ornamental royal arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 3,910,000 |
| 2008 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 43,827,300 |
| 2009 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 27,625,600 |
| 2010 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 57,120,000 |
| 2010 | London | Coat of arms of the City of London | England | DOMINE DIRIGE NOS | Lord, guide us | 2,635,000 |
| 2010 | Belfast | Coat of arms of Belfast | Northern Ireland | PRO TANTO QUID RETRIBUAMUS | For so much, what shall we give in return? | 6,205,000 |
| 2011 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 25,415,000 |
| 2011 | Cardiff | Coat of arms of Cardiff | Wales | Y DDRAIG GOCH DDYRY CYCHWYN | The red dragon will give the lead | 1,615,000 |
| 2011 | Edinburgh | Coat of arms of Edinburgh | Scotland | NISI DOMINUS FRUSTRA | In vain without the Lord | 935,000 |
| 2012 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 35,700,030 |
| 2013 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 13,090,500 |
| 2013 | Rose and Oak | Oak and rose | England | DECUS ET TUTAMEN | An ornament and a safeguard | 5,270,000 |
| 2013 | Leek and Daffodil | Leek and daffodil | Wales | PLEIDIOL WYF I'M GWLAD | True am I to my country | 5,270,000 |
| 2014 | Flax and Shamrock | Shamrock and flax plant | Northern Ireland | DECUS ET TUTAMEN | An ornament and a safeguard | 5,780,000 |
| 2014 | Thistle and Bluebell | Thistle and bluebell | Scotland | NEMO ME IMPUNE LACESSIT | No one attacks me with impunity | 5,185,000 |
| 2014 | Shield of the Royal Arms | The shield from the Royal Coat of Arms | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 79,305,200 |
| 2015 | Shield of the Royal Arms | The shield from the Royal Coat of Arms (4th portrait) | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 29,580,000 |
| 2015 | Shield of the Royal Arms | The shield from the Royal Coat of Arms (5th portrait) | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 75,000 (only in BU sets) |
| 2015 | Royal Arms | The Royal Coat of Arms (5th portrait) | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 129,616,985 |
| 2016 | Shield of the Royal Arms | The shield from the Royal Coat of Arms (5th portrait) | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | 30,000 (only in BU sets) |
| 2016 | Last Round Pound | Four heraldic beasts | United Kingdom | DECUS ET TUTAMEN | An ornament and a safeguard | not circulated |

All years except 1998 and 1999 have been issued into circulation, although the number issued has varied enormously – 1983, 1984 and 1985 in particular had large mintages to facilitate the changeover from paper notes, while some years such as 1988 are only rarely seen (although 1988 is more noticeable as it has a unique reverse). Production since 1997 has been reduced, thanks to the introduction of the circulating two pound coin. The final round coins, minted for 2016, and the 2015 Shield of the Royal Arms (5th portrait) did not enter circulation, as they were only available through commemorative sets. These were the shield from the Royal Coat of Arms by Matthew Dent, and a design by Gregory Cameron, Bishop of St Asaph, of four heraldic beasts.
| Year | Name | Nation represented | Mintage |
| --- | --- | --- | --- |
| 2016 | Nations of the Crown | United Kingdom | 300,000,000 (initial launch in March 2017) |
| 2017 | Nations of the Crown | United Kingdom | 749,616,200 |
| 2018 | Nations of the Crown | United Kingdom | 130,560,000 |
| 2019 | Nations of the Crown | United Kingdom | 138,635,000 |
| 2020 | Nations of the Crown | United Kingdom | TBA |

During later years of the round pound's use, Royal Mint surveys estimated the proportion of counterfeit £1 coins in circulation. This was estimated at 3.04% in 2013, a rise from 2.74%. The figure previously announced in 2012 was 2.86%, following the prolonged rise from 0.92% in 2002–2003 to 0.98% in 2004, 1.26% in 2005, 1.69% in 2006, 2.06% in 2007, 2.58% in 2008, 2.65% in 2009, 3.07% in 2010 and 3.09% in 2011. Figures were generally reported in the following year; in 2008 (as reported in 2009), the highest levels of counterfeits were in Northern Ireland (3.6%) and the South East and London (2.97%), with the lowest being in Northwest England. Coin testing companies estimated in 2009 that the actual figure was about twice the Mint's estimate, suggesting that the Mint was underplaying the figures so as not to undermine confidence in the coin.

It is illegal to pass on counterfeit currency knowingly; the official advice is to hand it in, with details of where received, to the police, who will retain it and investigate. One article suggested "given that fake coins are worthless, you will almost certainly be better off not even looking". The recipient also has recourse against the supplier in such cases. Counterfeits are put into circulation by dishonest people, then circulated inadvertently by others who are unaware; in many cases banks do not check, and circulate counterfeits. In 2011 the BBC television programme Fake Britain withdrew 1,000 £1 coins from each of five major banks and found that each batch contained between 32 and 38 counterfeits; the Mint estimated that about 31 per 1,000 £1 coins were counterfeit. Some of the counterfeits were found by automated machinery; others could be detected only by expert visual inspection.

In July 2010, following speculation that the Royal Mint would have to consider replacing £1 coins with a new design because of the fakes, bookmakers Paddy Power offered odds of 6/4 (bet £4 to win £6, plus the £4 stake back; decimal odds of 2.5) that the £1 coin would be removed from circulation.

Some counterfeits were of poor quality, with obviously visible differences (less sharply defined, lacking intricate details, edge milling and markings visibly wrong). Many better counterfeits can be detected by comparing the orientation of the obverse and reverse: they should match in genuine modern UK coins, but very often did not in counterfeit round £1 coins. The design on the reverse must be correct for the stamped year (e.g., a 1996 coin should have a Celtic cross). It was difficult to manufacture round pounds with properly produced edges; the milling (grooves) was often incomplete or poor, and the inscription (often "DECUS ET TUTAMEN") was sometimes poorly produced or in the wrong typeface. A shiny coin with less wear than its date suggests is also suspect, although it may be a genuine coin that has rarely been used.

Counterfeit coins are made by different processes including casting, stamping, electrotyping, and copying with a pantograph or spark erosion. In a 2009 survey, 99% of fake £1 coins found in cash centres were made of nickel-brass, of which three fifths contained some lead and a fifth were of a very similar alloy to that used by the Royal Mint.
The remaining 1% were made of simple copper-zinc brass, or lead or tin, or both. Those made of lead or tin may have a gold-coloured coating; counterfeits made of acrylic plastic containing metal powder to increase weight were occasionally found.

The final "round pounds" were minted in December 2015; the replacement, a new 12-sided design, was introduced in 2017, the earliest dated 2016. The coin has a 12-edged shape, similar to the pre-decimal brass threepence coin; it is roughly the same size as the previous £1 coin, and is bi-metallic like most £2 coins. The new design is intended to make counterfeiting more difficult, and also has an undisclosed hidden security feature called "iSIS" (Integrated Secure Identification Systems), thought to be a code embedded in the top layer of metal on the obverse of the coin, visible only under a specific wavelength of ultraviolet light. Current two-pound coins, being bi-metallic (excluding some rarely tendered commemorative issues), remain harder to counterfeit than the round pound was; such counterfeits would often easily be seen to have the wrong colour(s).

Other pound coins that entered circulation

While the round pound was operational, other £1 coins that entered circulation in the UK, although not legal tender there, were those of the British Crown Dependencies, Gibraltar and the UK South Atlantic Overseas Territories. Most coins of these territories, in all denominations, were of the same size and composition as a UK equivalent and most bore the same portraits of the UK monarch. None of these territories rushed to replace their round pound coins after the UK did so, except Gibraltar, which continues to use Gibraltarian pound coins as legal tender alongside the new UK pound coins.

In an April 1993 The New Yorker article "Real Britannia", Julian Barnes describes the meetings to choose the 1994–1997 reverse designs. This is reprinted in his book Letters from London as "Britannia's New Bra Size".

- Banknotes of the pound sterling
- Coin counterfeiting
- Coins of the pound sterling
- Sovereign — gold coin with a (nominal) value of £1
- "No. 39873". The London Gazette (11th supplement). 26 May 1953. p. 3023. Proclamation of 28 May 1953 made in accordance with the Royal Titles Act 1953.
- "Project Britain-British Coins". 2013. Archived from the original on 22 October 2016. Retrieved 27 October 2016.
- "One Pound Coin". Royal Mint. Retrieved 22 November 2016.
- "New 12-sided pound coin to enter circulation in March". BBC News. 1 January 2017. Retrieved 2 January 2017.
- Giedroyc, Richard (23 May 2017). "'Most secure coin in world' launched". numismaticsnews.net. Retrieved 24 May 2017.
- "Race on to spend old £1 coins as deadline looms". BBC News. 13 October 2017. Retrieved 14 October 2017.
- "Manx round pound coins to remain 'legal tender'". BBC News. 7 February 2017. Retrieved 6 October 2017.
- "Mintage Figures". Royal Mint. Retrieved 28 December 2015.
- "£1 Counterfeit Coins". royalmint.com. Retrieved 1 September 2014.
- "How can I spot a fake £1 coin?". The Telegraph. London. 19 March 2014.
- "New pound coin: Firms told to prepare for redesign". BBC News. 31 October 2016. Retrieved 31 October 2016.
- "Specification of the £1 coin: a technical consultation" (PDF). HM Treasury. September 2014.
- Clayton, Tony. "Decimal Coins of the UK – One Pound". coins-of-the-uk.co.uk. Retrieved 24 May 2006.
- Allen, Katie (17 March 2015). "New 12-sided pound coin to be unveiled ahead of budget announcement". The Guardian. Retrieved 18 March 2015.
- "1p Coin". British Royal Mint.
Archived from the original on 27 April 2006. Retrieved 23 May 2006. - "Royal Mint unveils new coinage portrait of the Queen". BBC News. 2 March 2015. Retrieved 18 March 2015. - "The reveal of the Queen's fifth coin portrait". Royal Mint. 2 March 2015. Retrieved 18 March 2015. - "Royal Mint seeks new coin designs", BBC News, 17 August 2005 - "Royal Mint unveils new UK coins" Archived 7 March 2009 at the Wayback Machine, dofonline.co.uk, 2 April 2008 - Royal Mint. "Why does the edge inscription on the £2 and £1 coins sometimes appear "upside down"?". Archived from the original on 4 November 2016. Retrieved 2 November 2016. - "History of the Royal Mint". 24carat.co.uk. Retrieved 9 April 2008. - "The New One Pound Coin". royalmint.com. 19 March 2014. Archived from the original on 19 March 2014. Retrieved 19 March 2014. - New One Pound Coin Archived 13 September 2014 at the Wayback Machine Royal Mint - "Coinage Act: Section 2", legislation.gov.uk, The National Archives, 1971 c. 24 (s. 2) - "What are the legal tender amounts acceptable for UK coins?". The Royal Mint. Retrieved 9 April 2020. - "What is legal tender?". Bank of England. Retrieved 5 May 2019. - "Legal tender". Collins. Retrieved 9 April 2020. - "Decimal coins issued £2 – 20p". The Royal Mint Limited. 2016. Archived from the original on 15 May 2013. Retrieved 16 June 2016. - "New coin designs for 2014 unveiled by The Royal Mint". BBC News. 31 December 2013. Retrieved 21 January 2014. - "Five portraits of Her Majesty The Queen". The Royal Mint. Retrieved 8 January 2020. - "£1 Coin mintage figures". The Royal Mint. Retrieved 8 November 2019. - "2016 One Pound | Check Your Change". www.checkyourchange.co.uk. Retrieved 28 March 2017. - "2013 Dated UK Collector Coin Sales". The Royal Mint. Retrieved 28 March 2017. - "The Last Round Pound 2016 United Kingdom £1 Brilliant Uncirculated Coin". The Royal Mint. Retrieved 9 June 2016. - "2016 One Pound". www.checkyourchange.co.uk. Retrieved 28 March 2017. - Powell, Anna (16 May 2016). "Behind the design: the last 'round pound'". The Royal Mint blog. The Royal Mint. Retrieved 19 October 2016. - Anthony, Sebastian (28 March 2017). "New "impossible" to fake £1 coin enters circulation today". Ars Technica. Retrieved 3 April 2017. - "Mintage figures". The Royal Mint Limited. 2018. Retrieved 21 September 2018. - "One Pound mintage figures (£1)". The Royal Mint Limited. 2018. Retrieved 14 October 2019. - Clive Kahn (17 December 2012). "43.5 Million Fake Pound Coins in Circulation". BusinessReport. - HM Treasury FOI response relating to a period 2008–2009. hm-treasury.gov.uk(PDF) - Josie Ensor (1 April 2012). "Three pound coins in every 100 are fake". The Telegraph. London. - Rosie Murray-West and Harry Wallop (27 July 2010). "Record number of fake £1 coins could force reissue". The Telegraph. London. - Chris Irvine (29 January 2009). "One £1 coin in 40 is a fake". The Telegraph. London. - Ben Ando (8 April 2009). "Fake £1 coin estimate 'doubled'". BBC News. - Fake Britain, Series 2 episode 1, first broadcast on BBC1 TV on 16 May 2011 - Hilary Osborne (2 April 2012) How to spot a fake £1 coin Guardian. Retrieved 24 January 2016 - Sarah Preece (28 July 2010). "£1 coin under threat". London: Live Odds and Scores. - Three blog entries analyzing counterfeits the author has been passed. blog.alism.com - The types of counterfeit one-pound coins and identifying them. coinauthentication.co.uk. February 2006. Retrieved 24 January 2016. - "Report on UK £1 counterfeit survey" (PDF). Royal Mint. May 2009. 
- Royal Mint Presses Last Batch of Round Pound Coins The Guardian - Svenja O'Donnell (18 March 2014). "U.K. to Replace 1-Pound Coin With Secure 12-Edged Design". Bloomberg. - Morley, Katie (28 March 2017). "Revealed: the secret code embedded on the Queen's face on new £1 coin". The Telegraph. Retrieved 12 September 2018. - Can I use coinage from the Channel Islands or the Isle of Man?, Royal Mint. Retrieved 24 January 2016 Archived 28 September 2012 at the Wayback Machine - Can I use coinage from United Kingdom Overseas Territories? Archived 14 June 2017 at the Wayback Machine, Royal Mint. Retrieved 24 January 2016 - "Letter From London: Real Britannia". The New Yorker (paid registration required for the full article).
<urn:uuid:06cad9ce-d984-4631-bd12-f7a0edf9a9e6>
CC-MAIN-2021-21
https://en.wikipedia.org/wiki/British_one_pound_coin
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00577.warc.gz
en
0.891662
6,167
2.71875
3
By Walter J. Boyne

Air Force and Navy aircraft crossed Qaddafi's "Line of Death" to strike the terrorist state of Libya.

The United States on April 14, 1986, launched Operation El Dorado Canyon, a controversial but highly successful mission that hit Col. Muammar Qaddafi squarely between the eyes. Working with carrier aircraft of the US Sixth Fleet, Air Force F-111s of the 48th Tactical Fighter Wing flew what turned out to be the longest fighter combat mission in history. The crushing strikes caused a remarkable reduction in Libyan-sponsored terrorist activity.

In the mid-1980s, the F-111s of the 48th TFW, stationed at RAF Lakenheath in Britain, formed a key element of NATO power. If war came, the Aardvark's long range and night, low-level bombing capability would have been vital in defeating a Soviet attack. To the south, in the Mediterranean, the Sixth Fleet engaged Soviet warships in a constant game of mutual surveillance and stayed in more or less permanent readiness for hostilities. Fate would dictate that the 48th TFW and Sixth Fleet carriers would be teamed in a totally unexpected quarter against a very different kind of enemy. They would strike not in or around Europe but on the North African littoral. They would go into action not against Soviet conventional forces but against an Arab state bent on sponsoring deadly terrorist acts.

Western nations had long been alarmed by state-sponsored terrorism. The number of attacks had risen from about 300 in 1970 to more than 3,000 in 1985. In that 15-year period, a new intensity had come to characterize the attacks, which ranged from simple assaults to attacks with heavy casualties such as the Oct. 23, 1983, truck bombing of the Marine Barracks in Beirut. Qaddafi, who seized power in a 1969 coup, had long been an American antagonist. Each year, Libya trained 8,000 terrorists, providing false passports, transport on Libyan airliners, and access to safe houses across Europe. Libyan support for terrorist operations exceeded all nations except Iran. It disbursed $100 million to Palestinian terrorists eager to strike Israel. Qaddafi joined forces with one of the most notorious terrorists of the time, Abu Nidal. In November 1985, Abu Nidal's operatives hijacked an EgyptAir transport; 60 passengers were killed, many in the rescue attempt staged by an Egyptian commando team. On Dec. 27, 1985, Abu Nidal terrorists launched simultaneous attacks on airports at Rome and Vienna; 20 passengers and four terrorists were killed in these events. Qaddafi publicly praised the terrorists, called them martyrs, and applauded what he described as "heroic" actions.

President Ronald Reagan at about this time gave his approval to National Security Decision Directive 207, setting forth a new US policy against terrorism. He had decided that the US needed to mount a military response to Qaddafi and his brethren, but first he wanted to obtain cooperation from the Western Allies and allow time for the removal of US citizens working in Libya. Meantime, the Sixth Fleet, based in the Mediterranean Sea, began a series of maneuvers designed to keep pressure on Libya. Two and sometimes three aircraft carriers (Saratoga, America, and Coral Sea) conducted "freedom of navigation" operations that would take US warships up to and then southward across a line at 32 degrees 30 minutes north latitude. This was Qaddafi's self-proclaimed "Line of Death."
The Line of Death defined the northernmost edge of the Gulf of Sidra and demarcated it, in Qaddafi's mind at least, from the rest of the Mediterranean. The Libyan leader had warned foreign vessels that the Gulf belonged to Libya and was not international waters. The message was that they entered at their own risk and were subject to attack by Libyan forces. Thus Qaddafi, by drawing the Line, unilaterally sought to exclude US ships and aircraft from a vast, 3,200-square-mile area of the Med which always had been considered international.

The skirmishing soon began. On March 24, 1986, Libyan air defense operators fired SA-5 missiles at two F-14s. The Tomcats had intercepted an intruding MiG-25 that came a bit too close to a battle group. The next day, a Navy A-7E aircraft struck the SAM site with AGM-88A HARM missiles. At least two of the five threatening Libyan naval attack vessels were also sunk.

Tension further increased on April 2, 1986, when a terrorist's bomb exploded on TWA Flight 840 flying above Greece. Four Americans were killed. Three days later, a bomb exploded in Berlin's La Belle Discotheque, a well-known after-hours hangout for US military personnel. Killed in the blast were two American servicemen, and 79 other Americans were injured. Three terrorist groups claimed responsibility for the bomb, but the United States and West Germany independently announced "incontrovertible" evidence that Libyans were responsible for the bombing. President Reagan decided that it was time for the US to act.

In the months leading up to the Berlin bombing, planners at USAF's 48th TFW had developed more than 30 plans for delivering a punitive blow against Libya. Most were variations on a theme: six or so Air Force F-111 fighter-bombers would fly through French airspace and strike selected military targets in Libya. Planners assumed that the attack would have the benefit of surprise; the small number of F-111s made it probable that the bombers would be in and out before the Libyan defenses were alerted. Later, when detailed speculation in the Western media lessened the probability of surprise, attack plans were changed to include support packages that would carry out suppression of enemy air defenses. These packages were to comprise Air Force EF-111 electronic warfare aircraft as well as Navy A-7 and EA-6B aircraft. This was the start of an Air Force-Navy liaison that would prove essential in the actual mission.

However, all the 48th's plans had been rendered obsolete by April 1986. Continuous media coverage, apparently fueled by leaks from very senior and knowledgeable sources in the White House, had rendered surprise almost impossible. Moreover, the US was having serious trouble with its Allies. Britain's Prime Minister Margaret Thatcher approved US use of British bases to launch the attack. However, Washington's other Allies lost their nerve. The fear of reprisals and loss of business caused France, Germany, Italy, and Spain to refuse to cooperate in a strike.

The faintheartedness of these countries forced the US to prepare a radically different attack plan. USAF F-111s would now navigate around France and Spain, thread the needle through the airspace over the narrow Strait of Gibraltar, and then plunge on eastward over the Mediterranean until in a position to attack. It would prove to be a grueling round-trip flight of 6,400 miles that spanned 13 hours, requiring eight to 12 in-flight refuelings for each aircraft.
Inasmuch as a standard NATO F-111 sortie was about two hours, the El Dorado Canyon mission placed a tremendous strain on crews and on the complex avionic systems at the heart of the aircraft.

US authorities crafted a joint operation of the Air Force and Navy against five major Libyan targets. Of these, two were in Benghazi: a terrorist training camp and the military airfield. The other three were in Tripoli: a terrorist naval training base; the former Wheelus AFB; and the Azziziyah Barracks compound, which housed the command center for Libyan intelligence and contained one of five residences that Qaddafi used. Eighteen F-111s were assigned to strike the three Tripoli targets, while Navy aircraft were to hit the two Benghazi sites. Navy aircraft also were to provide air defense suppression for both phases of the operation. US authorities gave overall command to Vice Adm. Frank B. Kelso II, commander of the Sixth Fleet.

Enter the Air Force

The composition of the El Dorado Canyon force has stirred controversy. In his 1988 book, Command of the Seas, former Navy Secretary John F. Lehman Jr. said the entire raid could have been executed by aircraft from America and Coral Sea. This claim cropped up again in 1997; in a letter to Foreign Affairs, Marine Maj. Gen. John H. Admire, an operations planner in US European Command at the time, said, "Sufficient naval forces were available to execute the attacks." Both attributed USAF's participation to a bureaucratic need to placate the Air Force.

The fact of the matter, however, is that the Air Force had long been preparing for such a raid. When Washington decreed that there would be only one attack, it became absolutely necessary to mount a joint operation, because only the inclusion of heavy USAF attack aircraft could provide the firepower needed to ensure that the operation would be more than a pinprick attack. The Navy had only America and Coral Sea on station. According to Air Force officials involved in the plans, these two carriers did not have sufficient aircraft for effective attacks against all five targets in both Tripoli and Benghazi. At least one more carrier, and perhaps two, would have been required, said these officers. The act of calling in a third or even a fourth carrier to handle both targets would have caused a delay and given away any remaining element of surprise.

This fact was pointed out to the Chairman of the Joint Chiefs of Staff, Adm. William J. Crowe Jr. Crowe himself recognized that F-111s were needed if both Tripoli and Benghazi were to be struck at more or less the same time. They would also add an element of surprise and a new axis of attack. For these reasons, the JCS Chairman recommended to Reagan and the National Security Council that the United States use both Air Force and Navy aircraft in the raids.

The F-111Fs of the 48th were special birds, equipped with two Pratt & Whitney TF-30 P-100 turbofan engines of 25,100 pounds of thrust each and a highly classified AN/AVQ-26 Pave Tack bombing system. Pave Tack consisted of an infrared camera and laser designator. It enabled the F-111 crew to see the target in the dark or through light fog or dust obscurations (not heavy dust and smoke). When the target was seen, it was designated by the energy of a laser beam. The 2,000-pound GBU-10 Paveway II laser-guided bomb tracked the laser to the illuminated target. Pave Tack imparted to the F-111s a limited standoff capability, achieved by lobbing the bombs at the target.
As events unfolded, the Pave Tack equipment would be crucial to the mission's success.

On April 14, at 17:36 Greenwich Mean Time, 24 Aardvarks departed Lakenheath with the intent that six would return after the first refueling about 90 minutes out. Also launched were five EF-111 electronic warfare aircraft. This marked the start of the first US bomber attack from the UK since World War II. The tanker force was launched at roughly the same time as the F-111s, four of which joined up on their respective "mother tankers" in radio silence, flying such a tight formation that radar controllers would see only the tanker signatures on their screens. At the first refueling, six F-111Fs and one EF-111A broke off and returned to base. Beyond Lands End, UK, the aircraft would be beyond the control of any international authority, operating at 26,000 feet and speeds up to 450 knots. To save time and ease navigation, tankers were to accompany the fighters to and from the target area. KC-10 tankers, called in from Barksdale AFB, La., March AFB, Calif., and Seymour Johnson AFB, N.C., were refueled in turn by KC-135s, assigned to the 306th Strategic Wing, RAF Mildenhall, and the 11th Strategic Group, RAF Fairford, UK.

What had been drafted as a small, top secret mission had changed drastically. The force now included 18 USAF strike aircraft and four EF-111A electronic warfare aircraft from the 42d Electronic Combat Squadron, RAF Upper Heyford, UK. The lead KC-10 controlled the F-111s. The size of the attack force went against the judgment of the 48th's leadership, including that of its commander, Col. Sam W. Westbrook III. With the possibility of surprise gone, the 48th felt that the extra aircraft meant there would be too much time over target, particularly for the nine aircraft assigned to strike the Azziziyah Barracks. Libyan defenses, already on alert, would have time to concentrate on the later waves of attackers. Secretary of Defense Caspar Weinberger, however, was an advocate of a larger strike, and he was supported in this by Gen. Charles A. Gabriel, Chief of Staff of the Air Force, Gen. Charles L. Donnelly Jr., commander of United States Air Forces in Europe, and Maj. Gen. David W. Forgan, Donnelly's operations deputy. The three USAF officers believed the large force increased the possibility of doing substantial damage to the targets.

On the Navy side, the Sixth Fleet was to attack with the forces arrayed on two carriers. Coral Sea launched eight A-6E medium bombers for the attack and six F/A-18C Hornets for strike support. America launched six A-6Es for the attack and six A-7Es and an EA-6B for strike support. F-14s protected the fleet and aircraft. A high alert status characterized Soviet vessels in the Mediterranean monitoring ship and aircraft movement.

Libya's vast air defense system was sophisticated, and its operators were acutely aware that an attack was coming. In the wake of the raid, the US compared the Libyan network with target complexes in the Soviet Union and its satellites. Only three were found to have had stronger defenses than the Libyan cities.

The difficulties of the mission were great. Most of the crews had never seen combat. Most had never refueled from a KC-10, and none had done so at night in radio silence. The strike force did benefit from the presence of highly experienced flight leaders, many of them Vietnam combat veterans.
They were flying the longest and most demanding combat mission in history against alerted defenses, and doing it in coordination with a naval force more than 3,000 miles distant. Timing was absolutely critical, and the long route and multiple refuelings increased the danger of a disastrous error. The Air Force and Navy attacks had to be simultaneous to maximize any remaining element of surprise and to get strike aircraft in and out as quickly as possible.

Rules of Engagement

Mission difficulty was compounded by rigorous Rules of Engagement. These ROE stipulated that, before an attack could go forward, the target had to be identified through multiple sources and all mission-critical F-111 systems had to be operating well. Any critical system failure required an immediate abort, even if an F-111 was in the last seconds of its bomb run.

At about midnight GMT, six flights of three F-111Fs each bore down on Tripoli. Fatigue of the long mission was forgotten as the pilots monitored their terrain-following equipment. The weapon system officers prepared for the attack, checking the navigation, looking for targets and offset aiming points, and, most important of all, checking equipment status.

The first three attacking elements, code-named Remit, Elton, and Karma, were tasked to hit Qaddafi's headquarters at the Azziziyah Barracks. This target included a command and control center but not the Libyan leader's nearby residence and the Bedouin-style tent he often used. Westbrook proved to be prescient in his belief that nine aircraft were too many to be put against the Azziziyah Barracks, as only two of the nine aircraft dropped their bombs. These, however, would prove to be tremendously important strikes.

One element, Jewel, struck the Sidi Balal terrorist training camp, where there was a main complex, a secondary academy, a Palestinian training camp, and a maritime academy under construction. Jewel's attack was successful, taking out the area where naval commandos trained. Two elements, Puffy and Lujac, were armed with Mk 82 Snakeye parachute-retarded 500-pound bombs, and they struck the Tripoli airport, destroying three Ilyushin IL-76 transports and damaging three others, as well as destroying a Boeing 727 and a Fiat G.222.

Flying in support of the F-111 attacks were EF-111As and Navy A-7s, A-6Es, and an EA-6B, using HARM and Shrike anti-radar missiles. Similar defense suppression support, including F/A-18s, was provided across the Gulf of Sidra, where Navy A-6E aircraft were to attack the Al Jumahiriya Barracks at Benghazi, and to the east, the Benina airfield. The Navy's Intruders destroyed four MiG-23s, two Fokker F-27s, and two Mil Mi-8 helicopters.

The Air Force F-111Fs would spend only 11 minutes in the target area, with what at first appeared to be mixed results. Anti-aircraft and SAM opposition from the very first confirmed that the Libyans were ready. News of the raid was broadcast while it was in progress. One aircraft, Karma 52, was lost, almost certainly due to a SAM, as it was reported to be on fire in flight. Capt. Fernando L. Ribas-Dominicci and Capt. Paul F. Lorence were killed. Only Ribas-Dominicci's body was recovered; his remains were returned to the US three years later.

As each F-111 exited the target area, its crew gave a coded transmission, with "Tranquil Tiger" indicating success and "Frostee Freezer" indicating that the target was not hit.
Then the crews, flushed with adrenaline from the attack, faced a long flight home, with more in-flight refuelings, the knowledge that one aircraft was down, and the incredible realization that the raid's results were already being broadcast on Armed Forces Radio. The news included comments from Weinberger and Secretary of State George P. Shultz. One F-111F had to divert to Rota AB, Spain, because of an engine overheat. The mission crew was returned to Lakenheath within two hours.

Early and fragmentary USAF poststrike analysis raised some questions about the performance of the F-111s. Even though all three targets had been successfully struck, only four of the 18 F-111s dropped successfully. Six were forced to abort due to aircraft difficulties or stringencies of the Rules of Engagement. Seven missed their targets and one was lost. There had been collateral damage, with one bomb landing near the French Embassy. The combined Air Force-Navy raid resulted in 130 civilian casualties with 37 killed, including, it was claimed, the adopted daughter of Qaddafi.

Yet events were soon to prove that the raid had been a genuine success, and as time passed, its beneficial effects would be recognized. It quickly became obvious that Qaddafi, who had exultantly backed the bombing of others, was terribly shaken when the bombs fell near him. His house had been damaged and flying debris had reportedly injured his shoulder. He disappeared from the scene for 24 hours, inspiring some speculation that he had been killed. When he did reappear, on a television broadcast, he was obviously deeply disturbed, lacking his usual arrogance.

Libya protested but received only muted support from Arab nations. In its comments, Moscow was curiously nonjudgmental and withheld a strong endorsement of Qaddafi. More importantly, the following months would see a dramatic decrease in the number of Libyan-sponsored, anti-American terrorist events. The Red Army Faction, one of the groups that had claimed responsibility for the La Belle disco bombing, reduced its activities. Other Libyan-sponsored groups followed suit. It became evident that the F-111s and the carrier attack aircraft, ably assisted by Air Force and Navy support units, had achieved a signal success.

Ironically, that success was not to receive much formal recognition. There was slight praise for the aircrews. The Air Force declined a nomination for a Presidential Unit Citation, although the Navy awarded its forces a Meritorious Unit Citation. This situation, with an excellent description of the attack, is covered in Robert E. Venkus' book, Raid on Qaddafi.

Operation El Dorado Canyon was carried out in the finest tradition of the Air Force. Its crews and aircraft were pushed to the absolute limits of their capability. Yet they prevailed, destroying key targets and shocking Qaddafi as a raid on Benghazi alone would never have done. More important, the effect of El Dorado Canyon went far beyond Libya, registering with the entire terrorist world. Moreover, the raid demonstrated that the United States had the capability, using fighters and large numbers of land-based tankers, to make precision strikes from land bases at very great distances.

Perhaps as important, F-111 problems surfaced during El Dorado Canyon and the Air Force set about fixing them. This was to pay great dividends five years later when, during Operation Desert Storm, the F-111F Pave Tack system flew more missions and destroyed more targets than any other aircraft in that war.

Walter J.
Boyne, former director of the National Air and Space Museum in Washington, is a retired Air Force colonel and author. He has written more than 400 articles about aviation topics and 29 books, the most recent of which is Beyond the Horizons: The Lockheed Story. His most recent article for Air Force Magazine, "Stuart Symington," appeared in the February 1999 issue. El Dorado Canyon is a reprint from Air Force Magazine.com, Vol. 82, No. 3, March 1999. Mr. Boyne is one of the Contributing Editors to www.wingsoverkansas.com.
<urn:uuid:a3170d0a-be54-43e1-92cb-38f55c1ab097>
CC-MAIN-2021-21
https://wingsoverkansas.com/boyne/a1316/
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00214.warc.gz
en
0.974687
4,591
2.515625
3
Crowther, Samuel Ajayi (A)

Foremost African Christian of the Nineteenth Century

Samuel Adjai Crowther was probably the most widely known African Christian of the nineteenth century. His life spanned the greater part of it – he was born in its first decade and died in the last. He lived through a transformation of relations between Africa and the rest of the world and a parallel transformation in the Christian situation in Africa. By the time of his death the bright confidence in an African church led by Africans, a reality that he seemed to embody in himself, had dimmed. Today things look very different. It seems a good time to consider the legacy of Crowther.

Slavery and Liberation

The story begins with the birth of a boy called Ajayi in the town of Osogun in Yorubaland in what is now Western Nigeria, in or about the year 1807. In later years the story was told that a diviner had indicated that Ajayi was not to enter any of the cults of the orisa, the divinities of the Yoruba pantheon, because he was to be a servant of Olorun, the God of heaven. He grew up in dangerous times. Both the breakup of the old Yoruba empire of Oyo, and the effect of the great Islamic jihads, which were establishing a new Fulani empire to the north, meant chaos for the Yoruba states. Warfare and raiding became endemic. Besides all the trauma of divided families and transplantation that African slavery could bring, the raids fed a still worse evil: the European traders at the coast. These maintained a trade in slaves, illegal but still richly profitable, across the Atlantic. When Crowther was about thirteen, Osogun was raided, apparently by a combination of Fulani and Oyo Muslims. Crowther twice recorded his memories of the event, vividly recalling the desolation of burning houses, the horror of capture and roping by the neck, the slaughter of those unfit to travel, the distress of being torn from relatives. Ajayi changed hands six times, before being sold to Portuguese traders for the transatlantic market.

The colony of Sierra Leone had been founded by a coalition of anti-slavery interests, mostly evangelical Christian in inspiration and belonging to the circle associated with William Wilberforce and the "Clapham Sect." It was intended from the beginning as a Christian settlement, free from slavery and the slave trade. The first permanent element in the population was a group of former slaves from the New World. Following the abolition of the slave trade by the British Parliament in 1807 and the subsequent treaties with other nations to outlaw the traffic, Sierra Leone achieved a new importance. It was a base for the naval squadron that searched vessels to find if they were carrying slaves. It was also the place where slaves were brought if any were found aboard. The Portuguese ship on which Ajayi was taken as a slave was intercepted by the British naval squadron in April 1822, and he, like thousands of other uprooted, disorientated people from inland Africa, was put ashore in Sierra Leone.

By this time, Sierra Leone was becoming a Christian community. It was one of the few early successes of the missionary movement, though the Christian public at large was probably less conscious of the success than of the appalling mortality of missionaries in what became known as the White Man's Grave. To all appearances the whole way of life of Sierra Leone – clothing, buildings, language, education, religion, even names – closely followed Western models.
These were people of diverse origins whose cohesion and original identity were now beyond recall. They accepted the combination of Christian faith and Western lifestyle that Sierra Leone offered, a combination already represented in the oldest inhabitants of the colony, the settled slaves from the New World. Such was the setting in which young Ajayi now found himself. We know little of his early years there. Later he wrote that

about the third year of my liberation from the slavery of man, I was convinced of another worse state of slavery, namely, that of sin and Satan. It pleased the Lord to open my heart … I was admitted into the visible Church of Christ here on earth as a soldier to fight manfully under his banner against our spiritual enemies.

He was baptized by the Reverend John Raban, of the (Anglican) Church Missionary Society, taking the name Samuel Crowther, after a member of that society's home committee. Mr. Crowther was an eminent clergyman; his young namesake was to make the name far more celebrated.

Crowther had spent those early years in Sierra Leone at school, getting an English education, adding carpentry to his traditional weaving and agricultural skills. In 1827 the Church Missionary Society decided, for the sake of Sierra Leone's future Christian leadership, to provide education to a higher level than the colony's modest schools had given. The resultant "Christian Institution" developed as Fourah Bay College, which eventually offered the first university education in tropical Africa. Crowther was one of its first students.

The Loom of Language

This period marked the beginning of the work that was to form one of the most abiding parts of Crowther's legacy. He continued to have contact with Raban, who had baptized him; and Raban was one of the few missionaries in Sierra Leone to take African languages seriously. To many of his colleagues the priority was to teach English, which would render the African languages unnecessary. Raban realized that such a policy was a dead end; he also realized that Yoruba, Crowther's mother tongue, was a major language. (Yoruba had not been prominent in the early years of Sierra Leone, but the political circumstances that had led to young Ajayi's captivity were to bring many other Yoruba to the colony.) Crowther became an informant for Raban, who between 1828 and 1830 published three little books about Yoruba; and almost certainly he also assisted another pioneer African linguist, the Quaker educationist Hannah Kilham.

Crowther was appointed a schoolmaster of the mission, serving in the new villages created to receive "liberated Africans" from the slave ships. A schoolmaster was an evangelist; in Sierra Leone church and school were inseparable. We get glimpses of an eager, vigorous young man who, at least at first, was highly confrontational in his encounters with representatives of Islam and the old religions in Africa. In later life he valued the lessons of this apprenticeship: the futility of abuse, the need to build personal relationships, and the ability to listen patiently. Crowther began study of the Temne language, which suggests a missionary vision toward the hinterland of Sierra Leone. But he also worked systematically at his own language, as far as the equipment to hand allowed.

Transformation of the Scene

Two developments now opened a new chapter for Crowther and for Sierra Leone Christianity. One was a new link with Yorubaland.
Enterprising liberated Africans, banding together and buying confiscated slave ships, began trading far afield from Freetown. Some of Yoruba origin found their way back to their homeland. They settled there, but kept the Sierra Leone connections and the ways of life of Christian Freetown.

The second development was the Niger Expedition of 1841, the brief flowering of the humanitarian vision for Africa of Sir Thomas Fowell Buxton. This investigative mission, intended to prepare the way for an alliance of “Christianity, commerce and civilization” that would destroy the slave trade and bring peace and prosperity to the Niger, relied heavily on Sierra Leone for interpreters and other helpers. The missionary society representatives also came from Sierra Leone. One was J. F. Schön, a German missionary who had striven with languages of the Niger, learning from liberated Africans in Sierra Leone. The other was Crowther.

Crowther’s services to the disaster-stricken expedition were invaluable. Schön cited them as evidence of his thesis that the key to the evangelization of inland Africa lay in Sierra Leone. Sierra Leone had Christians such as Crowther to form the task force; it had among the liberated Africans brought there from the slave ships a vast language laboratory for the study of all the languages of West Africa, as well as a source of native speakers as missionaries; and in the institution at Fourah Bay it had a base for study and training.

The Niger Expedition had shown Crowther’s qualities, and he was brought to England for study and ordination. The latter was of exceptional significance. Anglican ordination could be received only from a bishop, and there was no bishop nearer than London. Here then, in 1843, began Sierra Leone’s indigenous ministry. Here, too, began Crowther’s literary career, with the publication of Yoruba Vocabulary, including an account of grammatical structure, surely the first such work by a native speaker of an African language.

The Yoruba Mission

Meanwhile, the new connection between Sierra Leone and Yorubaland had convinced the CMS of the timeliness of a mission to the Yoruba. There had been no opportunity to train that African mission force foreseen by Schön and Crowther in their report on the Niger Expedition, but at least in Crowther there was one ordained Yoruba missionary available. Thus, after an initial reconnaissance by Henry Townsend, an English missionary from Sierra Leone, a mission party went to Abeokuta, the state of the Egba section of the Yoruba people. It was headed by Townsend, Crowther, and a German missionary, C. A. Gollmer, with a large group of Sierra Leoneans from the liberated Yoruba community. These included carpenters and builders who were also teachers and catechists. The mission intended to demonstrate a whole new way of life, of which the church and the school and the well-built house were all a part. They were establishing Sierra Leone in Yorubaland. The Sierra Leone trader-immigrants, the people who had first brought Abeokuta to the attention of the mission, became the nucleus of the new Christian community.

The CMS Yoruba mission is a story in itself.
How the mission, working on Buxton’s principles, introduced the growing and processing of cotton and arranged for its export, thereby keeping Abeokuta out of the slave economy; how the missionaries identified with Abeokuta under invasion and reaped their reward afterward; how the CMS mobilized Christian opinion to influence the British government on behalf of Abeokuta; and the toils into which the mission fell amid inter-Yoruba and colonial conflicts, have been well told elsewhere.

Crowther came to London in 1851 to present the cause of Abeokuta. He saw government ministers; he had an interview with the Queen and Prince Albert; he spoke at meetings all over the country, invariably to great effect. This grave, eloquent, well-informed black clergyman was the most impressive tribute to the effect of the missionary movement that most British people had seen; and Henry Venn, the CMS secretary who organized the visit, believed that it was Crowther who finally moved the government to action.

But the missionaries’ day-to-day activities lay in commending the Gospel and nourishing the infant church. There was a particularly moving incident for Crowther, when he was reunited with the mother and sister from whom he had been separated when the raiders took them more than twenty years earlier. They were among the first in Abeokuta to be baptized.

In Sierra Leone the church had used English in its worship. The new mission worked in Yoruba, with the advantage of native speakers in Crowther and his family and in most of the auxiliaries, and with Crowther’s book to assist the Europeans. Townsend, an excellent practical linguist, even edited a Yoruba newspaper. But the most demanding activity was Bible translation. The significance of the Yoruba version has not always been observed. It was not the first translation into an African language; but, insofar as Crowther was the leading influence in its production, it was the first by a native speaker. Early missionary translations naturally relied heavily on native speakers as informants and guides; but in no earlier case was a native speaker able to judge and act on an equal footing with the European.

Crowther insisted that the translation should indicate tone – a new departure. In vocabulary and style he sought to get behind colloquial speech by listening to the elders, by noting significant words that emerged in his discussions with Muslims or specialists in the old religion. Over the years, wherever he was, he noted words, proverbs, forms of speech. One of his hardest blows was the loss of the notes of eleven years of such observations, and some manuscript translations, when his house burned down in 1862.

Written Yoruba was the product of missionary committee work, Crowther interacting with his European colleagues on matters of orthography. Henry Venn engaged the best linguistic expertise available in Europe – not only Schön and the society’s regular linguistic adviser, Professor Samuel Lee of Cambridge, but the great German philologist Lepsius. The outcome may be seen in the durability of the Yoruba version of the Scriptures to which Crowther was the chief contributor and in the vigorous vernacular literature in Yoruba that has grown up.

New Niger Expeditions and a Mission to the Niger

In 1854 the merchant McGregor Laird sponsored a new Niger expedition, on principles similar to the first, but with a happier outcome. The CMS sent Crowther on this expedition.
It revived the vision he had seen in 1841 – a chain of missionary operations hundreds of miles along the Niger, into the heart of the continent. He urged a beginning at Onitsha, in Igboland. The opportunity was not long in coming. In 1857, he and J. C. Taylor, a Sierra Leonean clergyman of liberated Igbo parentage, joined Laird’s next expedition to the Niger. Taylor opened the Igbo mission at Onitsha; Crowther went upriver. Shipwrecked, and stranded for months, he began to study the Nupe language and surveyed openings to the Nupe and Hausa peoples. The Niger Mission had begun.

Henry Venn soon made a formal structure for it. But it was a mission on a new principle. Crowther led a mission force consisting entirely of Africans. Sierra Leone, as he and Schön had foreseen so long ago, was now evangelizing inland Africa. For nearly half a century that tiny country sent a stream of missionaries, ordained and lay, to the Niger territories. The area was vast and diverse: Muslim emirates in the north, ocean-trading city-states in the Delta, the vast Igbo populations in between. It is cruel that the missionary contribution of Sierra Leone has been persistently overlooked, and even denied. It is possible here to consider only three aspects of a remarkable story. Two have been somewhat neglected.

More Legacy in Language

One of these is the continued contribution to language study and translation. Crowther himself wrote the first book on Igbo. He begged Schön, now serving an English parish, to complete his Hausa dictionary. He sent one of his missionaries to study Hausa with Schön. Most of his Sierra Leone staff, unlike people of his own generation, were not native speakers of the languages of the areas they served. The great Sierra Leone language laboratory was closing down; English and the common language, Krio, took over from the languages of the liberated. Add to this the limited education of many Niger missionaries, and their record of translation and publication is remarkable.

The Engagement with Islam

Crowther’s Niger Mission also represents the first sustained missionary engagement with African Islam in modern times. In the Upper Niger areas in Crowther’s time, Islam, largely accepted by the chiefs, was working slowly through the population in coexistence with the old religion. From his early experiences in Sierra Leone, Crowther understood how Islamic practice could merge with traditional views of power. He found a demand for Arabic Bibles, but was cautious about supplying them unless he could be sure they would not be used for charms. His insight was justified later, when the young European missionaries who succeeded him wrote out passages of Scripture on request, pleased at such a means of Scripture distribution. They stirred up the anger of Muslim clerics – not because they were circulating Christian Scriptures, but because they were giving them free, thus undercutting the trade in quranic charms.

In discussion with Muslims, Crowther sought common ground and found it at the nexus of Qur’an and Bible: Christ as the great prophet, his miraculous birth, Gabriel as the messenger of God.
He enjoyed courteous and friendly relations with Muslim rulers, and his writings trace various discussions with rulers, courts, and clerics, recording the questions raised by Muslims, and his own answers, the latter as far as possible in the words of Scripture: “After many years’ experience, I have found that the Bible, the sword of the Spirit, must fight its own battle, by the guidance of the Holy Spirit.” Christians should of course defend Trinitarian doctrine, but let them do so mindful of the horror-stricken cry of the Qur’an, “Is it possible that Thou dost teach that Thou and Thy Mother are two Gods?” In other words, Christians must show that the things that the Muslims fear as blasphemous are no part of Christian doctrine.

Crowther, though no great scholar or Arabist, developed an approach to Islam in its African setting that reflected the patience and the readiness to listen that marked his entire missionary method. Avoiding denunciation and allegations of false prophecy, it worked by acceptance of what the Qur’an says of Christ, and an effective knowledge of the Bible. Crowther looked to the future with hope; the average African Christian knew the Bible much better than the average African Muslim knew the Qur’an. And he pondered the fact that the Muslim rule of faith was expressed in Arabic, the Christian in Hausa, or Nupe or Yoruba. The result was different understandings of how the faith was to be applied in life.

The Indigenization of the Episcopate

The best-known aspect of Crowther’s later career is also the most controversial: his representation of the indigenous church principle. We have seen that he was the first ordained minister of his church in his place. It was the policy of Henry Venn, then newly at the helm of the CMS, to strengthen the indigenous ministry. More and more Africans were ordained, some for the Yoruba mission. And Venn wanted well-educated, well-trained African clergy; such people as Crowther’s son Dandeson (who became archdeacon) and his son-in-law T. B. Macaulay (who became principal of Lagos Grammar School) were better educated than many of the homespun English missionaries.

Venn sought self-governing, self-supporting, self-propagating churches with a fully indigenous pastorate. In Anglican terms, this meant indigenous bishops. The missionary role was a temporary one; once a church was established, the missionary should move on. The birth of the church brought the euthanasia of the mission. With the growth of the Yoruba church, Venn sought to get these principles applied in Yorubaland. Even the best European missionaries thought this impractical, the hobbyhorse of a doctrinaire home-based administrator.

As we have seen, Venn made a new sphere of leadership for Crowther, the outstanding indigenous minister in West Africa. But he went further, and in 1864 secured the consecration of Crowther as bishop of “the countries of Western Africa beyond the limits of the Queen’s dominions,” a title reflecting some constraints imposed by Crowther’s European colleagues and the peculiarities of the relationship of the Church of England to the Crown. Crowther, a genuinely humble man, resisted; Venn would take no refusal.

In one sense, the new diocese represented the triumph of the three-self principle and the indigenization of the episcopate. But it reflected a compromise, rather than the full expression of those principles. It was, after all, essentially a mission, drawing most of its clergy not from natives of the soil but from Sierra Leone.
Its ministry was “native” only in the sense of not being European. Three-self principles required it to be self-supporting; this meant meager resources, missionaries who got no home leave, and the need to present education as a salable product.

The story of the later years of the Niger mission has often been told and variously interpreted. It still raises passions and causes bitterness. There is no need here to recount more than the essentials: that questions arose about the lives of some of the missionaries; that European missionaries were brought into the mission, and then took it over, brushing aside the old bishop (he was over eighty) and suspending or dismissing his staff. In 1891 Crowther, a desolate, broken man, suffered a stroke; on the last day of the year, he died. A European bishop was appointed to succeed him. The self-governing church and the indigenization of the episcopate were abandoned.

Contemporary mission accounts all praise Crowther’s personal integrity, graciousness, and godliness. In the Yoruba mission, blessed with many strong, not to say prickly, personalities, his influence had been irenic. In Britain he was recognized as a cooperative and effective platform speaker. (A CMS official remembered Crowther’s being called on to give a conference address on “Mission and Women” and holding his audience spellbound.) Yet the same sources not only declared Crowther “a weak bishop” but drew the moral that “the African race” lacked the capacity to rule.

European thought about Africa had changed since the time of Buxton; the Western powers were now in Africa to govern. Missionary thought about Africa had changed since the days of Henry Venn; there were plenty of keen, young Englishmen to extend the mission and order the church; a self-governing church now seemed to matter much less. And evangelical religion had changed since Crowther’s conversion; it had become more individualistic and more otherworldly. A young English missionary was distressed that the old bishop who preached so splendidly on the blood of Christ could urge on a chief the advantages of having a school and make no reference to the future life. This story illustrates in brief the two evangelical itineraries: the short route via Keswick, and the long one via the White Man’s Grave, the Niger Expedition and the courts of Muslim rulers of the north.

There were some unexpected legacies even from the last sad days. One section of the Niger mission, that in the Niger Delta, was financially self-supporting. Declining the European takeover, it long maintained a separate existence under Crowther’s son, Archdeacon Dandeson Crowther, within the Anglican Communion but outside the CMS. It grew at a phenomenal rate, becoming so self-propagating that it ceased to be self-supporting. Other voices called for direct schism; the refusal to appoint an African successor to Crowther, despite the manifest availability of outstanding African clergy, marks an important point in the history of African Independent churches. The treatment of Crowther, and still more the question of his successor, gave a focus for the incipient nationalist movement of which E. W. Blyden was the most eloquent spokesman. Crowther thus has his own modern place in the martyrology of African nationalism. But the majority of Christians, including those natural successors of Crowther who were passed over or, worse, suffered denigration or abuse, took no such course. They simply waited.
Crowther was the outstanding representative of a whole body of West African church leaders who came to the fore in the pre-Imperial age and were superseded in the Imperial. But the Imperial age itself was to be only an episode. The legacy of Samuel Ajayi Crowther, the humble, devout exponent of a Christian faith that was essentially African and essentially missionary, has passed to the whole vast church of Africa and thus to the whole vast church of Christ.

Andrew F. Walls

Notes

Crowther himself spelled his Yoruba name (which he employed as a second name) thus. The modern spelling is Ajayi, and this spelling is commonly used today, especially by Nigerian writers.

On the relation of the orisa to Olorun, see E. B. Idowu, Olódùmaré: God in Yoruba Belief (London: Longmans, 1962). Idowu argues that Olorun is never called an orisa, nor classed among them. The story is representative of hundreds that show the God of the Bible active in the African past through such prophecies of the Christian future of Africa.

Walls, “A Second Narrative of Samuel Ajayi Crowther’s Early Life,” Bulletin of the Society for African Church History 2 (1965): 14.

On Buxton, see pages 11-17, above.

Crowther was not the first African to receive Anglican ordination. As early as 1765, Philip Quaque, from Cape Coast in what is now Ghana, who had been brought to England as a boy, was appointed chaplain to the British trading settlement at Cape Coast. He died in 1816. Crowther had never heard of him until he went ashore at Cape Coast en route to the Niger in 1841 and saw a memorial tablet. See Jesse Page, The Black Bishop (London: Hodder and Stoughton, 1908), p. 53.

Especially by J. F. A. Ajayi, Christian Missions in Nigeria: 1841-1891 (London: Longmans, 1965). See also S. O. Biobaku, The Egba and Their Neighbours: 1842-1874 (Oxford: Clarendon Press, 1957).

Repeated, for instance, by Stephen Neill, Christian Missions, Pelican History of the Church (Harmondsworth: Penguin Books, 1964), p. 306, who said, “It is only to be regretted that its Christianity has not proved expansive.” In fact, few countries can claim so much expansion in proportion to the numbers of the Christian population.

See P. E. H. Hair, The Early Study of Nigerian Languages (Cambridge: Cambridge Univ. Press, 1967), p. 82, for an assessment.

See Stephen Neill, Christian Missions (pp. 377f.), for the common impression of the linguistic incompetence of Crowther and the Niger missionaries. Hair’s careful catalog of their translations in the languages of the Lower Niger, as well as his descriptions of Crowther’s linguistic surveys in the Upper Niger, shows how misleading this is.

Crowther, Experiences with Heathens and Mohammedans in West Africa (London, 1892), p. 28.

See E. A. Ayandele, The Missionary Impact on Modern Nigeria: 1842-1914 (London: Longmans, 1966), for a representative modern African view. Neill (Christian Missions, p. 377) reflects the traditional “missionary” view. Ajayi, Christian Missions in Nigeria, sets the context, and G. O. M. Tasie notes some neglected factors in his Christian Missionary Enterprise in the Niger Delta: 1864-1918 (Leiden: Brill, 1978).

Ajayi, Christian Missions in Nigeria, p. 218.

For the story, see Tasie, Christian Missionary Enterprise in the Niger Delta. See also Jehu J. Hanciles, “Dandeson Coates Crowther and the Niger Delta Pastorate: Blazing Torch or Flickering Flame?” International Bulletin of Missionary Research 18, no. 4 (1994): 166-72.

See J. B. Webster, The African Churches among the Yoruba (Oxford: Clarendon Press, 1964).
See, for instance, H. R. Lynch, Edward Wilmot Blyden (London: Oxford Univ. Press, 1967).

Works by S. A. Crowther (other than translations and linguistic works)

1843 (with J. F. Schön) Journal of an Expedition up the Niger in 1841. London.
1855 Journal of an Expedition up the Niger and Tshadda Rivers. London.
1859 (with J. C. Taylor) The Gospel on the Banks of the Niger… London. Reprint, London: Dawsons, 1968.
1965 (by A. F. Walls) “A Second Narrative of Samuel Ajayi Crowther’s Early Life,” Bulletin of the Society for African Church History 2: 5-14. An autobiographical fragment.

Works about S. A. Crowther

Ajayi, J. F. A. Christian Missions in Nigeria: 1841-1891. London: Longmans, 1965.
——. “How Yoruba Was Reduced to Writing,” Odu: Journal of Yoruba Studies (1961): 49-58.
Ayandele, E. A. The Missionary Impact on Modern Nigeria: 1842-1914. London: Longmans, 1966.
Hair, P. E. H. The Early Study of Nigerian Languages. Cambridge: Cambridge Univ. Press, 1967.
Mackenzie, P. R. Inter-religious Encounters in Nigeria: S. A. Crowther’s Attitude to African Traditional Religion and Islam. Leicester: Leicester Univ. Press, 1976.
Page, Jesse. The Black Bishop. London: Hodder and Stoughton, 1908. Still the fullest biography, though limited in value.
Shenk, W. R. Henry Venn: Missionary Statesman. Maryknoll, N.Y.: Orbis Books, 1983.
Tasie, G. O. M. Christian Missionary Enterprise in the Niger Delta: 1864-1918. Leiden: Brill, 1978.
Walls, A. F. “Black Europeans, White Africans.” In D. Baker (ed.), Religious Motivation: Biographical and Sociological Problems of the Church Historian, pp. 339-48. Studies in Church History. Cambridge: Cambridge Univ. Press, 1978.

This article, from the International Bulletin of Missionary Research, January 1992, Vol. 16, Issue 1, pp. 15-21, is reproduced, with permission, from Mission Legacies: Biographical Studies of Leaders of the Modern Missionary Movement, copyright © 1994, edited by G. H. Anderson, R. T. Coote, N. A. Horner, J. M. Phillips. All rights reserved.
1.A.17 The Calcium-dependent Chloride Channel (Ca-ClC) Family

The Anoctamin Superfamily of cation and anion channels, as well as lipid scramblases, includes three functionally characterized families: the Anoctamin (ANO), Transmembrane Channel (TMC) and Ca2+-permeable Stress-gated Cation Channel (CSC) families. There are also four families of functionally uncharacterized proteins, which are referred to as the Anoctamin-like (ANO-L), Transmembrane Channel-like (TMC-L), and CSC-like (CSC-L1 and CSC-L2) families (Medrano-Soto et al. 2018). Protein clusters and trees showing the relative relationships among the seven families were constructed, and topological analyses suggested that the members of these families have essentially the same topologies. Comparative examination of these homologous families provided insight into possible mechanisms of action, indicated the currently recognized organismal distributions of these proteins, and suggested drug design potential for the disease-related channel proteins (Medrano-Soto et al. 2018).

During the first postnatal week of mouse development, the mechanotransducer current amplitude grew, and transducer adaptation became faster and more effective, due partly to a developmental switch from TMC2- to TMC1-containing channels and partly to an increase in channel expression (Goldring et al. 2019). Nist-Lund et al. 2019 designed TMC1 and TMC2 gene replacement therapies that corrected hearing and balance deficits in mice. TMC1 and TMC2 are hair cell transduction channels (Jia et al. 2019). Signaling through the interleukin-4 and interleukin-13 receptor complexes regulates cholangiocyte TMEM16A expression and biliary secretion (Dutta et al. 2020). ANOs 3-7 in the anoctamin/Tmem16 family are intracellular membrane proteins (Duran et al. 2012). Both anion channels (such as TMEM16A) and phospholipid scramblases (such as TMEM16F) are activated by intracellular Ca2+ in the low micromolar (μM) range, but many divalent cations at mM concentrations activate them further (Nguyen et al. 2021).

Impaired chloride transport can cause diseases as diverse as cystic fibrosis, myotonia, epilepsy, hyperekplexia, lysosomal storage disease, deafness, renal salt loss, kidney stones and osteopetrosis. These disorders are caused by mutations in genes belonging to unrelated gene families, including CLC chloride channels and GABA and glycine receptors. Diseases due to mutations in anoctamin 1, TMEM16E, and bestrophin 1 might be due to a loss of Ca2+-activated Cl- channels, although this remains to be shown (Planells-Cases and Jentsch, 2009). The evolution and functional divergence of anoctamin family members has been reported (Milenkovic et al. 2010). Some, but not all, TMEM16 homologues can catalyze phospholipid flipping as phospholipid scramblases in addition to their roles as ion channels (Malvezzi et al. 2013). Compromised anoctamin function causes a wide range of diseases, such as hearing loss (ANO2), bleeding disorders (ANO6), ataxia and dystonia (ANO3), persistent Borrelia and mycobacteria infection (ANO10), skeletal syndromes like gnathodiaphyseal dysplasia and limb-girdle muscular dystrophy (ANO5), and cancer (ANO1) (Kunzelmann et al. 2015). Calcium release from intracellular stores, mediated by G-protein coupled receptors, can lead to CaCC activation, and prominent inflammatory mediators like bradykinin or serotonin also stimulate CaCCs via such a mechanism (Salzer and Boehm 2019).
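A rough quantitative feel for this Ca2+ dependence is commonly given by a Hill-type activation curve, the form usually fitted to CaCC data. The short Python sketch below is illustrative only and is not taken from the studies cited above; the EC50 of 1 μM and the Hill coefficient of 2 are placeholder assumptions chosen simply to sit in the low-micromolar regime just described.

# Illustrative Hill-type activation of a Ca2+-activated channel.
# The EC50 and Hill coefficient are assumed placeholder values, not measurements.

def fraction_active(ca_um: float, ec50_um: float = 1.0, n_hill: float = 2.0) -> float:
    """Fraction of maximal activation at an intracellular [Ca2+] given in micromolar."""
    return ca_um ** n_hill / (ca_um ** n_hill + ec50_um ** n_hill)

for ca in (0.1, 0.3, 1.0, 3.0, 10.0):
    print(f"[Ca2+] = {ca:4.1f} uM -> activation ~ {fraction_active(ca):.2f}")

With these placeholder parameters, activation rises steeply between roughly 0.3 and 3 μM Ca2+, which is one way to visualize why "low micromolar" Ca2+ is the physiologically relevant range for these proteins.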
The transport of bicarbonate (HCO3-) by anion channels and its relevance to human diseases has been discussed (Shin et al. 2020). Calcium-dependent chloride channels are required for normal electrolyte and fluid secretion, olfactory perception, and neuronal and smooth muscle excitability in animals (Pang et al. 2013).

Treatment of bronchial epithelial cells with interleukin-4 (IL-4) causes increased calcium-dependent chloride channel activity. Caputo et al., 2008 performed a global gene expression analysis to identify membrane proteins that are regulated by IL-4. TMEM16A is associated with calcium-dependent chloride current, as measured with halide-sensitive fluorescent proteins, short-circuit current, and patch-clamp techniques. Their results indicated that TMEM16A is an intrinsic constituent (9 putative TMSs) of the calcium-dependent chloride channel. These results have been confirmed and extended by Yang et al., 2008 and Ferrera et al., 2009. Transmembrane protein 16B (TMEM16B) is also a Ca2+-activated Cl- channel, but with different voltage dependence and unitary conductance (Galietta, 2009).

Scudieri et al. (2011) reported that TMEM16A has a putative structure consisting of eight transmembrane domains with both the amino- and the carboxy-termini protruding into the cytosol. TMEM16A is also characterized by the existence of different protein variants generated by alternative splicing. TMEM16B (anoctamin-2) is also associated with CaCC activity, although with different properties. TMEM16B-dependent channels require higher intracellular Ca2+ concentrations and have faster activation and deactivation kinetics. Expression of other anoctamins is instead devoid of detectable channel activity. These proteins, such as TMEM16F (anoctamin-6), may have different functions. Yue et al. 2019 have presented a comparative overview of the diverse functions of TMC channels in different species.

All vertebrate cells regulate their cell volume by activating chloride channels, thereby effecting regulatory volume decrease. Almaça et al., 2009 showed that the Ca2+-activated Cl- channel TMEM16A, together with other TMEM16 proteins, is activated by cell swelling through an autocrine mechanism that involves ATP release and binding to purinergic P2Y2 receptors. TMEM16A channels are activated by ATP through an increase in intracellular Ca2+ and a Ca2+-independent mechanism engaging extracellular-regulated protein kinases (ERK1/2). The ability of epithelial cells to activate a Cl- conductance upon cell swelling, and to decrease their cell volume, was dependent on TMEM16 proteins. Activation was reduced in the colonic epithelium and in salivary acinar cells from mice lacking expression of TMEM16A. Thus, TMEM16 proteins appear to be a crucial component of epithelial volume-regulated Cl- channels and may also have a function during proliferation and apoptotic cell death.

Interstitial cells of Cajal (ICC) generate pacemaker activity (slow waves) in gastrointestinal (GI) smooth muscles. Several conductances, such as Ca2+-activated Cl- channels (CaCC) and non-selective cation channels (NSCC), have been suggested to be involved in slow wave depolarization. Hwang et al., 2009 investigated the expression and function of anoctamin 1 (ANO1), encoded by Tmem16a, which is highly expressed in ICC. GI muscles express splice variants of the Tmem16a transcript in addition to other paralogues of the Tmem16a family.
ANO1 protein is expressed abundantly and specifically in ICC in all regions of the murine, non-human primate (Macaca fascicularis) and human GI tracts. The CaCC-blocking drugs niflumic acid and 4,4'-diisothiocyano-2,2'-stilbene-disulfonic acid (DIDS) reduced the frequency of, and ultimately blocked, slow waves in murine, primate, and human small intestine and stomach in a concentration-dependent manner. Slow waves failed to develop by birth in mice homozygous for a null allele of Tmem16a and, unlike in wild-type and heterozygous muscles, did not develop subsequent to birth in organ culture. These data demonstrate the fundamental role of ANO1 in the generation of slow waves in GI ICC (Hwang et al., 2009).

The calcium-activated chloride channel anoctamin 1 (ANO1; TMEM16A) is fundamental for the function of epithelial organs, and mice lacking ANO1 expression exhibit transport defects and a pathology similar to that of cystic fibrosis. They also show a general defect of epithelial electrolyte transport. Schreiber et al. (2010) analyzed expression of all ten members (ANO1-ANO10) in a broad range of murine tissues and detected predominant expression of ANO1, 6, 7, 8, 9, 10 in epithelial tissues, while ANO2, 3, 4, 5 are common in neuronal and muscle tissues. When expressed in Fisher Rat Thyroid (FRT) cells, all ANO proteins localized to the plasma membrane, but only ANO1, 2, 6, and 7 produced Ca2+-activated Cl- conductance. In contrast, ANO9 and ANO10 suppressed baseline Cl- conductance, and coexpression of ANO9 with ANO1 inhibited ANO1 activity. Patch clamping of ANO-expressing FRT cells indicated that, apart from ANO1, ANO6 and 10 produced chloride currents, but with very different Ca2+ sensitivity and activation time. Thus, each tissue expresses a set of anoctamins that form cell- and tissue-specific Ca2+-dependent Cl- channels (Schreiber et al., 2010).

In all animal cells, phospholipids are asymmetrically distributed between the outer and inner leaflets of the plasma membrane. This asymmetrical phospholipid distribution is disrupted in various biological systems. For example, when blood platelets are activated, they expose phosphatidylserine (PtdSer) to trigger the clotting system. The PtdSer exposure is believed to be mediated by Ca2+-dependent phospholipid scramblases that transport phospholipids bidirectionally. Suzuki et al. (2010) showed that TMEM16F (transmembrane protein 16F) is essential for the Ca2+-dependent exposure of phosphatidylserine on the cell surface. Wild-type and mutant forms of TMEM16F were localized to the plasma membrane and conferred Ca2+-dependent scrambling of phospholipids. A patient with Scott syndrome, which results from a defect in phospholipid scrambling activity, was found to carry a mutation at a splice-acceptor site of the gene encoding TMEM16F, causing premature termination of the protein (Suzuki et al., 2010).

The Ca-ClC anoctamin (Tmem16) gene family was first identified by bioinformatic analysis in 2004. In 2008, it was shown independently by 3 laboratories that the first two members (Tmem16A and Tmem16B) of this 10-gene family are Ca2+-activated Cl- channels. Because these proteins are thought to have 8 transmembrane domains and be anion-selective channels, the alternative name, Anoctamin (anion and octa=8), has been proposed. It is not clear that all members of this family are anion channels or have the same 8-transmembrane domain topology. Between 2008 and 2011, nearly 100 papers were published on this gene family (Duran and Hartzell, 2011).
Ano1 has been linked to cancer, while mutations in Ano5 are linked to several forms of muscular dystrophy (LGMD2L and MMD-3). Mutations in Ano10 are linked to autosomal recessive spinocerebellar ataxia, while mutations in Ano6 are linked to Scott syndrome, a rare bleeding disorder. Duran and Hartzell (2011) have reviewed the physiology and structure-function relationships of the Tmem16 gene family.

Tmc1 and Tmc2 (TC#s 1.A.17.4.6 and 1.A.17.4.1, respectively) may play a role in hearing and are required for normal function of cochlear hair cells, possibly as Ca2+ channels or Ca2+ channel subunits (Kim and Fettiplace 2013) (see also family 1.A.82). Mice lacking both channels lack hair cell mechanosensory potentials (Kawashima et al. 2011). There are 8 members of this family in humans, 1 in Drosophila and 2 in C. elegans. One of the latter two is expressed in mechanoreceptors (Smith et al. 2010). Tmc-1 is a sodium-sensitive cation channel required for salt (Na+) chemosensation in C. elegans, where it is required for salt-evoked neuronal activity and behavioural avoidance of high concentrations of NaCl (Chatzigeorgiou et al. 2013). Most evidence is consistent with TMCs being pore-forming subunits of the hair-cell transduction channel (Corey and Holt 2016).

Hair cells express two molecularly and functionally distinct mechanotransduction channels with different subcellular distributions. One is activated by sound and is responsible for sensory transduction. This sensory transduction channel is expressed in hair cell stereocilia, and its activity is affected by mutations in the genes encoding the transmembrane proteins TMHS (TC# 1.A.82.1.1), TMIE (TC# 9.A.30.1.1), TMC1 and TMC2 (family 1.A.17.4) (Wu et al. 2016). The other is the Piezo2 channel (TC# 1.A.75.1.2).

Mutations in transmembrane channel-like gene 1 (TMC1/Tmc1) cause dominant or recessive hearing loss in humans and mice. Tmc1 mRNA is specifically expressed in neurosensory hair cells of the inner ear. Cochlear neurosensory hair cells of Tmc1 mutant mice fail to mature into fully functional sensory receptors and exhibit concomitant structural degeneration that could be a cause or an effect of the maturational defect. The molecular and cellular functions of TMC1 protein are substantially unknown due, at least in part, to in situ expression levels that are prohibitively low for direct biochemical analysis (Labay et al., 2010). There are seven additional mammalian TMC paralogs. An initial PSORT-II analysis of human and mouse TMC proteins did not detect N-terminal signal sequences or other trafficking signals. The TMC proteins are predicted to contain 6-10 TMSs and a novel, conserved region termed the TMC domain. Human TMC6 (also known as EVER1) and TMC8 (EVER2) proteins are retained in the endoplasmic reticulum (Labay et al., 2010). Truncating mutations of EVER1 and EVER2 cause epidermodysplasia verruciformis (EV; MIM 226400), characterized by susceptibility to cutaneous human papilloma virus infections and associated non-melanoma skin cancers.

Sound stimuli elicit movement of the stereocilia that make up the hair bundle of cochlear hair cells, putting tension on the tip links connecting the stereocilia and thereby opening mechanotransducer (MT) channels. Tmc1 and Tmc2, two members of the transmembrane channel-like family, are necessary for mechanotransduction. Kim et al. (2013) recorded MT currents elicited by hair bundle deflections in mice with null mutations of Tmc1, Tmc2, or both.
During the first postnatal week, they observed normal MT currents in hair cells lacking Tmc1 or Tmc2; however, in the absence of both isoforms, they recorded a large MT current that was phase-shifted 180°, being evoked by displacements of the hair bundle away from its tallest edge rather than toward it as in wild-type hair cells. The anomalous MT current in hair cells lacking Tmc1 and Tmc2 was blocked by FM1-43, dihydrostreptomycin, and extracellular Ca2+ at concentrations similar to those that blocked wild type. MT channels in the double knockouts carried Ca2+ with a lower permeability than wild-type or single mutants. The MT current in double knockouts persisted during exposure to submicromolar Ca2+, even though this treatment destroyed the tip links. Kim et al. (2013) concluded that the Tmc isoforms do not themselves constitute the MT channel but are essential for targeting and interaction with the tip link. Changes in the MT conductance and Ca2+ permeability observed in Tmc1 mutants may stem from loss of interaction with protein partners in the transduction complex. See also Kim et al. (2013).

Ion channels promote the development and progression of tumors. TMEM16A is overexpressed in several tumor types. The role of TMEM16A in gliomas and the potential underlying mechanisms were analyzed by Liu et al. 2014. TMEM16A was abundant in various grades of gliomas and cultured glioma cells. Knockdown of TMEM16A suppressed cell proliferation, migration and invasion. Nuclear factor kappaB (NF-κB) was activated by overexpression of TMEM16A, and TMEM16A regulated the expression of NF-κB-mediated genes, including cyclin D1, cyclin E and c-myc, involved in cell proliferation, and the matrix metalloproteinases MMP2 and MMP9, which are associated with the migration and invasion of glioma cells.
ANO2 is involved in olfaction, whereas ANO6 works as a scramblase whose mutation causes a rare bleeding disorder, the Scott syndrome. ANO5 is associated with muscle and bone diseases (Oh and Jung 2016). An X-ray crystal structure of a fungal TMEM16 has been reported, which explains a precise molecular gating mechanism as well as ion conduction or phospholipid transport across the plasma membrane (Brunner et al. 2014). Polar and charged lipid headgroups are believed to move through the low-dielectric environment of the membrane by traversing a hydrophilic groove on the membrane-spanning surface of the protein. Bethel and Grabe 2016 explored the membrane-protein interactions involved in lipid scrambling. A global pattern of charged and hydrophobic surface residues bends the membrane in a large-amplitude sinusoidal wave, resulting in bilayer thinning across the hydrophilic groove. Atomic simulations uncovered two lipid headgroup- interaction sites flanking the groove. The cytoplasmic site nucleates headgroup-dipole stacking interactions that form a chain of lipid molecules that penetrate into the groove. In two instances, a cytoplasmic lipid interdigitates into this chain, crosses the bilayer, and enters the extracellular leaflet, and the reverse process happens twice as well. Several family members appear to all bend the membrane - even those that lack scramblase activity. Sequence alignments show that the lipid interaction sites are conserved in many family members but less so in those with reduced scrambling ability (Bethel and Grabe 2016). TMEM16A forms a dimer with two pores. Dang et al. 2017 presened de novo atomic structures of the transmembrane domains of mouse TMEM16A in nanodiscs and in lauryl maltose neopentyl glycol as determined by single-particle electron cryo-microscopy. These structures reveal the ion permeation pore and represent different functional states (Dang et al. 2017). The structure in lauryl maltose neopentyl glycol has one Ca2+ ion resolved within each monomer with a constricted pore; this is likely to correspond to a closed state, because a CaCC with a single Ca2+ occupancy requires membrane depolarization in order to open. The structure in nanodiscs has two Ca2+ ions per monomer, and its pore is in a closed conformation. Ten residues are distributed along the pore that interact with permeant anions and affect anion selectivity, and seven pore-lining residues cluster near pore constrictions and regulate channel gating (Dang et al. 2017). Overexpression of TMEM16A may be associated with cancer progression. Zhang et al. 2017 showed that four flavinoids - luteolin, galangin, quercetin and fisetin - have inhibitory IC50 values ranging from 4.5 to 15 muM. These flavonoids inhibited TMEM16A currents as well as cell proliferation and migration of LA795 cancer cells. A good correlation between TMEM16A current inhibition and cell proliferation and migration was observed (Zhang et al. 2017). Similar to TMEM16F and 16E, seven TMEM16 family members were found to carry a domain (SCRD; scrambling domain) spanning the fourth and fifth TMSs that conferred scrambling ability to TMEM16A. By introducing point mutations into TMEM16F, Gyobu et al. 2017 found that a lysine in the fourth TMS of the SCRD as well as an arginine in the third and a glutamic acid in the sixth transmembrane segment were important for exposing phosphatidylserine from the inner to the outer leaflet. 
These results suggest that TMEM16 provides a cleft containing hydrophilic 'stepping stones' for the outward translocation of phospholipids (Gyobu et al. 2017). Hair cells in the inner ear convert mechanical stimuli provided by sound waves and head movements into electrical signals. Several mechanically evoked ionic currents with different properties have been recorded in hair cells. In 2018, searches for the protein(s) that form the underlying ion channel(s) were not definitive. The mechanoelectrical transduction (MET) channel is near the tips of stereocilia in hair cell. It is responsible for sensory transduction (Qiu and Müller 2018). Several components of the sensory mechanotransduction machinery have been identified, including the multi-transmembrane proteins tetraspan membrane protein in hair cell stereocilia (TMHS)/LHFPL5, transmembrane inner ear (TMIE) and transmembrane channel-like proteins 1 and 2 (TMC1/2). However, there remains considerable uncertainty regarding the molecules that form the channel pore. In addition to the sensory MET channel, hair cells express the mechanically gated ion channel PIEZO2, which is localized near the base of stereocilia and is not essential for sensory transduction. The function of PIEZO2 in hair cells is not entirely clear, but it may play a role in damage sensing and repair processes. Additional stretch-activated channels of unknown molecular identity are found to localize at the basolateral membrane of hair cells. Cunningham and Müller 2018 review current knowledge regarding the different mechanically gated ion channels in hair cells and discuss open questions concerning their molecular compositions and functions. TMEM16F is an enigmatic Ca2+-activated phospholipid scramblase (CaPLSase) that passively transports phospholipids down their chemical gradients and mediates blood coagulation, bone development and viral infection. Le et al. 2019 identified an inner activation gate, formed of three hydrophobic residues, F518, Y563 and I612, in the middle of the phospholipid permeation pathway. Disrupting the inner gate alters phospholipid permeation. Lysine substitutions of F518 and Y563 lead to constitutively active CaPLSases that bypass Ca2+-dependent activation. Strikingly, an analogous lysine mutation to TMEM16F-F518 in TMEM16A (L543K) is sufficient to confer CaPLSase activity to this Ca2+-activated Cl- channel (Le et al. 2019). Both lipid and ion translocation by Ca2+-regulated TMEM16 transmembrane proteins utilizes a membrane-exposed hydrophilic groove, several conformations of which are observed in TMEM16 protein structures. From analyses of atomistic molecular dynamics simulations of Ca2+-bound nhTMEM16, the mechanism of a conformational transition of the groove from membrane-exposed to occluded involves the repositioning of TMS4 following its disengagement from a TMS3/TMS4 interaction interface (Khelashvili et al. 2019). Residue L302 is a key element in the hydrophobic TMS3/TMS4 interaction patch that braces the open-groove conformation, which should be changed by an L302A mutation. The structure of the L302A mutant determined by cryo-EM reveals a partially closed groove that could translocate ions, but not lipids. This was corroborated with functional assays showing severely impaired lipid scrambling, but robust channel activity by L302A (Khelashvili et al. 2019). Membrane lipids are both the substrates and a mechanistically responsive environment for TMEM16 scramblase proteins (Khelashvili et al. 2019). 
The last 4 TMSs in members of TC subfamily 1.A.17.5 show sequence similarity to a family of 5 TMS proteins in TC family 9.B.306. Thus, the latter may have been the precursor of the calcium-recognition domain of the anoctamins (see description of TC family 9.B.306). The reactions believed to be catalyzed by channels of the Ca-ClC family, in addition to lipid scrambling, are: Cl- (out) ⇌ Cl- (in) Cations (e.g., Ca2+) (out) ⇌ Cations (e.g., Ca2+) (in)
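Since these are passive, channel-mediated fluxes, net Cl- movement simply follows the electrochemical gradient. As a purely illustrative aside (not part of the TCDB entry), the sketch below computes the Nernst reversal potential for Cl-; the concentration values are textbook-style placeholders, not measurements from any study cited here.

# Illustrative only: Nernst reversal potential for Cl- across a membrane.
# The concentration values below are placeholder assumptions.
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # absolute temperature, K (about 37 degrees C)
F = 96485.0  # Faraday constant, C/mol
z = -1       # valence of Cl-

def nernst_mv(out_mm: float, in_mm: float) -> float:
    """Equilibrium potential (mV) for the given outside/inside concentrations."""
    return 1000.0 * (R * T) / (z * F) * math.log(out_mm / in_mm)

# Example: 120 mM extracellular vs. 30 mM intracellular chloride
print(f"E_Cl = {nernst_mv(120.0, 30.0):.1f} mV")  # about -37 mV

When the membrane potential is more positive than this value, Cl- flows inward through an open CaCC; when it is more negative, Cl- flows outward, which is the basis of the epithelial Cl- secretion discussed above.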
Disclosure: This post may contain affiliate links. Read my disclaimer for more information.

Here’s the thing. Your brain is the powerhouse that controls every single thing in your life, because it’s responsible for your every action, thought, and feeling. With the help of about 100 billion neurons transmitting signals across your brain, you’re capable of forming memories, thoughts, and feelings. In fact, you wouldn’t be able to read and digest every word I’ve written so far if it weren’t for your brain.

The brain is amazing, isn’t it? It’s the boss of your heart, the one that keeps your lungs breathing, and the partner that helps you focus and concentrate. That’s why maintaining your brain’s health is crucial. And the best way to keep your brain healthy and boost your memory is by consuming brain-healing foods. While you may find plenty of brain-food supplements on the market, nothing beats the goodness of whole foods.

Fortunately, there are many foods that improve memory and concentration you probably didn’t know about, including superfruits that make excellent brain-food snacks for kids and for students studying for exams. In this post, you’ll learn about 10 of the best fruits for brain health according to science. Read on if you want to find out more about the best foods for brain health that help boost your memory, concentration, learning, and cognitive function.

- Brain fruit #1: Blueberries
- Brain fruit #2: Strawberry
- Brain fruit #3: Bilberry
- Brain fruit #4: Blackcurrant
- Brain fruit #5: Other berries
- Brain fruit #6: Oranges
- Brain fruit #7: Cherries
- Brain fruit #8: Plums
- Brain fruit #9: Avocados
- Brain fruit #10: Apples
- Other nutritious fruits for healthy brain
- Related Questions

Brain fruit #1: Blueberries

When people talk about brain fruit, blueberries usually come out at the top of the list. In fact, Dr. Steven Pratt, the author of the best-selling book “SuperFoods Rx: Fourteen Foods That Will Change Your Life,” called these natural candies “brain berries.” Blueberries give you a lot of health benefits, including promoting heart health, preventing cancer and aging, and improving your brain health.

A recent clinical study found that berry fruits, such as blueberries, can reduce the risk of age-related neurodegenerative disorders, such as dementia and Alzheimer’s disease. Berry fruits can also enhance motor and cognitive functions. But what exactly makes the blueberry a superfruit for the brain?

Specifically, blueberries contain a high amount of a flavonoid (polyphenol) subclass called anthocyanins, which can pass through the blood-brain barrier and concentrate in the brain’s learning and memory areas. A study published in Neurobiology of Aging found that antioxidant supplementation protects the brain from age-related oxidative stress. The anthocyanin antioxidants in blueberries are especially good at safeguarding your brain from stress and degeneration.

A study published in Free Radical Biology and Medicine also suggests that, among antioxidant sources, phytochemicals in berry fruits (which include anthocyanins and caffeic acid) play a beneficial role in brain aging and neurodegenerative diseases. Their neuroprotective effects are mainly due to their anti-oxidative, anti-proliferative, anti-inflammatory, and antiviral functions. Moreover, according to a review published in Current Opinion in Clinical Nutrition and Metabolic Care, antioxidant-packed berry fruits are beneficial in improving learning and memory.
The positive cognitive effect may be due to the capability of berry polyphenols to interact directly with aging neurons. The improved interactions between brain cells help the neurons properly carry out their function during aging. Researchers also found that blueberry polyphenols, among others, can help prevent aging and related brain disorders through increased neurogenesis in the adult brain. Moreover, a preliminary study published in the Journal of Agricultural and Food Chemistry suggests that blueberry supplements help boost memory, especially amongst the elderly.

Animal studies have also shown that a diet rich in blueberries can considerably boost both the motor skills and the learning ability of aging rats, allowing the older rats to possess the same mental capability as younger rats. Blueberries may even delay short-term memory loss!

Brain fruit #2: Strawberry

Strawberries, scientifically known as Fragaria ananassa, are a native Mediterranean species that also grows in other regions of Eastern Europe. Aside from containing rich amounts of antioxidants, strawberries are one of the best natural sources of vitamin C. They are also rich in manganese and contain a decent amount of vitamin B9 and potassium. Amongst the berries, strawberries are the most commonly consumed across the globe.

Similar to blueberries, strawberries are an excellent food for brain power that can keep your brain sharp as you grow older. A recent study by Harvard researchers at Brigham and Women’s Hospital, published in the Annals of Neurology, discovered that high consumption of berries, including strawberries, over time can help slow down memory deterioration in older women by 2½ years.

Performance in the Morris water maze has also shown that strawberry extracts can enhance cognitive capabilities. An animal study published in the Journal of Neuroscience reported that maintaining rats for 2 months on antioxidant diets with strawberry extracts helped prevent the neurochemical and behavioral changes caused by aging. The researchers also reported enhancement of the rats’ motor behavior, spatial learning, and memory.

Aside from their brain-boosting effect, studies have shown that strawberries provide plenty of other potential health benefits, which include:

- Improve heart health and reduce the risk of death associated with heart diseases
- Regulate blood sugar
- Prevent certain cancers, such as oral cancer
- Enhance blood antioxidant status
- Reduce oxidative stress and inflammation

Brain fruit #3: Bilberry

Bilberries, scientifically known as Vaccinium myrtillus L., are originally found in Northern Europe, but today they can also be found in some parts of Asia and North America. They are sometimes referred to as European blueberry, huckleberry, blueberry, and whortleberry. Since bilberries look pretty much the same as North American blueberries, they are often called blueberries.

Bilberries provide one of the richest natural sources of anthocyanins, which makes them rich in antioxidants and gives them their blue-black color. They are also rich in vitamins C and E and manganese, and they contain carotenoids, zeaxanthin, and lutein. Each of these components provides significant health effects.

While bilberries are popularly known for their role in enhancing vision, studies have shown that bilberry can potentially promote brain health as well. A study released in Molecular Nutrition and Food Research suggests that bilberry and its anthocyanins have neuroprotective properties and preserve brain and retinal function.
Another study published in Nutrients reported that bilberry can enhance long-term and working memory in the elderly population. Aside from these neuroprotective and ocular effects, several studies have reported other potential health benefits of bilberry, including:

- Anti-obesity effects and lower blood sugar levels
- Lipid-mitigating effects
- Improved heart health
- Lower oxidative stress
- Anti-microbial properties

Brain fruit #4: Blackcurrant

For more than half a century, the blackcurrant was called the “forbidden fruit” and was banned in the United States. This berry, with its translucent pulp in shades of green and red, was prohibited because people believed it helped spread a fungus (white pine blister rust) which harmed the timber industry. Fortunately, a new variety of disease-resistant currants was introduced back in 1966, along with innovative methods to prevent the fungus. Soon after, the government left it up to the states to legalize the fruit, and in 2003, New York was amongst the earliest states to repeal the ban. Other states have also started to lift the ban.

Nowadays, blackcurrants are grown by farmers in the Northeast and Pacific Northwest of the United States and used to make foods such as jams, jellies, oils, and teas.

Blackcurrants are a nutrient-dense fruit with plenty of health benefits. Both red and black currants contain 4x more vitamin C than oranges and 2x more antioxidants than blueberries!

A recent study in the Journal of Nutritional Biochemistry listed blackcurrant as one of the powerful fruits that offer neuroprotection in Alzheimer’s disease. In this study, the researchers also found that cells treated with anthocyanin-dense blackcurrant extracts considerably decreased the production of reactive oxygen species, which play a major role in chronic disorders, including neurodegenerative diseases.

Another animal study, in the Journal of Toxicology and Environmental Health, demonstrated the protective effect of blackcurrant juice against oxidative stress formation in the brain, liver, and serum. The potential of blackcurrant extracts to protect against brain neuronal cell damage has also been reported in the Journal of Food and Nutrition Research. Moreover, blackcurrants contain anti-inflammatory substances that decrease neuroinflammation and may lead to enhanced memory, learning, and cognitive capacity.

Aside from the potent antioxidant, anti-inflammatory, and anti-microbial properties demonstrated in modern laboratories, blackcurrants provide several research-proven and promising nutrition and health benefits, including:

- Rich in vitamins, including vitamins A, B-1, B-5, B-6, C and E
- Traditional herbal medicine uses
- Boost the immune system
- Reduce eye fatigue and improve visual function
- Relax contractions in gastrointestinal disorders
- Promote kidney health

Brain fruit #5: Other berries

Aside from the berries mentioned above, research shows that other types of berries also contain flavonoids, a class of plant chemicals that gives berries their brilliant colors and helps keep your brain sharp. A well-written review by Drs. Miller and Shukitt-Hale of the USDA human nutrition center suggested that the consumption of berries, including blackberries, strawberries, blueberries, and other berry fruits, has a positive impact on your brain. Here are some other berry fruits that can boost your brainpower, according to studies.

- Blackberries: Aside from being rich in antioxidants, blackberries are rich in vitamins C and K, dietary fiber, manganese, and folate.
Just one cup (144 g) of blackberries contains about 7.6 grams of fiber and enough vitamin C to satisfy half of your recommended daily dose.
- Raspberries: Alongside flavonoids, raspberries contain plenty of other antioxidants, including vitamins C and E, beta-carotene, lycopene, lutein, zeaxanthin, and selenium.
- Mulberries: Apart from helping to reduce ROS production and lessen the degree of neuronal damage, mulberries are used in traditional medicine as diuretics and antipyretics, for anti-inflammatory and antitussive effects, and to help prevent high blood sugar.
- Cranberries: In addition to boosting neuronal function and restoring the brain's ability to mount a neuroprotective response to stress, studies have shown that cranberries help with UTIs, promote heart health, improve dental health, and slow cancer progression.

Brain fruit #6: Oranges

You probably grew up recognizing oranges as a superfruit with plenty of vitamin C, right? In fact, if you eat a medium orange, you can easily get your required daily dose of vitamin C. But what can vitamin C actually do for your body? You might have heard of its famous roles in collagen formation, scurvy prevention, and enhancing iron absorption, but these are not the only benefits you can get from vitamin C. Surprisingly, vitamin C is also vital for your brain health. It helps support your brain as you grow older.

A review (with a specific focus on in vivo experiments and clinical research) published in the journal Nutrients identifies vitamin C as a key factor in preventing mental degeneration. The findings also suggest a direct impact of vitamin C deficiency on brain function, particularly during the progression of, or regeneration after, traumatic brain injury. Another recent critical review, released in the Journal of Alzheimer's Disease, reported that eating enough foods rich in vitamin C can reduce the risk of age-related cognitive decline and Alzheimer's disease. Vitamin C is also a potent antioxidant that helps combat the free radicals that can harm brain cells.

Apart from vitamin C, oranges are also rich in flavonoids, which help keep your brain cells healthy. A study published in the American Journal of Clinical Nutrition reported that drinking flavanone-rich orange juice could significantly boost brain function in elderly people. In this study, 37 healthy men and women aged 60 to 81 were recruited to consume around 500 mL of orange juice every day for eight weeks. The study measured their memory, verbal fluency, and reaction time, all of which were combined into one overall score of global cognitive function. By the end of the study, the participants showed an astounding 8% improvement in global cognitive function.

Oranges are also an excellent source of folate, or vitamin B9, which you need in adequate amounts for proper brain function. Folate deficiency may lead to neurological disorders, such as depression and cognitive impairment.

Aside from promoting brain health, other potential health benefits of oranges according to research include:
- Reduce blood pressure
- Reduce the risk of heart disease
- Prevent kidney stones
- Reduce blood cholesterol levels

Brain fruit #7: Cherries

Cherries are the smallest members of the stone fruit family. They are one of the popular fruits you often see on cakes. They're not just tasty, but also rich in various nutrients, including carbs, protein, fiber, and minerals like potassium, copper, and manganese.
Studies have also shown that cherries are packed with vitamin C and polyphenols, all of which provide antioxidant and anti-inflammatory effects. A 2016 animal study published in the journal Age reported that tart cherry enhanced the working memory of aged rats. The findings also suggested that adding cherries to your diet may promote healthy aging and delay the onset of diseases associated with brain degeneration.

Another recent study, conducted by the University of Delaware and released in the journal Food & Function, reported a positive impact of cherries on the cognitive health of older adults. The study involved 37 participants aged 65 to 80. Twenty of them were assigned to drink a cup of tart cherry juice every morning and evening for 12 weeks. The remaining participants were asked to drink a placebo containing no cherry but with a similar color, flavor, and sugar content. The results were striking. The group drinking tart cherry juice showed the following improvements:
- Up to 5% higher subjective memory (contentment with memory domain)
- 4% reduction in movement time
- 3% enhancement of visual sustained attention
- 23% fewer errors in the PAL test assessing episodic visual memory and new learning
- 18% fewer total errors in spatial working memory tests

Nevertheless, further studies are needed to test a bigger sample over a longer period.

Aside from improving cognitive function, studies have shown that cherries may provide the following impressive health effects:
- Boost muscle recovery
- Improve exercise performance
- Protect the heart
- Improve arthritis and gout symptoms
- Enhance sleep quality and relieve insomnia

Brain fruit #8: Plums

Plums belong to the same family as peaches and apricots. They come in a myriad of shapes and colors, with more than 2,000 varieties. The United States is the second-leading supplier of plums after China, and California is responsible for most of the US plum harvest. These low-calorie fruits are packed with various vitamins and minerals, such as calcium, folate, and magnesium. Plums contain high amounts of sugar and plenty of vitamin C, as well as carotene, which the body transforms into vitamin A. They are also rich in flavonoids, particularly anthocyanins, which supply a decent amount of natural antioxidants.

Similar to blueberries, antioxidant-rich plums have neuroprotective properties and can benefit your brain. A study published in the journal Nutrition reported the efficacy of plum juice in reducing cognitive decline in aged rats. However, the findings suggest that, unlike 100% plum juice, dried plum powder does not improve working memory, possibly because it contains a smaller amount of phenolics. Another study, released in the British Journal of Nutrition, reported that polyphenol-rich Oriental plums help improve brain function and relieve some symptoms of neurodegenerative conditions when included in a high-cholesterol diet.

Research on the health effects of plums continues to show promising results for their memory-boosting properties. Although more extensive studies are needed to assess the effects of plums on human health, preliminary research suggests the following potential benefits:
- Possess anti-allergic properties
- Promote bone health
- Protect heart health
- Reduce blood sugar levels
- Boost immunity

Interesting fact: Dried plums are called prunes.
But not all plums are prunes. While a prune can in principle be a dried plum of any variety, prunes typically come from the European plum (Prunus domestica).

Brain fruit #9: Avocados

From smoothies to salads, you can find avocados in all kinds of recipes. Avocados are just too tasty to ignore. Plus, they're packed with nutrition, offering almost 20 vitamins and minerals in each serving!

Compared to other fruits, avocados are richer in protein and contain less sugar. But unlike most fruits, which usually consist primarily of carbs, avocados have a relatively high fat content. Luckily, about two-thirds of avocados' total fat is monounsaturated, a healthy type of fat that is good for your heart. You can find this type of fat in olive oil and some nuts as well.

Avocados help improve blood flow and boost memory and concentration. In fact, Dr. Steven Pratt, the author of "SuperfoodsRx: Fourteen Foods Proven to Change Your Life", believed that avocados are almost as good as blueberries at boosting brain power and maintaining brain health. A study released in October 2012 in the journal of the Federation of American Societies for Experimental Biology reported that the monounsaturated fats in avocados help protect brain cells known as astrocytes.

Avocados also contain a high amount of phytochemicals, particularly antioxidants. One study in the book series Advances in Neurobiology reported that antioxidant-rich avocados may play a critical role in the prevention of neurodegenerative diseases. Another study, published in Nutrients in 2016, suggested that eating avocados can help increase neural lutein and enhance cognitive performance. Hence, researchers proposed that avocados may be an effective dietary approach to cognitive health in aging people. Moreover, the high amounts of vitamin K and folate in avocados help inhibit blood clots in the brain and hence reduce the risk of stroke.

Aside from their contribution to mental health, studies have shown that avocados have many other potential health effects, including:
- Increase good cholesterol and reduce bad cholesterol levels
- Promote heart health
- Loaded with dietary fiber for better digestion
- Rich in carotenoids, which protect visual health and minimize the risk of macular degeneration and cataracts
- May help prevent cancers, such as prostate cancer
- May help relieve arthritis symptoms
- Promote weight loss

Interesting fact: Did you know? Avocados are also called alligator pears or butter fruit and are botanically categorized as single-seeded berries. They have been eaten since as far back as 10,000 BC!

Brain fruit #10: Apples

"An apple a day keeps the doctor away." This saying gives a clear indication of the health benefits of apples. But here's a new reason to bite into an apple every day: it keeps your mental juices flowing!

Apples are a major source of quercetin, an antioxidant that protects your brain cells. Researchers from Cornell University reported that quercetin helps safeguard brain cells from the free-radical damage that can result in cognitive degeneration. A 2017 study from King Saud University in Saudi Arabia further reiterated the protective effect of quercetin against neurodegenerative conditions associated with stress. To get all the quercetin an apple has to offer, be sure to eat apples with their skins.
A series of recent studies published in several journals, including the Journal of Alzheimer's Disease, the Journal of Neurochemistry, and the Journal of Nutrition, Health and Aging, provided novel findings on the potential of apple juice concentrate to prevent the oxidative damage and decline in cognitive function associated with the normal aging process.

Other potential health benefits of apples according to studies include:
- Have a cardioprotective effect
- Lower the risk of diabetes
- Help protect against stomach injury from a class of painkillers (NSAIDs)
- Promote bone health
- Help regulate the immune system and fight asthma
- Reduce the risk of cancer
- May possess prebiotic properties that promote the growth of gut bacteria
- May help induce weight loss

Other nutritious fruits for a healthy brain

Here are some other fruits for a healthy brain you may want to try:
- Red grapes
- Bananas (including green bananas)

What foods are bad for your brain and memory?

Studies have shown that a poor diet is associated with dementia and Alzheimer's disease. High-calorie foods and sugary beverages are especially bad for brain health. Other foods that are bad for your brain and memory are refined carbs, highly processed foods, alcohol, and foods loaded with trans fats.

Are bananas good for memory?

Bananas are an excellent food choice for increasing brain power. They contain tryptophan, an essential amino acid that the body uses to produce serotonin. Serotonin helps preserve and boost memory, regulate mood, and enhance sex drive.

What are the top 5 brain foods?

Studies suggest that the five best brain foods for improving memory and concentration are leafy vegetables, berries, coffee, dark chocolate, and foods rich in omega-3 fatty acids, such as fatty fish like salmon and nuts like walnuts.
A movement spearheaded by Speaker of the House Henry Clay that called for internal improvements, higher protectionist tariffs, and a strong national banking system. The system's supporters, including Daniel Webster, succeeded in chartering the Bank of the United States in 1816 and creating both the Tariff of 1816 and the steeper Tariff of Abominations in 1828. They also funded the Cumberland Road from Maryland to Missouri and supported the construction of various other roads and canals.

A small-scale 1838–1839 turf war, fought between American and Canadian woodsmen in northern Maine, that almost erupted into a larger war between Britain and the United States. The Aroostook War convinced both countries that settlement of northern Maine territorial disputes had to be negotiated promptly. The dispute was resolved by the Webster-Ashburton Treaty of 1842, negotiated by Secretary of State Daniel Webster and Lord Ashburton of Britain, which established a permanent border between Maine and Canada.

A private bank, chartered in 1816 by proponents of Henry Clay's American System, that provided the fledgling United States with solid credit and financial stability in the 1820s and 1830s under the leadership of Nicholas Biddle. Many in the West and South, however, despised the Bank because they saw it as a symbol for aristocracy and greed. In 1832, Andrew Jackson initiated the Bank War by vetoing a bill to renew the Bank's charter. He eventually destroyed the Bank in the 1830s by withholding all federal gold and silver deposits and putting them in smaller banks instead. Without any reserves, the Bank withered until its charter expired in 1836. Deprived of stable credit, the blossoming financial sector of the economy crashed in the Panic of 1837.

A conflict between Andrew Jackson and Henry Clay over 1832 legislation that was intended to renew the charter of the Bank of the United States. Clay pushed the bill through Congress, hoping it would slim Jackson's reelection chances: signing the charter would cost Jackson support among southern and western voters who opposed the bank, whereas vetoing the charter would alienate wealthier eastern voters. Jackson vetoed the bill, betting correctly that his supporters in the South and West outnumbered the rich in the East. Upon reelection, Jackson withheld all federal deposits from the Bank, rendering it essentially useless until its charter expired in 1836.

A brief 1832 war in Illinois in which the U.S. Army trounced Chief Black Hawk and about 1,000 of his Sauk and Fox followers, who refused to be resettled according to the Indian Removal Act.

An area of western New York State that earned its nickname as a result of its especially high concentration of hellfire-and-damnation revivalist preaching in the 1830s. The Burned-Over District was the birthplace of many new faiths, sects, and denominations, including the Mormon church and the Oneida community. Religious zeal also made the area a hotbed for reform movements during the 1840s.

An 1821 Supreme Court ruling that set an important precedent reaffirming the Court's authority to review all decisions made by state courts. When the supreme court of Virginia found the Cohen brothers guilty of illegally selling lottery tickets, the brothers appealed their case to the U.S. Supreme Court. Chief Justice John Marshall heard the case and ruled against the family. Though he concurred with the state court's decision, he nonetheless cemented the Supreme Court's authority over the state courts.
This case was one of many during the early 1800s in which Marshall expanded the Court's and the federal government's power.

A tariff, proposed by Henry Clay, that ended the Nullification Crisis dispute between Andrew Jackson and South Carolina. The compromise tariff repealed the Tariff of Abominations and reduced duties on foreign goods gradually over a decade to the levels set by the Tariff of 1816.

A scandal that arose during the election of 1824 that tainted John Quincy Adams's entire term in office. When neither Adams nor his opponent, Andrew Jackson, received enough electoral votes to become president, the election was thrown to the House of Representatives. Speaker of the House Henry Clay, who hated Jackson, threw his support behind Adams, which effectively won him the presidency. When Adams later announced Clay as his new secretary of state, Jackson and the American people cried foul. Adams was accused of having made a "corrupt bargain," and the political fallout rendered him politically paralyzed during his term.

A 1793 invention by Eli Whitney that enabled automatic separation of cotton seeds from raw cotton fiber. The cotton gin made cotton farming much easier and more profitable for southern planters, prompting them not only to increase their cotton output but also to increase their demand for slave labor. Along with Whitney's other innovation, the use of interchangeable parts, the cotton gin stimulated the growth of textile manufacturing in the North and the birth of the wage labor system.

A federally funded road, also known as the National Road, that was completed in 1837 and then expanded several times throughout the antebellum period. When finally completed, the Cumberland Road stretched all the way from Maryland to Illinois. It was one of the most significant internal improvements made under Henry Clay's American System.

An 1819 Supreme Court ruling that upheld the right of private institutions to hold private contracts. When the New Hampshire state legislature revised Dartmouth College's original charter from King George III, the college appealed to the U.S. Supreme Court. Chief Justice John Marshall ruled that even though the college's contract predated the Revolutionary War, it was still a legal contract with which the state of New Hampshire could not interfere. This precedent asserted federal authority and protected contracts from state governments.

A declaration read at the 1848 Seneca Falls Convention for women's rights. The Declaration of Sentiments mimicked the Declaration of Independence by stating that "all men and women were created equal." Written primarily by suffragette Elizabeth Cady Stanton, it is regarded as one of the most important achievements of the early women's rights movement.

A nickname given to James Monroe's early years as president (1816–1819), when the Democratic-Republicans were the only political party and nationalist Americans concentrated on improving America. The Era of Good Feelings dissipated after the crisis over Missouri in 1819 and the Panic of 1819.

A canal between the New York cities of Albany and Buffalo, completed in 1825. The canal, considered a marvel of the modern world at the time, allowed western farmers to ship surplus crops to sell in the North and allowed northern manufacturers to ship finished goods to sell in the West.

An 1810 Supreme Court decision in which the Court ruled that the Georgia state legislature could not cancel a contract that a previous legislature had already granted.
The decision by Chief Justice John Marshall protected the permanence of legal contracts and established the Supreme Court's power to overrule state laws.

An 1833 bill that authorized the federal government to use military force to collect tariff duties. The bill demonstrated Andrew Jackson's resolve to end the 1832–1833 Nullification Crisis in South Carolina.

An order that the House of Representatives, beleaguered by the growing abolitionist movement in the North, passed in 1836 to ban further discussion of slavery.

An 1824 Supreme Court ruling that declared that the state of New York could not grant a monopoly to a company engaged in interstate commerce. Chief Justice John Marshall thus exerted federal power by upholding that only the federal government had the right to regulate interstate commerce according to the Constitution.

An 1840 bill that created an independent U.S. Treasury. The bill established the independent treasury to hold public funds in reserve and to prevent excessive lending by state banks, thus guarding against inflation. The Independent Treasury Bill was a response to the Panic of 1837, which many blamed on the risky and excessive lending practices of state banks.

An 1830 act, supported by Andrew Jackson, that authorized the U.S. Army to evict by force all Native Americans east of the Mississippi River and resettle them in "permanent" reservations in present-day Oklahoma and Nebraska. Thousands of Native Americans died on the "Trail of Tears" to their new and unwanted home. The Army was forced to fight the Black Hawk War and Second Seminole War after some tribes refused to leave.

A system, devised by Eli Whitney in 1797, that allowed machines to mass-produce identical goods. This innovation prompted a boom in new factories in the North during the antebellum period.

A term referring to infrastructure projects, mostly involving transportation, that were key features of Henry Clay's American System. Scores of canals and roads were dug to link the East with the West during the period from 1816 to 1852. The most famous of these were the Erie Canal and the Cumberland Road.

A party, known formally as the American Party, of nativist Americans who wanted to stop the tide of foreign immigrants from Ireland and Germany entering the United States in the 1840s and 1850s. The Know-Nothings nominated former president Millard Fillmore in the 1856 presidential election. Members of the American Party were so secretive that they often claimed to "know nothing" whenever questioned, hence the nickname.

A northern abolitionist party that formed in 1840 when the abolitionist movement split into a social wing and a political wing. The party nominated James G. Birney in the election of 1844 against Whig Henry Clay and Democrat James K. Polk. Surprisingly, the Liberty Party siphoned just enough votes away from Clay to throw the election to the Democrats.

An 1851 law that prohibited the sale, manufacture, and consumption of alcohol in the state of Maine. The law, a huge victory for the temperance movement, encouraged other states in the North to pass similar prohibitory laws.

A belief, common in the United States in the mid-1800s, that Americans had been "manifestly destined" by God to settle and spread democracy across the continent and perhaps even the entire western hemisphere. To achieve this destiny, thousands left their homes during the 1840s and 1850s and embarked on journeys on the Oregon Trail, on the Mormon Trail to Utah, or to mine for gold in California.
Manifest destiny also led many southerners to seek—unsuccessfully—new slave territories in places as far away as Nicaragua and Cuba. Manifest destiny led presidents John Tyler and James K. Polk to annex Texas, acquire Oregon from Britain, and wage the Mexican War to seize California.

An 1819 Supreme Court ruling that upheld the constitutionality of the Bank of the United States. Chief Justice John Marshall, like Alexander Hamilton, was a loose constructionist who believed that the federal government was authorized to create the Bank even though the Constitution said nothing about it. President Andrew Jackson ignored Marshall's ruling and vetoed a bill to renew the Bank's charter in 1832 on the grounds that it was unconstitutional.

An invention by Cyrus McCormick that had a profound effect on agriculture in the West during the 1840s and 1850s. Most western farmers had been planting corn, but the mower-reaper allowed them to plant wheat, which was far more profitable than corn. As farmers planted more and more wheat, they began to ship their surpluses to manufacturing cities in the North and Northeast.

An 1820 compromise, devised by Henry Clay, to admit Maine as a free state and Missouri as a slave state. The compromise maintained the sectional balance in the Senate—twelve free states and twelve slave states—and forbade slavery north of the 36° 30' parallel. It ended a potentially catastrophic dispute and tabled all further slavery discussions for the next couple of decades.

An 1823 policy statement, drafted by James Monroe and his secretary of state, John Quincy Adams, warning Old World colonial powers to stay out of affairs in the Western Hemisphere. The doctrine stated that the New World was closed to further colonization and that European attempts to interfere would be considered hostile acts against the United States. In return, the United States would not interfere in Europe's internal affairs or with existing European colonies in the New World. Britain, anxious to preserve a hold on its remaining colonies in North America, helped enforce the doctrine. The Monroe Doctrine has had great influence on American foreign policy over the years.

A crisis over the Tariff of 1828 (Tariff of Abominations), which was enormously unpopular in the South. Andrew Jackson's supporters pushed the tariff through Congress during John Quincy Adams's term, but when Jackson took office, his vice president, John C. Calhoun, opposed the tariff vehemently. Calhoun secretly wrote and published an essay called "South Carolina Exposition and Protest" to encourage state legislatures in the South to nullify the tariff. Though Jackson personally disliked the tariff, he refused to allow any state to disobey a federal statute. When South Carolina did nullify the tax in 1832, Jackson threatened to use the military to enforce the law. Fortunately, Henry Clay proposed the Compromise Tariff of 1833 to reduce the tariff gradually over a decade.

A financial panic, caused in part by overspeculation in western lands, that slid the U.S. economy into a decade-long depression. Farmers in the West and South were hit hardest, but the depression's effects were felt everywhere. The panic helped bring an end to the Era of Good Feelings.

A financial panic caused by the default of many of the smaller "pet banks" that Andrew Jackson had used to deposit federal funds when he withheld them from the Bank of the United States in the 1830s.
The crisis was compounded by overspeculation, the failure of Jackson's Specie Circular (which required that all land be purchased with hard currency), and the lack of available credit due to the banking crisis.

A war fought by the U.S. Army against members of the Seminole tribe in Florida who refused to be resettled west of the Mississippi River in the late 1830s.

A convention of early women's rights activists in Seneca Falls, New York, in 1848 to launch the American feminist movement. The convention's Declaration of Sentiments, penned by Elizabeth Cady Stanton, was modeled on the Declaration of Independence in its declaration that all men and women were created equal.

An essay, written anonymously by Vice President John C. Calhoun, that called on the southern states to declare the 1828 Tariff of Abominations null and void. The essay encouraged South Carolina legislators to nullify the tariff, pitting the state against President Andrew Jackson in the most serious internal conflict the nation had yet faced. This Nullification Crisis is regarded as one of the stepping stones that eventually led to Civil War.

Resolutions introduced in 1847 by Congressman Abraham Lincoln, who, unconvinced that the Mexican army had attacked U.S. forces unprovoked, demanded to know the exact spot where Mexicans had attacked. Lincoln's persistence—and the confusing answers that Democrats gave—suggested that General Zachary Taylor, or perhaps even President James K. Polk himself, had provoked the attack and initiated the Mexican War.

An 1819 act passed by the northern-dominated House of Representatives in an attempt to curb westward expansion of slavery. The act declared that Missouri could be admitted to the Union as a slave state, but only on the condition that no more slaves enter the territory and that its existing slaves gradually be freed. Outraged southern legislators, who wanted to push slavery westward, blocked the act in the Senate, throwing Congress into a logjam. The crisis eventually was resolved by the Missouri Compromise of 1820.

A tariff, passed under the leadership of Henry Clay, that was designed to protect American manufacturing (prior tariffs had had the sole purpose of raising revenue). Whereas northerners loved the tariff, southerners disliked it, for they had little manufacturing to protect but still had to pay higher prices for foreign goods. The tariff was a key component of the American System.

See Tariff of Abominations.

A slight reduction on the "Tariff of Abominations" that was passed as a gesture of good will to encourage South Carolina to end the Nullification Crisis. Most South Carolinians saw the concessions as minimal at best and declared both the Tariff of Abominations and the Tariff of 1832 null and void out of principle.

See Compromise Tariff of 1833.

A tariff passed by John Tyler that brought duties on foreign manufactured goods down to the level of the Compromise Tariff of 1833.

See Walker Tariff.

A nickname for the Tariff of 1828 that reflected southerners' enormous objections to the tariff. Vice President John C. Calhoun's opposition to the tariff and his publication of the "South Carolina Exposition and Protest" pushed the nation into the Nullification Crisis. When South Carolina's legislature followed Calhoun's advice and declared the tariff null and void in their state, President Andrew Jackson threatened to use the military to enforce the tariff. Fortunately, Henry Clay proposed the Compromise Tariff of 1833, which settled the dispute.
The route by which thousands of Native Americans, primarily Cherokee, were forcibly removed in the 1830s from their southeastern homelands and relocated to new reservations west of the Mississippi. This program of relocation was initiated under Andrew Jackson's Indian Removal Act. The journey has been labeled the "Trail of Tears" because countless Native Americans, forced to walk hundreds of miles under horrible conditions, died along the way.

An American philosophical and intellectual movement of the 1830s–1850s whose followers believed that truth "transcended" the reality perceivable by the five senses. Transcendentalism originated in New England and was especially strong in eastern Massachusetts, where Ralph Waldo Emerson and Henry David Thoreau lived. The movement emphasized individuality and strength of character, and most of its members were reformers, abolitionists, and Whigs.

A treaty between the United States and Britain that established a fixed border with Canada from Minnesota to the Rocky Mountains. The treaty also declared that both countries would occupy the Oregon Territory jointly until 1828. Though not highly regarded at the time, the treaty is considered one of John Quincy Adams's most important achievements as secretary of state to James Monroe.

The extension of voting rights to nearly every white American male during the antebellum period. In the early United States, men had had to meet certain property-ownership and literacy qualifications in order to vote, but during the 1830s and 1840s, more and more states eliminated these restrictions. As more men in the poorer classes were able to vote, the Democrats received a huge boost in popularity.

An 1846 tariff that lowered tariff rates, which had climbed higher and higher after their brief reduction in 1842.

An 1842 treaty between the United States and Britain that established a permanent border between Maine and Canada after the Aroostook War.

A party formed in 1834 under the leadership of Henry Clay and Daniel Webster. The Whigs, named after an anti-British party during the Revolutionary War era, promoted a platform of social reform (education, prison, temperance, and so on), abolition of slavery, and limited westward expansion. Several Whig candidates ran and lost against Martin Van Buren in the election of 1836, but the party rebounded four years later when they put William Henry Harrison in the White House.

Fly-by-night banking operations that plagued the West and South during the 1800s. The wildcat banks were highly unstable because they were impermanent, printed their own unregulated paper money, and had almost no solid credit. Whenever there was a financial panic, as in 1819 and 1837, many of these banks went bankrupt.
The Arctic Ocean seabed is expected to contain substantial natural resource reserves, which states seek to lay claim to. The once influential idea that this could lead to a scramble for the Arctic and inter-state conflict is now generally considered unlikely. To date, the Arctic Ocean coastal states have followed rule-based procedures to settle their overlapping claims in the Arctic Ocean. The United Nations Convention on the Law of the Sea (UNCLOS) provides a legal framework for the delineation of the outer limits of the continental shelf. Russia, Canada, Denmark and Norway have submitted, or are in the process of submitting, their claims to the relevant United Nations body, the Commission on the Limits of the Continental Shelf. Despite the growing tension between Russia and other Arctic Ocean coastal states, it is likely that the continental shelf claims will be settled in an orderly fashion. This is mostly due to the fact that the UNCLOS treaty works for the benefit of the coastal states. However, adverse political dynamics may challenge the status of, and adherence to, the relevant legal processes in the Arctic. Most of these are related to uncertainty over Russia. Consequently, the possibility of unilateral and illegal action cannot be completely ruled out.

When the Russian expedition planted its national flag on the North Pole seabed in August 2007, many became convinced that the scramble for abundant Arctic hydrocarbon resources had begun. It was expected, especially in the media, that states would engage in power politics to gain access to these resource reserves in a manner that could lead to a new Cold War in the Arctic. While this storyline was initially accepted in expert circles as well, it was soon dismissed as an exaggeration. There were two main reasons for this. First, the Arctic Ocean coastal states reaffirmed their commitment to the United Nations Convention on the Law of the Sea (UNCLOS) – the so-called 'constitution of the seas' – in the Arctic. In the 2008 Ilulissat Declaration, the states agreed not only that 'the law of the sea provides for important rights and obligations concerning the delineation of the outer limits of the continental shelf', but also more broadly that 'we remain committed to this legal framework and to the orderly settlement of any possible overlapping claims'.1 Second, all coastal states other than the United States2 started to prepare their submissions to extend their respective continental shelves to the relevant United Nations body, the Commission on the Limits of the Continental Shelf (CLCS), on the basis of UNCLOS.

To date, various submissions to the CLCS have been made. Denmark made a vast continental shelf claim in December 2014 that included the seabed at the North Pole and most of the Lomonosov Ridge – an underwater ridge that runs across the Arctic Ocean. As early as 2001, Russia's submission had claimed that most of the same ridge belonged to its continental shelf. Given the CLCS's dissatisfaction with the Russian claim, Russia revised and finally resubmitted its claim to the Commission on 3 August 2015. As anticipated, the updated claim still overlaps with the Danish claim. A third formal claim to the Arctic seabed is expected to emerge from Canada.3

This paper investigates this ongoing process of extending national continental shelves in the Arctic Ocean.
In order to contextualize the analysis, the paper starts by explicating the historical evolution of UNCLOS as the internationally recognized legal framework in which the extension of continental shelves is being pursued. The paper continues by asking whether UNCLOS is working as intended in the Arctic. To this end, the paper explicates a number of reasons why the delimitation of continental shelves is likely to proceed in an orderly manner despite overlapping claims and, consequently, why these claims are not expected to lead to significant international tensions between the Arctic coastal states. As respect for international law can never be guaranteed, the paper also highlights existing and potential adverse political dynamics that may challenge the status of, and adherence to, UNCLOS in the Arctic. Most of these are related to uncertainty over Russia.

The evolution of UNCLOS and continental shelf claims

Prior to World War II, coastal states enjoyed sovereignty over only a narrow territorial sea, three to four nautical miles in extent. This was dramatically changed after the war by the 1945 Truman Proclamation whereby 'the Government of the United States regards the natural resources of the subsoil and seabed of the continental shelf beneath the high seas but contiguous to the coasts of the United States as appertaining to the United States, subject to its jurisdiction and control'.4 This heralded the era of creeping coastal state jurisdiction, especially in regard to the seabed, the outer limit of which was defined in Article 1 of the 1958 Continental Shelf Convention as follows: "For the purpose of these articles, the term 'continental shelf' is used as referring (a) to the seabed and subsoil of the submarine areas adjacent to the coast but outside the area of the territorial sea, to a depth of 200 metres or, beyond that limit, to where the depth of the superjacent waters admits of the exploitation of the natural resources of the said areas; (b) to the seabed and subsoil of similar submarine areas adjacent to the coasts of islands."

The problem with this definition was that it effectively permitted coastal states to expand their seabed presence with the development of technology, to the extent that even ocean floors could have been divided between the coastal states. A counter-force to this trend came from Maltese ambassador Arvid Pardo, who in 1967 proposed in the UN General Assembly that the ocean floor should be designated as part of the common heritage of humankind and governed by an international governance mechanism that would share the economic benefits of the ocean floor's riches equitably between developing and developed states. Pardo's proposal also acted as a major impetus for convening the United Nations Conference on the Law of the Sea III, which sought to produce a comprehensive 'Constitution' of the oceans and became the 1982 UNCLOS.

UNCLOS was negotiated over an extended period – from 1974 to 1982 – as a package deal in that it permitted no reservations to the Convention and contained an elaborate dispute settlement mechanism. It succeeded in achieving a compromise between various groupings of states with differing interests related to the seabed.
For instance, states having a broad continental margin5 succeeded in having rules accepted that allowed the resources of the whole continental margin to be subject to the sovereign rights of coastal states; geologically disadvantaged states (those whose continental margin was minimal) managed to push for a rule that entitles all states to a continental shelf of a minimum of 200 nautical miles. UNCLOS was also successful in defining the outer limit of the continental shelf more clearly than its 1958 predecessor and in designating the ocean floor as part of the common heritage of mankind and having it governed by the International Seabed Authority (ISBA).

Even though states with broad continental margins were able to extend the outer limit of the continental shelf to cover the whole geophysical continental margin (and in some exceptional cases areas beyond it) during the negotiations, they had to make compromises as well. For example, they had to submit to rules requiring them to transfer some of the revenues from offshore hydrocarbon exploitation on their extended continental shelf to developing states via the ISBA and, more importantly, had to scientifically prove the extent of their continental shelf to the 21-member CLCS. This submission must be made by a coastal state within 10 years of its becoming a party to UNCLOS if it considers that its continental margin exceeds 200 nautical miles. The CLCS can only make recommendations, but these recommendations are legally influential because the coastal states' outer limits become final and binding only when they have been established on the basis of the recommendations. The deadline for such submissions is fairly tight given that states need to provide the Commission with a vast amount of scientific and technical data. This is because it was considered necessary to define the outer limits of continental shelves as quickly as possible: it is only after establishing these limits that the boundary between states' continental shelves and the area under the jurisdiction of the ISBA can be defined.

Is UNCLOS working as intended in the Arctic?

Up to now, Arctic coastal states have followed the rule-based UNCLOS procedure and submitted their claims to the CLCS. Russia was the first country to make such a submission to the CLCS in 2001 and also the first to which the Commission issued recommendations in 2002. Russia was requested by the CLCS to gather additional scientific data and finally, in early August 2015, Russia made a revised submission to the CLCS. Norway made a submission in 2006 and has now received recommendations from the CLCS according to which it is gradually drawing the outermost limits of its continental shelf. Denmark made its submission in December 2014, Canada is currently undertaking surveys to collect further data, and the United States has published the results of its continental shelf programme.6 It seems that after the delineation and delimitation of continental shelves in the Arctic Ocean, there will not be much common area left for the ISBA to administer.

It is likely that these continental shelf claims will be settled in an orderly fashion. First, it is in the common interests of all Arctic coastal states to have as large continental shelves as possible, something that an orderly settlement can produce cost-effectively. Secondly, and as mentioned, the coastal states have committed themselves via the 2008 Ilulissat Declaration to 'orderly settlement of any possible overlapping claims'.
Perhaps even more importantly today, this commitment has been reaffirmed since the annexation of Crimea. The Danish 2014 submission, in particular, not only acknowledges that there will be overlapping claims, but also takes steps to mitigate any potential tension arising from this through a preliminary consultation with other Arctic coastal states. With respect to Russia, the Danish submission includes an agreement (via an exchange of notes), which was concluded after the Crimean annexation by Russia (27 March 2014), wherein both states agree that either can proceed with its submission to the CLCS, and that delimitation will then be implemented by the two states themselves.

Importantly, the most significant resource reserves are within the Exclusive Economic Zones or territorial waters of the Arctic coastal states, and there does not seem to be an abundance of valuable seabed resources in the overlapping areas to compete over. Even if there were, it would likely take decades before technology would allow the commercial use of those operationally and financially challenging areas. Furthermore, as difficult and costly hydrocarbon extraction in Arctic waters has relied on international public-private co-operation which, in turn, benefits from a favourable and low-risk operating and investment environment, interstate disputes over continental shelf extensions are unlikely to be conducive to commercial activities in the Arctic offshore. Another issue is the extensive backlog of submissions awaiting review in the CLCS, meaning that it may well take until 2020 or beyond before the CLCS is able to process them all, as there are over 100 from all corners of the globe.

Even if, say, Denmark, Canada and Russia experienced problems in settling the North Pole area boundaries – or Denmark and Russia the boundary in the Lomonosov Ridge – there is no indication that this would necessarily lead to tensions. From a historical perspective, it is important to remember some of the lessons learned from the past that highlight the possibility of negotiated and peaceful agreements in the Arctic. For example, Barents Sea boundary negotiations between the Soviet Union and subsequently Russia with its NATO neighbour Norway took over 40 years to resolve, but resolved they were. Furthermore, even during the Cold War, Norway and the Soviet Union were able to establish a fisheries agreement in the disputed area between the states.

Growing uncertainty and its implications for UNCLOS in the Arctic

While the settlement of continental shelf claims is likely to take place in an orderly fashion, there are also adverse geopolitical dynamics – mostly related to relations between Russia and other Arctic Ocean coastal states – that might jeopardize this.

First, geostrategic and economic considerations play a major role in the way different countries regard UNCLOS in the Arctic. For Russia, export revenues from the energy sector are vital for socio-economic development, its foreign policy toolbox, and its quest to regain great-power status. As Russia's mature oil and gas fields are steadily being depleted, it is forced to develop its frontier energy regions, most notably the Arctic. Consequently, Russia has considered it prudent to endorse UNCLOS in the Arctic not only to gain access to new resources, but also to generate a stable and predictable investment and operating environment as a necessary enabler of regional socio-economic development. However, Russian Arctic ambitions are becoming increasingly difficult to realize.
This is not only due to challenging operating conditions, but also to adverse market conditions and Western sanctions against Russia that together hinder the pace and scope of economic development. If the Arctic economic potential does not materialize, this could have serious implications for the status of UNCLOS in the Arctic – especially if the deteriorated political relations between Russia and the West continue. If the biggest stabilizing factor, namely common economic interests, were eliminated from the equation, the region could still be utilized as a tool in domestic and international politics.

Russia has invested considerable international and domestic political capital in developing the Arctic, and utilized the region in nation-building and identity politics. The development of Arctic mega-projects has even been compared to the Soviet space programme of the 1960s and 70s, both as evidence of the country's greatness and as a tool for general technological development. Consequently, Russia has a lot at stake in the region and it is conceivable that the Arctic could increasingly witness other 'uses' besides the economic one. For example, it could be increasingly employed in the construction of enemy images that incite nationalism at home. Furthermore, as Russian military capabilities remain uncontested in the Arctic, it could even be constructed as a new hostile theatre for domestically targeted 'foreign policy victories' that secure regime stability in a situation where the Russian domestic political and economic system is facing severe problems. If these adverse dynamics became more widely entrenched, this would, in practice, mean a reversal of the co-operative political imaginary of, and spirit in, the Arctic. In this context, the role of international law could be undermined and it is not totally out of the question that the political dynamics affecting the overlapping continental shelf issue could consequently take a turn for the worse.

Second, Russia's consistent commitment to international law can no longer be taken for granted under the current regime. In the Arctic, Russia has failed to respect UNCLOS in the case of the 2013-14 diplomatic dispute between the Netherlands and Russia over the capture of the Greenpeace ship Arctic Sunrise after the organization's protest at the Prirazlomnoye oil rig in the Pechora Sea. In particular, Russia failed to follow the UNCLOS provisions and its own explicit commitment to the treaty by declining to accept UNCLOS arbitration mechanisms. This raised serious doubts about Russia's consistent commitment to UNCLOS when its vital national interests, such as resource exploitation, are threatened.

More importantly, the annexation of Crimea and the ongoing conflict in Ukraine highlight even more clearly that Russia is prepared to dismiss the foundational international norms and commitments it has previously endorsed. These include key principles – sovereign equality, non-use of force, inviolability of frontiers, and the territorial integrity of states – agreed upon in the 1975 Helsinki Final Act, as well as other international obligations such as the security assurances to Ukraine agreed upon in the 1994 Budapest Memorandum, conventional and nuclear arms limitation frameworks, and best practices in conducting military exercises. As a result, Western perceptions of Russia and its intentions have deteriorated.
There is widespread distrust of Russia in the West today, particularly given the perceived discrepancy between what the Russian leadership says and what it does. Unlike the Soviet Union, contemporary Russia under President Vladimir Putin is seen as a very unpredictable power in Europe. Given these developments, the emerging question is whether or not one should expect Russia to remain consistently committed to its legal and diplomatic obligations in the Arctic, including the established maritime order and its foundational legal corpus, UNCLOS, let alone its diplomatic agreement with Denmark on a negotiated settlement over continental shelf claims. At the very least, Russia's recent track record does raise serious concerns in this respect that need to be considered also in the context of continental shelf claims.

Third, Russia has chosen the path of a revisionist power in Europe. Most recently, this has become evident with the annexation of Crimea and the ongoing conflict in Eastern Ukraine. The current regime in Russia regards the collapse of the Soviet Union as a geopolitical catastrophe that not only diminished the status of Russia, but also shattered the perceived legitimate territorial integrity of the state. The annexation of Crimea can be interpreted as an act to reclaim lost territory, albeit at the cost of significant financial and reputational losses, as well as operational difficulty. The question then arises of the implications of this for territorial stability in the Arctic.

In this respect, Russia has been known to send mixed signals. In the late 2000s, Russia made what appeared to be a unilateral claim to the seabed of the North Pole while at the same time endorsing UNCLOS and resolving a border dispute with Norway in the Barents Sea. More recently, during the conflict in Ukraine, Russia's public endorsements of international law and co-operation have co-existed with bolder rhetoric about the territorial value of, and Russia's territorial designs on, the Arctic. Dmitry Rogozin, Deputy Prime Minister and the head of Russia's Arctic Commission, has been at the epicentre of this issue. In April 2015, he emphasized the significant, even semi-religious value of the Arctic in a much-circulated tweet: 'The Arctic – Russia's Mecca'.7 He later went on to argue that the annexation of Crimea was a historic restorative act with a potential parallel in the north: 'Last year, we had the historic reunification of Sevastopol and the Crimea. This year, we present a new view and new powerful stress on the development of the Arctic. Basically, it is all about the same [thing]'.8 Even if these statements were mere nationalistic rhetoric or simply meant for domestic consumption, they are nevertheless public speech acts that reinforce the uncertainty about Russia's territorial intentions. Today, in the light of the annexation of Crimea, it is not altogether unreasonable to ask whether Russia could simply decide to further 'restore' its territorial integrity by claiming much of the Lomonosov Ridge as a natural extension of its land mass.

Fourth, the ambiguous comments are also worrisome when viewed in the context of Russia's ongoing military build-up in the Arctic, which it is pursuing in tandem with the continental shelf process. For contemporary security analysts, a threat is typically understood as a combination of capability and harmful intent, or the perception of such intent. That said, the latter part of the equation has intensified in the eyes of the West with regard to Russia.
At the same time, Russia has also signalled its intention to improve its Arctic capabilities by re-opening various military bases and establishing a new strategic military command in the region. The securing of the Arctic was also recently highlighted in Russia's new 2014 military doctrine.

Traditionally, and certainly before the crisis in Ukraine, the increase in Russian military presence and capabilities in the Arctic was widely interpreted as legitimate state behaviour to improve situational awareness and the ability to respond to various safety and security scenarios in an opening region. During the crisis, growing uncertainty about Russia's intentions has opened the door for alternative interpretations. Russia's growing capability and activity in the Arctic have again been interpreted as an indicator of aggressive and threatening behaviour, and as an illustration of Russia's intention to militarize and dominate the region. For example, the establishment of military bases along the Northern Sea Route can be read as de facto control of the maritime area with potential implications for freedom of navigation and territorial stability, as the military presence could act as a coercive back-up or backdrop to secure Russia's interest in extending its continental shelf northwards. Growing military capabilities, especially in a time of uncertainty about Russia's intentions, may reintroduce the classic security dilemma to the Arctic. This would be detrimental to the spirit of co-operation in the region, with potential implications for the reliability of legal procedures – in this case the orderly delimitation of continental shelves – in the Arctic.

The post-Cold War Arctic has been one of the most peaceful areas on the planet, characterized by bilateral negotiations, multilateral co-operation and governance, and public-private joint ventures. As a result of powerful incentives for stability in combination with relatively well-functioning Arctic governance, the potential for a major inter-state conflict in the Arctic has generally been regarded as quite low. In order to ensure peace and co-operation in the region, legitimate and confidence-building governance mechanisms remain vital. UNCLOS has been crucial in this respect. Although the treaty has been challenged in other parts of the world – for instance in the South China Sea with similar dynamics related to hydrocarbon resources, undefined boundaries and major power interests – UNCLOS has been working as intended in the Arctic. This is mostly due to the fact that the treaty works for the benefit of the coastal states, generating much-needed predictability in the region. The continental shelf processes are expected to continue in an orderly manner also in the future.

Nonetheless, the status of international law critically depends on the nations' political will to adhere to it. According to official statements by all the Arctic states, this political will exists in the Arctic despite the generally worsened relations between the West and Russia. However, predicting future Arctic trajectories has become more difficult due to heightened uncertainty. As indicated, threat is a combination of capability and intention. If one follows this formula, the overall risk levels can be considered to have risen in the Arctic as well. While the continental shelf process is ongoing, Russia is building up its military presence in the region and remains uncontested in this respect.
More importantly, it is the intention part of the equation that has become more difficult to discern, given the discrepancy between what Russia says and what it does in other parts of the world. That being the case, the possibility of unilateral and illegal action cannot be completely ruled out. In the absence of clear global enforcement, the status of, and respect for, international law – and UNCLOS in particular here – must be understood in specific political contexts.

1 See the Ilulissat Declaration at http://www.oceanlaw.org/downloads/arctic/Ilulissat_Declaration.pdf.
2 The US is not a party to UNCLOS, but develops its continental shelf claim on the basis of the customary law of the sea.
3 For a complete list of submissions, see http://www.un.org/depts/los/clcs_new/commission_submissions.htm.
5 In most cases the legal continental shelf can be equated with the continental margin, as defined in Article 76 (3) of UNCLOS: ‘The continental margin comprises the submerged prolongation of the land mass of the coastal State, and consists of the seabed and subsoil of the shelf, the slope and the rise’.
7 The Washington Post (2015) ‘The Arctic is Russia’s Mecca, says top Moscow official’, http://www.washingtonpost.com/blogs/worldviews/wp/2015/04/20/the-arctic-is-russias-mecca-says-top-moscow-official/. Accessed June 5, 2015.
8 Barents Observer (2015) ‘Expansionist Rogozin looks to Arctic’, http://barentsobserver.com/en/arctic/2015/04/expansionist-rogozin-looks-arctic-21-04. Accessed June 5, 2015.
TO GLIMPSE THE FUTURE of the Great Plains, you take Route 191 past the Crazy Mountains through Lewistown, Montana, to a 123,000-acre tract of former ranchland where human habitation is scant and the bison roam. The soil contains bentonite, a kind of clay used in makeup. The land gets a measly 11 inches of rain per year, rendering it unfit for most agricultural uses. When it rains in the spring, which isn’t often, the mud dries into hard ruts that last into the fall and jolt the spine of even the most deliberate driver.

I pull over to the side of the road and watch a red-tailed hawk hunting overhead. A pronghorn startles at my approach and nervously circles the spot where it dropped its newborns. The prairie undulates like a vast inland sea, with no cars or houses visible for miles in any direction. In the spring, the grasslands are dotted with purple, yellow, and white flowers, and waterfowl rise from the marshes and watering holes. By August, temperatures can hit 107 degrees and up. In the winter, the prairie roads are covered in six-foot drifts of snow in which animals and people freeze to death.

What makes this landscape so remarkable is the intimation of what some people see as the future of conservation in America, a future that many people who live here—who see themselves as the true conservationists—don’t want. It is a big, dreamy idea that began in academic journals and is now slowly making its way into county land registers. One day, if all goes according to plan, the surrounding ranches and public lands will be part of a 3 million-acre grassland reserve run by the American Prairie Foundation (APF), a private entity formed at the urging of the World Wildlife Fund (WWF) and bankrolled by big-name donors who believe that landscapes like this one should be objects of awestruck contemplation, rather than pastures for cattle.

The notion that people living on the Plains should cede their land to bison is rooted in a deliberately heretical 1987 article in the academic magazine Planning, titled “The Great Plains: From Dust to Dust.” Authored by professors Frank and Deborah Popper (he teaches at Princeton and Rutgers; she teaches at Princeton and the City University of New York), it suggested that a large portion of the Great Plains—comprising most of Montana, the Dakotas, Wyoming, and parts of six other Western and Midwestern states—would become almost completely depopulated within a single generation, and should therefore be “returned to its original pre-white state,” i.e., a bison range. The Poppers’ proposal, which they called the “buffalo commons,” struck many residents of the states in question as the very apogee of East Coast academic insanity, and few readers imagined that it would become a foundation of public policy in the West.

Yet the couple’s predictions are proving prescient. According to a 2009 Census report, nearly two-thirds of Great Plains counties declined in population between 1950 and 2007, with 69 of those counties losing more than half their people. As the people are leaving, the buffalo are multiplying. Major herds in Montana include the 3,900-head Yellowstone herd, the 6,000-head herd at Ted Turner’s ranch on the Gallatin River, and the 400-head National Bison Range herd near Flathead Lake, established by Congress at the turn of the 20th century to save the American bison from extinction.
Yet all of these efforts pale next to the APF’s plan to create a reserve the size of Connecticut in this stretch of Montana—an expansive savannah that looks like parts of Africa and is capable of supporting many thousands of bison. The core of this proposed range is an area twice the size of Seattle where the APF’s starter herd now resides. The foundation’s strategy is to buy up local ranches in order to gain control over associated grazing leases on vast expanses of public land, with the goal of returning a wide area of Montana to something resembling its pre-European state at an estimated cost of $450 million. (The group has raised $40 million so far.) With sufficient land to roam and grass to eat, a bison herd will naturally double every four or five years. The estimated number of bison required to maintain sufficient genetic diversity within a herd is around 400, and the APF is already more than halfway there. In January 2010, it received 96 new bison from Canada, descendants of the Pablo-Allard herd, one of the last significant free-ranging bison populations in the United States.

“This land is cheap, it’s for sale, and it’s intact prairie,” says Alison Fox, the APF staffer who brought me here. The county’s population is roughly 40 percent of its 1919 peak. The average rancher, Fox tells me, is 58 years old, and most people home-school their children because the schools are too few and far between. The people who make their lives in this environment are a special breed of individualists, but there are fewer and fewer of them, she says. That is why the bison are coming back.

BRYCE CHRISTENSEN is a huge, gentle, bald-headed man with a walrus mustache who spent more than three decades working for Montana Fish, Wildlife, and Parks, which was pretty much the dream job of every boy in Montana whose family didn’t own a profitable ranch. Now he spends his days managing the APF reserve, checking on the animals, and removing fences and other relics of human habitation from the plains. “We got about 14,000 acres of pasture open to the bison now,” he says, gesturing out at the range. “They’d like bigger, I think.” While Christensen has removed about 15 miles of fence thus far, the herd is capable of traveling well beyond the current pasture boundary in a single day of grazing, which means that there is always more fencing to be taken down.

Bison are a keystone of an ecosystem that once included more than 1,500 species of plants, 350 birds, 220 butterflies, and 90 mammals. Mountain plovers use bison wallows as nesting sites. Bison keep trees from invading the open grasslands by scoring them with their horns, and they disperse seeds by eating and excreting them. When the bison die, they are eaten in turn by bald eagles, ravens, and black-billed magpies.

A ride across the prairie in Christensen’s mud-caked Dodge truck reveals that this landscape is not as untouched as it looks. Every mile or so, we pass a man-made watering hole for cattle, which will eventually dry up or get choked out by invasive species. (See “Predator vs. Alien.”) One of the dominant plants here is crested wheatgrass, a hardy Russian perennial that the US government introduced to the Plains in the 1930s for use as forage. “It’s very challenging to get rid of crested,” Christensen admits. The pretty yellow flowers are sweet clover, another imported species. “We’re gonna do a series of 300- to 500-acre burns starting out by those trees,” he explains.
“After it burns, it will be low enough that the bison will eat it when the shoots come up.”

For all the effort required to restore the grasslands, perhaps the most challenging part of Christensen’s job in this ecosystem is to establish friendly working relationships with the local ranchers, whose way of life the APF ever-so-gently aims to eliminate. “Many of them want to stay,” he says when I ask him how the ranchers feel about selling their land. “If that changes, we would like to be seen as the buyer of choice.”

The Phillips County News has been particularly vociferous in its opposition to the project. Christensen mentions a series of cartoons that depicted the APF as skylarking weirdos whose idea of progress was to take the country back to the 1850s. “Circle up the wagons, folks,” editorialized a staff writer assigned to the story. “We’re about to be overrun by a bunch of eastern based nature lovers herding buffalo onto our range.” Christensen feels that the paper has let local passions get in the way of the facts. “I think the source of the fear,” he says, “is that we are very open about our dreams of what this landscape will look like 25 years from now.”

We climb a rise, and I encounter about 70 bison, old and young, disporting themselves on the beginnings of the reserve. A circle of huge bulls sits in the dirt, occasionally lifting their massive heads to survey the scene. The older bulls are distinguished by heavy beards and full bonnets. By comparison, the younger bulls look more like skinny hipsters from Williamsburg. A playful red calf, four to six weeks old, trails behind its mother, who is visiting with two other cows. Female calves remain with their mothers for up to three summers, while males leave earlier.

“I think that’s No. 5,” Christensen says, pointing to a large cow that has a reputation for charging visitors. Dragonflies flit through the tall grass as the air around them vibrates with the deep, resonant rumble of bison talking to each other about the weather, or whatever it is that they talk about. I notice that one of the calves has an injured leg and is trying to interest its mother in feeding it. “Would you help that animal?” I ask Christensen. “No,” he answers shortly. “We did have a mother die in childbirth,” he adds, after a pause. “Hey, that’s nature.” In the wildlife service, he might have shot a calf in the same predicament. “If you do shoot it,” he says, “it’s definitely going to die.” After half an hour of persistent effort, the limping calf convinces its mother that it’s not about to die, and she decides to let it nurse, swishing her tail with what seems like mild annoyance for the first 30 seconds or so as the hungry calf suckles. Perhaps encouraged by the unexpected strength of the response, she stands patiently as the calf feeds. “These bison have just done phenomenal,” Christensen says with evident satisfaction. “It’s just great country.”

WE WALK FOR A WHILE, our conversation punctuated by the crunch of dry grass underfoot and the gentle brushing of thistle, sage, and meadow foxtail—yet another invasive species that will be hard to eradicate. The new bison from Canada are doing fine, Christensen says. I will be able to identify them by the letters “CAN” branded into their flesh, a condition imposed by the US Department of Agriculture (USDA) to help prevent the spread of disease from bison to cattle.
While technically classified as livestock, bison are loathed by local ranchers, who worry that the animals might infect their cattle with anything from anthrax and mad cow disease to the dreaded bovine brucellosis, an infectious disease that causes cows to spontaneously abort their calves. The ranchers’ antipathy stems from a fear of damage to their herds, and also from what the bison have come to represent—namely, the desire of conservationists to destroy their way of life in the pursuit of a whimsical dream of returning to an imagined Eden.

The local Native American tribes, especially the Gros Ventre on the nearby Fort Belknap reservation, support the APF but are also wary about embracing the idea of a vast bison reserve with too much enthusiasm. Those living on the reservation are sometimes employed by the ranchers, whose very recent ancestors exterminated the bison in an astonishingly short period of time thanks to the Sharps buffalo rifle, a weapon that could take down a bull (or a warrior on horseback) at a distance of 1,500 yards. In 1881, when the Northern Pacific railroad reached Miles City, Montana, the seat of Custer County, a commercial buffalo hunter named Vic Smith killed 4,500 bison in less than a year, getting three dollars apiece for their hides. If money was the primary motive for buffalo hunting, the destruction of the economic and cultural life of the Plains tribes to make way for white settlement was a close second. By 1886, there were hardly any bison left in Montana.

We get back into Christensen’s truck and drive over to the Fort Peck Reservoir, formed by one of the largest earthen dams in the world, a massive public project that displaced many of the ranchers here from their original landholdings on the Missouri River. As we sit on a high bluff looking down on the empty lake, Christensen ponders the future of the dream to which he has dedicated himself. The land will return to its natural, untouched state. Families might come and camp here and see what Lewis and Clark saw.

We drive over to a ring of empty canvas yurts erected by the APF to host visitors. Among the recent guests was a group of Chinese conservation workers who slept out on the prairie and photographed bison with their new cameras.

A bank of hard gray clouds looms above the hills. I decide to watch the coming storm from inside the APF ranch house, which is furnished in comfortable Western style. I gaze out the living-room window at the high clouds shadowing low-slung power lines and a single gray metal Quonset hut. A tracing of a public map is laid out on a table by my side, next to a list of local ranchers and their landholdings. The map shows where each of their properties is located in relation to the reserve. It is an unsettling document, a roll call of families whose neighbors have long since given up on this inhospitable landscape: Kevin Cass, Gene Barnard, Bill French, and the Barthelness family, including Leo, Chris, Darla, and Leo Jr. Above my head is a handmade charm that contains a legend darkly etched in glass, which reads, “There’s nothing like a dream to create a future.”

MY READING for the evening is a journal article by a World Wildlife Fund staffer named Steve Forrest that lays out the innovative strategy on which the APF’s approach to conservation is founded, and which brought the APF to Phillips County, where it has purchased 12 ranches to date.
Whoever controls the ranches also controls the much larger tracts of grazing land that the ranchers lease from the Bureau of Land Management. Because these leases are tied to the ranch rather than the rancher, and because bison are the legal equivalent of cattle in the eyes of the USDA, each acre of ranchland the APF buys and stocks with bison turns into three acres of freshly minted nature park. Connect the private ranches and their leased BLM grazing lands with existing national parks and tribal lands, and a 3 million-acre grassland park is born.

Phillips County was chosen for this experiment because BLM lands comprise one-third of its land base, amounting to 1 million acres in a county with fewer than 4,000 residents. There are about 500 farms and ranch operations in the county, a manageable number given that these operations run a combined $1.7 million in the red, making it a buyer’s market.

At the WWF offices in Bozeman before my trip, Steve Forrest told me that bison were, in fact, a late addition to the grand vision of a grasslands reserve. The idea of using the animals to anchor the park emerged from a 2006 meeting of about 50 conservationists and biologists at Ted Turner’s ranch in New Mexico. Attendees produced a map of projected bison recovery on the Plains over the next 20, 50, and 100 years and issued what has become known as the Vermejo Statement, which reads in part: “Over the next century, the ecological recovery of the North American bison will occur when multiple large herds move freely across extensive landscapes within all major habitats of their historic range, interacting in ecologically significant ways with the fullest possible set of other native species, and inspiring, sustaining and connecting human cultures.” The statement’s origins, Forrest said, lay in the convergence of the WWF’s work on grasslands with the work of the biologist Jim Derr at Texas A&M University.

Forrest nods and smiles when I mention the Poppers, but he is quick to claim that they had little to do with the practical side of building a large-scale conservation project on the prairie—a fact that, while true, is also a way of eliding the politically damaging connection to a couple whose names have become a byword for the notion that humans are less important than bison.

Derr concluded that many of the wild bison in conservation herds lacked sufficient genetic purity to pass on the pristine bison genome. The introgression of cattle genes into bison herds can be the product of the deliberate cross-breeding of bison and cattle for commercial purposes, or it can happen naturally. Either way, many wild bison in the fabled conservation herds were, by the standards of the purists, hardly bison at all.

At Turner’s ranch, Derr presented his findings to an audience that Forrest describes as “anyone who had any bison weight at all.” The meeting, held in a gorgeous Spanish-style lodge once owned by the Pennzoil Company, was hosted by the Turner Foundation. “Ted was suddenly very interested, because one of the herds tested that did not have cattle DNA was his herd,” Forrest explains. “The message was that we could lose wild bison. It was critical.” Turner himself attended some of the sessions along with his head rancher Marv Jensen. After the sessions, the attendees dined on bison steak and plotted out a practical path to making their dream a reality.
As the largest mammal native to North America, bison are “charismatic megafauna” capable of attracting human backing for conservation in a way that, say, prairie dogs can’t. The bison’s physical charisma, its place in the American historical imagination, and its role in the prairie ecosystem—not to mention its legal status as a type of cow—made it the perfect anchor for the grassland park the WWF hoped to establish.

Forrest sees the landscapes he seeks to preserve as being very much like great works of art. “It’s not like we are creating this for some alien race to appreciate later,” he told me, drumming his fingers on a wooden conference table. “We take out a few fences, we knock out a few power poles, and that’s it: That’s what the first humans saw when they stood on a tall peak looking down at the valley. That is a really compelling, spiritually important thing that we need.” Through such visions, Forrest says, human beings can be brought to appreciate their connectedness to other organisms and to a larger ecosystem—a conservationist’s version of the emotions that animate our romantic attachments to art and religion.

The fact that Forrest’s plan pushes this aesthetic of harmonious interconnectedness by means of old-fashioned legal and political leverage and donations from wealthy benefactors has not been lost on some residents of Phillips County. For ranchers, the creation of a buffalo commons is not an act of restorative devotion but the forced transformation of the farms and ranchlands to which they and their families have devoted decades of toil.

The APF’s cause was not helped by the appearance of a Montana Fish, Wildlife, and Parks manager named Arnie Dood at the annual meeting of the Phillips County Livestock Association last June. According to the Phillips County News, Dood told the ranchers they could “either be a part of setting what the future is going to look like” or they could sit around and complain; he added that “people are xenophobic in Central Montana.” (Dood denies making the comment. “What does that word mean?” he asks, laughing.) The paper also ran a cartoon depicting a city slicker seated on the rear end of a sleeping bison, heading toward the year 1850; he holds a pamphlet bearing the initials “WWF,” “APF,” and “PETA.”

In editorials, articles, and letters, the News and its readers gleefully attacked the APF from every angle—fears that the buffalo would infect people’s cattle with brucellosis, the elitism of reserve proponents, the return of wolves to the prairie. (See story below.) In one of many angry letters to the editor, retired wildlife biologist Jack D. Jones railed against the influence of “global organizations like the WWF” and warned that “this could be shoved down our throats, like ‘Obamacare.'” The paper covered one meeting where more than 200 county residents aired their fears. Among them was Rose Stoneberg, a local activist and landowner, who warned that the APF would try to get the land through monument designation rather than paying landowners. Maxine Korman, another local, was paraphrased by the paper as explaining that the grasslands plan was only a small part of “an apparent United Nations project designed at creating a wildlands refuge extending from the Yucatan to the Yukon.” At another public meeting, the News noted, someone proposed a vote on the idea of allowing free-roaming bison onto the Plains. The tally was 92 against, zero in favor.
GENE BARNARD, a fit 92-year-old rancher whose spread is part of the proposed reserve, lives next door to the APF base camp in a group of houses inhabited by family members, dogs, trucks, and old machinery. He knows as much about Phillips County as anyone alive. Since he was born here in 1918, the county’s population has declined by almost 60 percent.

When I get out of my truck inside Barnard’s family compound, I am surrounded by a pack of five barking dogs who hold me at bay until the rancher’s son-in-law, Jerry Mahan, arrives to rescue me. A tall man with long sideburns and sun-frazzled hair, he wears a frayed army jacket, sunglasses, and jeans, all seeming better suited to a crisp fall day than to the current 98-degree heat. In addition to giving him a certain resemblance to the late Hunter S. Thompson, the layers protect him from the swarms of dive-bombing mosquitoes that breed in the gullies after it rains. When I suggest that driving on the badly rutted dirt roads must be especially difficult during the brief wet season, he nods. “Three feet of gumbo and 800 feet of sand,” he says, jerking his thumb towards the road, and then nodding towards the open prairie beyond. “We’ve taken three people out in body bags. This country isn’t forgiving.”

He walks me to the door of the small clapboard house where Gene Barnard is awaiting my visit at a kitchen table piled high with newspapers and ketchup bottles. He rises to greet me, then bends down to chase a pair of yapping dogs with a plastic fly swatter. He apologizes and offers me a seat.

His parents came to Montana before the First World War, he says, lured by the promise of bountiful land. “The Great Northern Railway, they put out advertisements everywhere. Grain this high,” he chuckles, holding up his hand to the level of his chest. “It didn’t take too long to find out, coming out here from places with 20 to 30 inches of rain, that 10 inches was the short end of the stick.”

Barnard’s father, two uncles, and a brother-in-law migrated from Virginia to North Dakota to Montana, and his father took a job as a teacher before filing a claim on some upland pasture. When I ask why his father claimed such undesirable land, he tells me that he had the same question. “I asked him, ‘My God, when there’s all this lowland, why’d you file on upland?’” Barnard remembers. “He said the sheep was eating it off, it was all green then.”

The family ranch eventually grew to 23,000 acres, but life was always hard. As a boy, Barnard remembers being sent out to pick beans and gooseberries and to weed the garden before breakfast. When he was 10 years old, he managed a team of horses and mowed hay. The days were long, and after dinner the family generally went straight to bed. It wasn’t until his family purchased an Aladdin lamp in the 1930s, he says, that reading was possible after dark. “They advertised it on the radio station we got from Salt Lake City,” he remembers with a laugh. “They said, ‘You can tell a fly from a raisin.’”

There were those who couldn’t stand the isolation of a life in which the nearest town, Malta, was two days away on horseback; those who were moved off their unprofitable land by the Resettlement Administration during the Great Depression; and those who simply found better opportunities elsewhere. Even as his neighbors drifted away, Barnard made a determined choice to stay on the prairie. When I ask him what he likes best about a life that most people would consider difficult, his face lights up. “I guess it’s a challenge,” he says.
“Kinda keeps your brain in gear.” Pasture your cattle too high up in the winter, and you will pay for it in the spring. Fail to ensure enough access to water in the summer, and cattle will die. “You’d go to the neighbors, go through their cattle, see if yours got mixed in. Then you ride out. The only sounds you’d hear would be the glacial gravel. Our skies were clearer then, too. It gives you a sense of reverence. We’re only looking at a small segment of this. You’d see thousands of stars. You just are awed by the power of God. It must be the same thing when the man wrote, ‘How great thou art.’”

It strikes me that Barnard’s reverence for nature and his feeling of communion with something larger than himself are akin to the emotions driving conservationists like Steve Forrest, who hope to turn Barnard’s ranch into a bison range. In the eyes of Barnard and his fellow ranchers, the APF represents outsiders who are interfering with nature, while the ranchers themselves are the true conservationists.

The main reason these men and women stick it out here so stubbornly is that they love the land. Barnard’s father lived here well into his nineties, and three of his four children live within 50 miles of the ranch. “The ranchers have been here for 125 years, experiencing everything,” he tells me. As for the APF, Barnard flatly dismisses them as “promoters” who are raising the property assessments of working ranchers by overpaying for land. When I suggest that the reserve will allow tourists to experience the same kind of communion with nature that he so movingly described to me, he laughs. “If the people from back East come out here and roll their windows down, and that 107-degree heat hits them along with a swarm of mosquitoes, then they’re going to get the hell out of here.”

A WEEK LATER, at an Italian restaurant on the Upper East Side of Manhattan, I meet Frank and Deborah Popper, whose article popularized the idea of turning the Great Plains into a bison range more than 20 years ago. A bullet-headed man in his sixties, Frank offers a winning mix of intellectual enthusiasm and neediness, backed by lightning flashes of insight. Deborah, who plays the stable one in the Poppers’ intellectual partnership, brings her husband down to earth with shrewd, practical insights. He was raised in Chicago while she grew up in New York. They finish each other’s sentences and contradict each other with the self-aware rhythms of a married couple enjoying an old soft-shoe routine.

For Frank, the fate of the Great Plains is already sealed. “It’s a done deal,” he says flatly. “The only question is how the deal gets done.” The year-by-year fluctuations in the population and economic well-being of rural counties cannot obscure a general downward trend that continues to strip the Plains of inhabitants. “Not only have the pressures on the rural parts of the Plains continued to mount, but there are new pressures that didn’t exist back in 1987,” Frank adds. The Cold War is over. Most of the region’s missile silos and Air Force bases have closed down.

The buffalo commons offers a compelling narrative to describe the mess that human beings have made of the prairie. “This is a decline-and-redemption story, with the buffalo commons being the agent of redemption,” Frank says, as Deborah shoots him a warning glance. “That sounds very pompous,” he adds quickly. “It’s a story of decline insofar as it involves the white settlers, the army who nearly kill off the buffalo,” he explains.
“Then you have the Dust Bowl, depopulation; all the environmental indicators fall, like the water table.” Where human settlement has failed, the bison might offer us a fresh beginning.

That promise of redemption is why the Poppers’ idea has attracted the interest and support of famous writers—including Anne Matthews of Princeton, whose 1992 book Where the Buffalo Roam profiled the Poppers and their work and was a finalist for the Pulitzer Prize in nonfiction, and the novelist Annie Proulx, who included a funny and approving passage about the Poppers in one of her Great Plains novels. In 2004, former Kansas governor Mike Hayden, once a fierce critic of the buffalo commons, publicly endorsed the idea, in part because Kansas is famously short on public land, and most of its counties are losing population. The commons “makes more sense every year,” he told the Kansas City Star in 2009.

Yet it is possible to read the story of the Plains and its settlers another way. When I tell the Poppers about my visit with Gene Barnard, Deborah nods. “In the writing of that ’87 piece, we had a long argument about how to finish it,” she says, turning to her husband. “One of the things that struck me more than it struck you was that there would always be people that would insist on staying.”

Frank nods as he chews through a forkful of pasta. “I am absolutely in awe of the guy,” he proclaims, shifting into enthusiast mode. “An older pioneer type who you don’t see in the Berkshires. I am fascinated by that.”

“It’s a beautiful narrative,” Deborah says, putting her fork down on the table and taking a contemplative sip of white wine. “It’s painful in many ways. Our question is, ‘Can you change the vision and the incentives?’ How do we fix what we screwed up?”

I tell her about the view of the prairie from the top of a buffalo jump near the APF’s base camp, where native hunters stampeded short-sighted bison off a sheer 150-foot cliff. The scene of waving grass, exposed riverbeds, and rolling green land below seemed iconic, like something drawn from the collective human unconscious, I add, as Deborah nods.

Part of the paradox of our appreciation of nature is that we put ourselves in the landscape even as we want to remove ourselves from it, I suggest. Out on the prairie, shadows of passing clouds move across the open spaces just as the landscape itself is shadowed by the human presence, light but always visible in the man-made scars on a nearby rock—perhaps recording a bison kill—or the outlines of a vanished corral. Removing ranchers from the land to which they have given their lives is no less a deliberate and destructive human act than exterminating bison. An empty landscape that reminds us of the origins of our species is no less a reflection of human imagination and priorities than a ranch. The imagined past is the same as the imagined future. Both are figments of our imagination. The question is, which do we value more?
David Stock and the National Tuberculosis Advisory Committee (NTAC)

The primary role of any tuberculosis (TB) control program is to ensure the prompt identification and effective treatment of active disease. The host immune system often succeeds in containing the initial (or primary) infection with Mycobacterium tuberculosis (Mtb), but may fail to eliminate the pathogen. The persistence of viable organisms explains the potential for the development of active disease years or even decades after infection. This is known as latent tuberculosis infection (LTBI) although, rather than a distinct entity, this probably represents part of a dynamic spectrum.1 Individuals with LTBI are asymptomatic and it is therefore clinically undetectable.

The World Health Organization (WHO) estimates that one-third of the global population has been infected with Mtb,2 with the highest prevalence of LTBI in countries/regions with the highest prevalence of active disease.3 In 2013, 88% of 1322 notifications in Australia were in the overseas-born population (incidence 19.5 per 100,000 v. 1.0 per 100,000), with this proportion rising over the course of the last decade.4 Combined with epidemiological evidence of low local transmission, this strongly implies that the vast majority resulted from reactivation of latent infection acquired prior to immigration.5, 6 Contrasting trends in TB incidence in other developed countries probably reflect differences in policy regarding LTBI.7

Conclusion: The diagnosis and treatment of LTBI represents an important opportunity for intervention by jurisdictional TB control programs. The development of initiatives with the ultimate goal of eliminating TB on a global scale could lead one to conclude that highly inclusive, if not universal, testing should be undertaken in order to optimise capture of LTBI. However, such an undertaking would be prohibitively expensive, impractical and would inevitably compromise the positive predictive value of the chosen test(s). We must therefore focus our resources on at-risk groups who would benefit from treatment. Given the very low rates of transmission within Australia, it is clear that progressing toward TB elimination is largely contingent on the implementation of strategies to detect and treat LTBI in migrants from high incidence countries.

The 2 key factors to take into account when identifying an individual or population at risk are the pre-test probability (PTP) of LTBI and the risk of progression to active disease.

Increased PTP of LTBI:
- Close (household) contacts of pulmonary TB
- Migrants# from countries with a high incidence of TB*
- Healthcare workers from settings with high TB incidence8

# Migrants comprise those who have moved to Australia with the intent of staying long-term and those whose residence is time-limited, e.g. overseas students
* A cohort study from the United Kingdom showed that programmatic testing of migrants from countries with a range of TB incidence thresholds from 40 to 250 per 100,000 would be cost-effective and identify the majority of individuals with LTBI.9 Lowering the threshold to 40 in 100,000, as recommended by the National Institute for Health and Clinical Excellence (NICE)10, while also cost-effective, substantially increases the cohort size, consequently increasing the workload for local TB services.11 A recent publication supports the implementation of a similar, targeted strategy across Australia,12 although there is, as yet, no local cost-effectiveness data.
Increased risk of progression to active disease:
- Evidence of recent infection
- Fibrotic change consistent with TB on chest radiograph without history of previous treatment
- Human immunodeficiency virus (HIV) infection
- Other co-morbidities, including silicosis, renal failure (chronic kidney disease stage V), poorly controlled diabetes mellitus, certain malignancies (haematological, head & neck, lung), previous gastrectomy or jejuno-ileal bypass surgery, malnutrition and alcohol abuse
- Treatment with anti-tumour necrosis factor alpha (anti-TNFα) agents
- Solid organ transplant recipients
- Other immunosuppressive therapy, including long-term oral corticosteroids (prednisolone ≥15mg/day or equivalent)
- Young children, especially those aged <5 years

NTAC recommends that the following groups be tested for LTBI:
- Those identified by contact tracing within Australia
- Migrants (from any country) with a history of TB contact within the last 2 years
- Migrants from countries with a high incidence of TB* who are:
  - aged 35 or under, or
  - aged over 35 with one or more risk factors for reactivation

Prioritisation of recent migrants and those staying permanently is advisable. NTAC acknowledges that implementing this recommendation may require an increase in resources, and the relative importance of competing demands would need to be carefully considered at a jurisdictional level.

Testing should also be considered in the following groups:
- People living with HIV infection
- Patients commencing anti-TNFα therapy
- Patients being assessed for solid organ transplantation
- Australian residents returning from a prolonged period working in a healthcare setting in a high incidence country, and migrants from high incidence settings intending to work in Australian healthcare settings

Testing should generally be performed on an intention-to-treat basis, i.e. on the understanding that a diagnosis of LTBI will result in an offer of treatment. An individual risk-benefit assessment should be undertaken to inform this decision.

Two types of tests are currently in use for the diagnosis of LTBI in Australia: the tuberculin skin test (TST) and the interferon-gamma release assay (IGRA). Both TST and IGRA are indirect tests, demonstrating immune sensitisation to Mtb. They cannot, therefore, distinguish between elimination and persistence of Mtb following primary infection. There is also the potential for test failure in the setting of impaired host immunity. Furthermore, neither test can distinguish between recent and distant infection, nor reliably predict progression to active disease.

The principal advantage of IGRA over TST is in terms of specificity. Unlike TST, the test outcome is unaffected by previous testing, Bacille Calmette-Guerin (BCG) vaccination (typically high coverage in at-risk populations) or exposure to non-tuberculous mycobacteria (NTM), with the exception of M. marinum, M. szulgai and M. kansasii. The major drawbacks have been the relative unfamiliarity and higher cost of the test. There is also a possibility that an indeterminate result may be returned due to failure of the positive or negative control.

Both TST and IGRA are acceptable for the diagnosis of LTBI. This is consistent with recently published WHO guidelines13 on testing in high income, low prevalence countries. For further information please refer to the following NTAC document: Position Statement on Interferon-γ Release Immunoassays in the Detection of Latent Tuberculosis Infection.
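The testing recommendations above reduce to a small amount of branching logic. The sketch below restates them in Python for illustration only: the function and field names are hypothetical, the high-incidence cutoff of 100 per 100,000 is taken from the footnote later in this document, and the clinical groups listed separately (HIV infection, anti-TNFα therapy, transplant assessment, healthcare workers) are deliberately omitted for brevity.

```python
# Minimal sketch of the NTAC LTBI testing recommendations above.
# Illustrative only -- not a substitute for clinical judgement.

HIGH_INCIDENCE = 100  # per 100,000, the footnoted WHO-based cutoff


def recommend_testing(age: int,
                      identified_by_contact_tracing: bool,
                      tb_contact_within_2_years: bool,
                      origin_incidence_per_100k: float,
                      reactivation_risk_factors: int) -> bool:
    """Return True if the person falls into a recommended testing group."""
    if identified_by_contact_tracing:
        return True
    if tb_contact_within_2_years:  # migrants from any country
        return True
    if origin_incidence_per_100k >= HIGH_INCIDENCE:
        # High-incidence migrants: everyone aged 35 or under, and those
        # over 35 with at least one risk factor for reactivation.
        return age <= 35 or reactivation_risk_factors >= 1
    return False


# Example: a 42-year-old migrant from a high-incidence country with diabetes
print(recommend_testing(42, False, False, 150, 1))  # True
```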
The following represents a reasonable approach to the interpretation of TST results. Regard as TST-positive if:
- ≥ 10mm but < 15mm with a history of close contact or an abnormal chest radiograph (calcified nodules, upper lobe fibrosis). Consider performing IGRA as a supplementary test
- ≥ 5mm but < 10mm in those aged < 5 years with high PTP AND increased risk of progression to active disease. If aged ≥ 2 years, consider performing IGRA as a supplementary test, noting that indeterminate IGRA results may be more likely in very young children14
- ≥ 5mm and immunosuppressed (IGRA can be performed concurrently – treat if either is positive)

Treatment can be offered to HIV-infected and pre-school age contacts of an infectious case without prior testing, in recognition of their susceptibility to meningitis and disseminated infection.15

A diagnosis of LTBI requires that active disease be excluded. Prior to initiation of LTBI treatment, patients should have a chest x-ray performed, sputum (induced if necessary) cultured for TB where feasible, and be reviewed by a clinician with experience in the diagnosis and management of TB.

Isoniazid (INH) - This is the most widely used and evidence-based regimen, although there is debate as to the most appropriate duration of treatment. A risk reduction in excess of 90% can be achieved after 12 months of daily self-administration but, as one would expect, field effectiveness is compromised by declining adherence.16 Extrapolation from trial data had suggested that treatment for 9 months is optimal17, but subsequent meta-analyses concluded that there is no demonstrable benefit in continuing beyond 6 months.18, 19 There are no head-to-head studies of 6 versus 9 months of INH. There is evidence of an extremely durable treatment response in a low-prevalence setting.20 The principal safety concern has been the hepatotoxic potential of this drug21, although more recent data has shown very low rates of significant hepatitis, perhaps as a result of better patient selection and treatment monitoring22. Dose: 10mg/kg daily, up to a maximum of 300mg.

Rifampicin (RIF) - The challenges faced by both physicians and patients in trying to maintain treatment adherence over many months led to a search for equally effective but shorter regimens. Evidence in the literature is limited to a single trial in silicosis patients23 and some observational data24, 25. The Centers for Disease Control and Prevention (CDC) in the United States recommend treatment for 4 months as an alternative to INH.26 Acceptable safety and completion rates have been established for this regimen27 and a trial is currently recruiting in an attempt to address the lack of efficacy data.28 RIF is a cytochrome P450 inducer and the potential for drug interactions may need to be carefully considered. Dose: 10mg/kg daily, up to 600mg.

Rifampicin plus isoniazid (RIF-INH) - This combination has been shown to have an equivalent efficacy and safety profile to INH.29 Evidence supporting its use in the treatment of LTBI comes predominantly from studies conducted in children.30, 31, 32 Daily treatment for 3 months is recommended as an alternative to INH monotherapy by NICE10 but is not in widespread use in Australia.

Rifapentine plus isoniazid - Rifapentine is a potent, long-acting rifamycin. An open-label study of weekly, directly observed therapy (DOT) with this combination for 12 weeks showed non-inferiority to 9 months of daily, self-administered INH.33 It also appears to be efficacious and well-tolerated in HIV-infected adults.34 Durability of response and performance in certain settings (e.g. no DOT, age < 2yrs) are yet to be established. This regimen is now recommended by the CDC.35 It is not currently registered for use in Australia but is the subject of considerable interest.
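Since both daily regimens above use the same weight-based formula with different caps, the calculation can be expressed in a few lines. The sketch below is illustrative only, using just the figures quoted above (10mg/kg daily, capped at 300mg for INH and 600mg for RIF); the function name is hypothetical and dosing decisions remain a matter for the treating clinician.

```python
# Illustrative capped weight-based dose calculation for the daily
# regimens described above. Not clinical advice.

DOSE_CAP_MG = {"INH": 300, "RIF": 600}
MG_PER_KG = 10


def daily_dose_mg(drug: str, weight_kg: float) -> float:
    """Return the daily dose in mg: 10mg/kg, capped per drug."""
    return min(MG_PER_KG * weight_kg, DOSE_CAP_MG[drug])


print(daily_dose_mg("INH", 25.0))  # 250.0 -- below the 300mg cap
print(daily_dose_mg("INH", 70.0))  # 300 -- cap applies
print(daily_dose_mg("RIF", 70.0))  # 600 -- cap applies
```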
Important - Rifampicin-pyrazinamide (RIF-PZA) is not generally recommended due to an unacceptable risk of significant hepatotoxicity in non-HIV infected individuals.26

NTAC recommends that:
- INH for 6-9 months is the standard of care for the treatment of LTBI in adults
- RIF-INH for 3 months is an acceptable alternative, especially when treating LTBI in children
- Rifampicin for 4 months can be used in the event of intolerance of INH or infection by a suspected/known INH-resistant organism

Infection with a multi-drug resistant (MDR) organism
Isoniazid and rifamycins are unlikely to be effective in the setting of MDR-TB infection. As is the case for fully drug-susceptible organisms, the great majority will not progress to active disease. The potential consequences of MDR-TB transmission are, however, substantial and contacts should therefore be managed by an experienced TB physician. Fluoroquinolone-based preventative therapy has been used in Australia and internationally, with accumulating evidence relating to the magnitude of protection provided36, 37. Regardless of the preventative therapy administered, contacts should be closely monitored for signs of active disease for at least 2 years.38

Routine monitoring of liver function is not necessary in those under 35 without risk factors (regular alcohol consumption, pre-existing liver disease). Otherwise liver function tests should be checked monthly for a minimum of 3 months. Transaminases over 5 times the upper limit of normal (ULN), according to your local laboratory reference range, should prompt cessation of treatment, with a lower cut-off of 3 times the ULN if symptoms are present. All patients should be educated about the symptoms of hepatitis and advised to stop treatment pending assessment by a doctor if they are concerned.
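The stopping rule above is a simple threshold test; the sketch below restates it for illustration, with hypothetical names, assuming transaminase results are expressed as multiples of the local laboratory's ULN. It is not a substitute for clinical assessment.

```python
# Illustrative restatement of the monitoring rule above: stop treatment
# when transaminases exceed 5x ULN, or 3x ULN if symptoms are present.

def should_stop_treatment(transaminase_x_uln: float, symptomatic: bool) -> bool:
    threshold = 3.0 if symptomatic else 5.0
    return transaminase_x_uln > threshold


print(should_stop_treatment(4.2, symptomatic=False))  # False -- continue and recheck
print(should_stop_treatment(4.2, symptomatic=True))   # True -- exceeds 3x ULN with symptoms
```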
* ≥ 100 per 100,000, based on WHO estimates (http://who.int/tb/country/data/profiles/en/). This threshold has been chosen by consensus, considering both the epidemiological risk of LTBI and cohort size. Targeted testing of migrants from countries with an incidence of 40-99 per 100,000 should be considered where resourcing is favourable or where underlying medical conditions suggest a significant risk of disease progression or severe manifestations of disease not otherwise specified in the above recommendations.
** http://www1.health.gov.au/internet/main/publishing.nsf/Content/cda-cdi3601i.htm Please note this position statement is currently under review and an updated version will be published shortly.
*** Pyridoxine (Vitamin B6) 25mg daily may be co-prescribed in adults for all regimens containing isoniazid to minimise the risk of peripheral neuropathy.

The author would like to acknowledge the National Tuberculosis Advisory Committee members both past and present (in alphabetical order): Associate Professor Anthony Allworth, Dr Ral Antic, Dr Ivan Bastian, Mr Philip Clift, Dr Jo Cochrane, Dr Chris Coulter (Chair), Associate Professor Justin Denholm, Dr Paul Douglas, Dr Steve Graham, Dr Jennie Hood, Clinical Associate Professor Mark Hurwitz, Dr Vicki Krause, Mr Chris Lowbridge, Professor Ben Marais, Ms Rhonda Owen, Ms Tracie Reinten, Dr Richard Stapledon, Dr David Stock, Dr Brett Sutton, Ms Cindy Toms, Dr Justin Waring; with Dr Anna Colwell (Medical Advisor) and the NTAC Secretariat from the Department of Health.

Dr David Stock
Staff Specialist, Respiratory and General Medicine
Royal Hobart Hospital

References
- Barry CE, 3rd, Boshoff HI, Dartois V, Dick T, Ehrt S, Flynn J, et al. The spectrum of latent tuberculosis: rethinking the biology and intervention strategies. Nat Rev Microbiol 2009;7(12):845-855.
- Dye C, Scheele S, Dolin P, Pathania V, Raviglione MC. Consensus statement. Global burden of tuberculosis: estimated incidence, prevalence, and mortality by country. WHO Global Surveillance and Monitoring Project. JAMA 1999;282(7):677-686.
- Houben RM, Dodd PJ. The Global Burden of Latent Tuberculosis Infection: A Re-estimation Using Mathematical Modelling. PLoS Med 2016;13(10):e1002152.
- Toms C, Stapledon R, Waring J, Douglas P. Tuberculosis notifications in Australia, 2012 and 2013. Commun Dis Intell Q Rep 2015;39(2):E217-235.
- Globan M, Lavender C, Leslie D, Brown L, Denholm J, Raios K, et al. Molecular epidemiology of tuberculosis in Victoria, Australia, reveals low level of transmission. Int J Tuberc Lung Dis 2016;20(5):652-658.
- Gurjav U, Outhred AC, Jelfs P, McCallum N, Wang Q, Hill-Cawthorne GA, et al. Whole Genome Sequencing Demonstrates Limited Transmission within Identified Mycobacterium tuberculosis Clusters in New South Wales, Australia. PLoS One 2016;11(10):e0163612.
- Ormerod LP. Further evidence supporting programmatic screening for, and treatment of latent TB Infection (LTBI) in new entrants to the UK from high TB prevalence countries. Thorax 2013;68(3):201.
- Waring J, National Tuberculosis Advisory Committee. National Tuberculosis Advisory Committee Guideline: Management of Tuberculosis Risk in Healthcare Workers in Australia. Communicable Diseases Intelligence 2017;in press.
- Pareek M, Watson JP, Ormerod LP, Kon OM, Woltmann G, White PJ, et al. Screening of immigrants in the UK for imported latent tuberculosis: a multicentre cohort study and cost-effectiveness analysis. Lancet Infect Dis 2011;11(6):435-444.
- National Institute for Health and Clinical Excellence. Tuberculosis: Clinical diagnosis and management of tuberculosis, and measures for its prevention and control. London; 2011.
- Pareek M, Bond M, Shorey J, Seneviratne S, Guy M, White P, et al. Community-based evaluation of immigrant tuberculosis screening using interferon gamma release assays and tuberculin skin testing: observational study and economic analysis. Thorax 2013;68(3):230-239.
- Denholm JT, McBryde ES. Can Australia eliminate TB? Modelling immigration strategies for reaching MDG targets in a low-transmission setting. Aust N Z J Public Health 2014;38(1):78-82.
- World Health Organization. Guidelines on the Management of Latent Tuberculosis Infection. Geneva; 2015.
- Connell TG, Tebruegge M, Ritz N, Bryant PA, Leslie D, Curtis N. Indeterminate interferon-gamma release assay results in children. Pediatr Infect Dis J 2010;29(3):285-286.
- Tuberculosis Coalition for Technical Assistance. International Standards for Tuberculosis Care (ISTC). The Hague; 2009.
- Efficacy of various durations of isoniazid preventive therapy for tuberculosis: five years of follow-up in the IUAT trial. International Union Against Tuberculosis Committee on Prophylaxis. Bull World Health Organ 1982;60(4):555-564.
- Comstock GW. How much isoniazid is needed for prevention of tuberculosis among immunocompetent adults? Int J Tuberc Lung Dis 1999;3(10):847-850.
- Smieja MJ, Marchetti CA, Cook DJ, Smaill FM. Isoniazid for preventing tuberculosis in non-HIV infected persons. Cochrane Database Syst Rev 2000(2):CD001363.
- Akolo C, Adetifa I, Shepperd S, Volmink J. Treatment of latent tuberculosis infection in HIV infected persons. Cochrane Database Syst Rev 2010(1):CD000171.
- Comstock GW, Baum C, Snider DE, Jr. Isoniazid prophylaxis among Alaskan Eskimos: a final report of the Bethel isoniazid studies. Am Rev Respir Dis 1979;119(5):827-830.
- Kopanoff DE, Snider DE, Jr., Caras GJ. Isoniazid-related hepatitis: a U.S. Public Health Service cooperative surveillance study. Am Rev Respir Dis 1978;117(6):991-1001.
- Nolan CM, Goldberg SV, Buskin SE. Hepatotoxicity associated with isoniazid preventive therapy: a 7-year survey from a public health tuberculosis clinic. JAMA 1999;281(11):1014-1018.
- A double-blind placebo-controlled clinical trial of three antituberculosis chemoprophylaxis regimens in patients with silicosis in Hong Kong. Hong Kong Chest Service/Tuberculosis Research Centre, Madras/British Medical Research Council. Am Rev Respir Dis 1992;145(1):36-41.
- Polesky A, Farber HW, Gottlieb DJ, Park H, Levinson S, O’Connell JJ, et al. Rifampin preventive therapy for tuberculosis in Boston’s homeless. Am J Respir Crit Care Med 1996;154(5):1473-1477.
- Villarino ME, Ridzon R, Weismuller PC, Elcock M, Maxwell RM, Meador J, et al. Rifampin preventive therapy for tuberculosis infection: experience with 157 adolescents. Am J Respir Crit Care Med 1997;155(5):1735-1738.
- Update: adverse event data and revised American Thoracic Society/CDC recommendations against the use of rifampin and pyrazinamide for treatment of latent tuberculosis infection--United States, 2003. MMWR Morb Mortal Wkly Rep 2003;52(31):735-739.
- Ziakas PD, Mylonakis E. 4 months of rifampin compared with 9 months of isoniazid for the management of latent tuberculosis infection: a meta-analysis and cost-effectiveness study that focuses on compliance and liver toxicity. Clin Infect Dis 2009;49(12):1883-1889.
- Menzies D. Randomized Clinical Trial Comparing 4RIF vs. 9INH for LTBI Treatment-effectiveness. ClinicalTrials.gov 2009. Report No.: NCT00931736. Available from: https://clinicaltrials.gov/ct2/show/NCT00931736
- Ena J, Valls V. Short-course therapy with rifampin plus isoniazid, compared with standard therapy with isoniazid, for latent tuberculosis infection: a meta-analysis. Clin Infect Dis 2005;40(5):670-676.
- Panickar JR, Hoskyns W. Treatment failure in tuberculosis. Eur Respir J 2007;29(3):561-564.
- Spyridis NP, Spyridis PG, Gelesme A, Sypsa V, Valianatou M, Metsou F, et al. The effectiveness of a 9-month regimen of isoniazid alone versus 3- and 4-month regimens of isoniazid plus rifampin for treatment of latent tuberculosis infection in children: results of an 11-year randomized study. Clin Infect Dis 2007;45(6):715-722.
- Bright-Thomas R, Nandwani S, Smith J, Morris JA, Ormerod LP. Effectiveness of 3 months of rifampicin and isoniazid chemoprophylaxis for the treatment of latent tuberculosis infection in children. Arch Dis Child 2010;95(8):600-602.
- Sterling TR, Villarino ME, Borisov AS, Shang N, Gordin F, Bliven-Sizemore E, et al. Three months of rifapentine and isoniazid for latent tuberculosis infection. N Engl J Med 2011;365(23):2155-2166.
- Martinson NA, Barnes GL, Moulton LH, Msandiwa R, Hausler H, Ram M, et al. New regimens to prevent tuberculosis in adults with HIV infection. N Engl J Med 2011;365(1):11-20.
- Recommendations for use of an isoniazid-rifapentine regimen with direct observation to treat latent Mycobacterium tuberculosis infection. MMWR Morb Mortal Wkly Rep 2011;60(48):1650-1653.
- Marks SM, Mase SR, Morris SB. Systematic Review, Meta-analysis, and Cost-effectiveness of Treatment of Latent Tuberculosis to Reduce Progression to Multidrug-Resistant Tuberculosis. Clin Infect Dis 2017;64(12):1670-1677.
- Denholm JT, Leslie DE, Jenkin GA, Darby J, Johnson PD, Graham SM, et al. Long-term follow-up of contacts exposed to multidrug-resistant tuberculosis in Victoria, Australia, 1995-2010. Int J Tuberc Lung Dis 2012;16(10):1320-1325.
- Fox GJ, Dobler CC, Marais BJ, Denholm JT. Preventive therapy for latent tuberculosis infection-the promise and the challenges. Int J Infect Dis 2017;56:68-76.
The Biblical Story of Ishmael and Isaac: An Analysis and Comparison with the Islamic Narrative

Originally Published: January 11, 2014
Updated: March 29, 2015

“Say ye: ‘We believe in Allah, and the revelation given to us, and to Abraham, Ismael, Isaac, Jacob, and the Tribes, and that given to Moses and Jesus, and that given to (all) prophets from their Lord: We make no difference between one and another of them: And we bow to Allah (in Islam).’” – The Holy Quran, Surah Al-Baqarah, 2:136

The Biblical story of Abraham’s two sons, Ishmael and Isaac, is at the heart of theological and historical disagreements between Jews and Christians on the one hand and Muslims on the other. It is among many issues which have been debated by followers of the three Abrahamic religions for centuries and remains highly contentious even to the present day. What is it about the story as it is told in the Judeo-Christian tradition which puts it at stark contrast with that of the Islamic tradition? Is the acceptance of the Biblical version justified for Jews and Christians, or are they the victims of an insidious deception which has caused them to reject the truth? In this article, we will discuss this possibility. We will first summarize the Biblical version of the story of Ishmael and Isaac (peace be upon them). Following the summary will be an analysis of the Biblical story to discuss the internal contradictions and inconsistencies which ravage the text and yet which somehow have remained hidden from the vast majority of Jews and Christians. Finally, we will summarize the Islamic version and compare it to the Biblical one. It is hoped that from this objective analysis, the reader will find that the Islamic version, and not the Biblical one, is much more deserving of acceptance.

The Biblical Story

The Biblical story of Abraham and his two sons is found in the first book of the Hebrew Bible, Genesis. For the purposes of this article, we will concentrate specifically on the contents of Genesis 16-18, 21-22, and 25. As the story goes, Abraham and Sarah (originally known as Abram and Sarai, respectively) had both grown old and yet still had no child. Desperate to “build a family”, Sarah urged Abraham to impregnate her slave Hagar, who was an Egyptian. After some tensions developed between Sarah and the now-pregnant Hagar, the latter fled from the abuse of her mistress and encountered an angel who prophesied that she would bear a son who would be named Ishmael. And so it was that Hagar bore the eighty-six-year-old Abraham a son.

The story then moves 13 years forward to when Abraham was 99 years old and Ishmael was 13. It was at this point that God gave the name Abraham to the patriarch and also instituted the “Covenant of Circumcision”. God also announced that Sarah would bear a son named Isaac, with whom God would establish His covenant (and not with Ishmael, the first-born son). In keeping with the “Covenant of Circumcision”, Abraham circumcised himself as well as Ishmael and every other male in his household. One day, Abraham was visited by three angels, who again announced to him the birth of Isaac and told him of the impending destruction of Sodom, where Abraham’s nephew Lot lived. As had been promised, Sarah became pregnant and gave birth to a son, whom Abraham named Isaac. At this point, Abraham was 100 years old. As before, tensions again began to rise between Hagar and Sarah.
By this time, Isaac had been weaned, and Sarah demanded of Abraham: “Get rid of that slave woman and her son, for that woman’s son will never share in the inheritance with my son Isaac.” Though Abraham was distressed, God ordered him to do as Sarah had demanded, and told Abraham not to be distraught, for He would make a nation out of “the son of the slave”. Obeying God’s command, Abraham sent Hagar and Ishmael into the desert of Beersheba, where their limited supplies quickly dwindled. Faced with the prospect of death, Hagar placed the 16-year-old Ishmael under a bush, unable to watch her son die of thirst. However, both were saved when God intervened and provided water. Ishmael lived and grew up to be an archer in the desert. He had 12 sons, lived to the age of 137, and was present with his brother Isaac when their father passed away at age 175.

Meanwhile, once Hagar and Ishmael had been sent out, Isaac’s status as God’s chosen was established. It began with a test. God commanded Abraham to sacrifice Isaac, his “only son, whom you love”, in the “region of Moriah”. Abraham complied with the command, but just before sacrificing Isaac, he was stopped by an angel. Abraham had passed the test with flying colors, and God promised him that:

“I will surely bless you and make your descendants as numerous as the stars in the sky and as the sand on the seashore. Your descendants will take possession of the cities of their enemies, and through your offspring all nations on earth will be blessed, because you have obeyed me.”

Thus, the Bible establishes that it would be through Isaac, and not Ishmael, that God would bless “all nations on earth”.

Analyzing the Story

The story of Ishmael and Isaac, as summarized above, is accepted as a historically accurate version of events by Jews and Christians. But is it really? In actuality, a careful reading of the text will reveal several flaws in the story which cannot be reconciled through reason. In this section of the article, we will see the evidence for why this story must be rejected.

According to the story, Hagar and Ishmael were exiled shortly after Isaac was weaned:

“The child grew and was weaned, and on the day Isaac was weaned Abraham held a great feast. But Sarah saw that the son whom Hagar the Egyptian had borne to Abraham was mocking, and she said to Abraham, “Get rid of that slave woman and her son, for that woman’s son will never share in the inheritance with my son Isaac.” The matter distressed Abraham greatly because it concerned his son. But God said to him, “Do not be so distressed about the boy and your slave woman. Listen to whatever Sarah tells you, because it is through Isaac that your offspring will be reckoned. I will make the son of the slave into a nation also, because he is your offspring.””

According to the Jewish commentator Rashi, weaning occurred when a child was 24 months old (i.e. 2 years old). This would mean that Ishmael would have been 16 years old at the time, as mentioned above. We know this because Abraham was 86 years old when Ishmael was born and 100 years old when Isaac was born, as stated in the summary. As such, Ishmael was old enough to be married and have his own family, and certainly old enough to be considered a man who would be expected to be caring for his mother, and not the other way around. Even in modern times, a 16-year-old is expected to bear certain responsibilities.
In fact, the Bible states that even young children and boys were called to become prophets, which is of course a great responsibility. For example, Samuel is described as a “boy” when he began preaching:

“The boy Samuel ministered before the Lord under Eli. In those days the word of the Lord was rare; there were not many visions.”

According to Josephus, Samuel was not even a teenager when he became a prophet:

“…to Samuel the prophet, who was yet a child, he openly shewed his sorrow for his sons’ destruction. […] Now when Samuel was twelve years old, he began to prophesy…”

So clearly, even a 12-year-old, though still considered a “child”, could be given heavy responsibilities. Hence, even children in ancient times were expected to show more maturity than people would expect from children of that age in the modern world. Since this is irrefutable, a contradiction arises when we read the Genesis account of Hagar and Ishmael’s exile. Since Ishmael would have been a teenager (older than Samuel was when he became a prophet) and more likely to be caring for his mother than the other way around, the Genesis account is most certainly erroneous, because it describes him as if he were an infant! It states [our comment in brackets]:

“Early the next morning Abraham took some food and a skin of water and gave them to Hagar. He set them on her shoulders and then sent her off with the boy. [Rashi states that Hagar carried Ishmael on her shoulders because he was unable to walk due to a curse placed on him by Sarah!] She went on her way and wandered in the Desert of Beersheba. When the water in the skin was gone, she put the boy under one of the bushes. Then she went off and sat down about a bowshot away, for she thought, “I cannot watch the boy die.” And as she sat there, she began to sob. God heard the boy crying, and the angel of God called to Hagar from heaven and said to her, “What is the matter, Hagar? Do not be afraid; God has heard the boy crying as he lies there. Lift the boy up and take him by the hand, for I will make him into a great nation.” Then God opened her eyes and she saw a well of water. So she went and filled the skin with water and gave the boy a drink. God was with the boy as he grew up. He lived in the desert and became an archer. While he was living in the Desert of Paran, his mother got a wife for him from Egypt.”

When reading this account, one has to question Ishmael’s actual age. He was clearly not 16 years old, because the text describes him consistently as a “boy” or “lad” who was completely dependent on his mother. This is even clearer from Genesis 21:20, which states that God was with Ishmael “as he grew up”. How could this be if he was already 16 years old and would probably have been married already under normal circumstances? He was already old enough to be considered a man. How much more “growing up” did he have to do? Even in modern times, a 16-year-old is considered old enough to work and drive a car. Why then does Genesis treat Ishmael as if he were a child…unless he really was? As we shall see later, this possibility fits in well with the Islamic version of the story. As Dr. Laurence Brown has observed:

“…Genesis 21:14-19 portrays the outcast Ishmael as a helpless infant rather than an able-bodied sixteen-year-old youth…”

The proof of Ishmael’s actual age can be seen in the Hebrew text.
The Hebrew word used to describe the 16-year-old Ishmael is “hay-ye-led” (translated by the NIV as “boy”), and it is ironically the same word used to describe the 2-year-old Isaac (but translated by the NIV as “child”)! Why is the word translated differently within the same chapter? If there is any lingering doubt as to the real meaning of the word, we should consider that it is almost exclusively used in the Bible to literally describe children, and more specifically, young children or infants. Examples of its usage in the Bible are the following passages:

“But when she could hide him no longer, she got a papyrus basket for him and coated it with tar and pitch. Then she placed the child in it and put it among the reeds along the bank of the Nile.”

“Then Naomi took the child in her arms and cared for him. The women living there said, “Naomi has a son!” And they named him Obed. He was the father of Jesse, the father of David.”

“After Nathan had gone home, the Lord struck the child that Uriah’s wife had borne to David, and he became ill.”

Another place where this word is used is Ecclesiastes 4:15, while a different form is used in verse 13. Let us see these verses:

“Better a poor but wise youth [ye-led] than an old but foolish king who no longer knows how to heed a warning. The youth may have come from prison to the kingship, or he may have been born in poverty within his kingdom. I saw that all who lived and walked under the sun followed the youth [hay-ye-led], the king’s successor.”

We can see the obvious inconsistency with which this word is translated. Nevertheless, it is clear that the word refers to a child, specifically one who is less than 13 years of age. How do we know this? In the commentary on Ecclesiastes 4:13, Rashi explains that in the Jewish tradition, any boy less than 13 years of age was considered a child, whereas anyone 13 years or older was considered a man:

“…why is it called a child? Because it does not enter man until thirteen years.”

Hence, we can see that Ishmael too must have been a child, for why else was he referred to as a “boy” (hay-ye-led)? The account in Genesis 21 is, thus, chronologically wrong, and is possible proof that later editors placed the story in the wrong section of Genesis (for obvious polemical reasons) and that this incident must have occurred much earlier than the Bible claims. It must have occurred when Ishmael was still a baby or a young child. Alternatively, it is also possible that the contradiction is the inevitable result of different versions of the story having been joined together as one long narrative. Indeed, it is the general view of Biblical scholars that the books of the Pentateuch are the result of this editorial process. The Book of Genesis, including the account of Ishmael and Isaac, is no different.

Some Jews and Christians may object to the conclusion that Ishmael was still a child when he was cast out. They may point to the fact that Genesis 21 refers to him both as “hay-ye-led” and “han-na’ar”. The latter is used in most cases in the Bible to describe a “young man”, so its usage would be appropriate when referring to Ishmael, who was 16 years old at the time, and hence an adult according to Rashi. However, this argument only raises another contradiction, since it does not change the fact that the word “hay-ye-led” mostly refers to young children. Given that Ishmael is shown to be helpless and completely dependent on his mother, it is unlikely that he would have been described as “han-na’ar”, and hence the use of that word is actually inappropriate.
He cannot be both “hay-ye-led” and “han-na’ar”, just as in English he cannot be described both as a “child” and as an “adult”.

In addition to this inconsistency, we also need to consider the incident of Abraham’s near sacrifice of Isaac, as told in the Bible, for it is partially responsible for the disagreements between the Judeo-Christian and Islamic traditions. As mentioned in the summary, Abraham was commanded by God to sacrifice Isaac, who was referred to as Abraham’s “only son”. After reading this passage, we must ask the obvious question: why did God refer to Isaac as Abraham’s “only son” when he clearly had two sons, Ishmael being the other and the elder of the two? In fact, earlier God had specifically counted Ishmael among Abraham’s “offspring” (and of course, there was no reason not to):

“I will make the son of the slave into a nation also, because he is your offspring.”

Why did God specifically refer to Ishmael as Abraham’s progeny in one place and then refer to Isaac as his “only son” in another? The Jewish commentator Rashi offered a rather bizarre extra-biblical conversation between God and Abraham to explain this contradiction:

“He [Abraham] said to Him, “I have two sons.” He [God] said to him, “Your only one.” He said to Him, “This one is the only son of his mother, and that one is the only son of his mother.” He said to him, “Whom you love.” He said to Him, “I love them both.” He said to him, “Isaac.” Now why did He not disclose this to him at the beginning? In order not to confuse him suddenly, lest his mind become distracted and bewildered, and also to endear the commandment to him and to reward him for each and every expression. — [from Sanh. 89b, Gen. Rabbah 39:9, 55:7].”

Can any rational person accept this explanation? If the intention was to avoid confusing Abraham, why would God not have simply mentioned Isaac by name, without adding the phrase “your only son, whom you love”? Clearly, this explanation makes little sense and does not reconcile the contradiction. It is no wonder, then, that Christian apologists have come up with other explanations, though they are just as absurd as the one suggested by Rashi. For example, Emir F. Caner and Ergun M. Caner make the following claim:

“First, the term ‘only’ may be in reference to your ‘beloved’ son (John 1:18, 3:16). Second, the verse is an affirmation of the inheritance intended for Isaac, the legitimate heir of Abraham, and not Ishmael, born from a concubine who thereby had no right to the promises of God. It is clear that Isaac is the one God desired to bless (Genesis 21:12).”

This is, of course, just a mindless repetition of an age-old polemical argument, and it clearly has no merit. First, to refer to the Gospel of John in the New Testament to explain the meaning of the phrase in Genesis 22 is both irrelevant and absurd. What does one book have to do with the other, even if Christians believe in both? Certainly, Jews place no importance on the Gospel of John and the concept of Jesus being the “son of God”! Second, the idea that Ishmael was not a “legitimate heir of Abraham” is refuted by the Bible itself, which clearly states that he was a legitimate son of Abraham. Why then would he not be a “legitimate heir” as well?
Moreover, Genesis 16:3 states clearly that Hagar was given to Abraham by Sarah to be his “wife”:

“So after Abram had been living in Canaan ten years, Sarai his wife took her Egyptian slave Hagar and gave her to her husband to be his wife.”

It seems the Caners, and indeed all apologists who use this argument, simply pick and choose some passages from the Bible while ignoring others. So, we must look for an alternative answer by asking a different question. What if this part of the story is also chronologically misplaced? What if this part of the story actually refers to Ishmael, who was born 14 years before Isaac? Surely, the phrase “your only son, whom you love” only makes sense if it was referring to Ishmael. This suggests that the editors of Genesis altered the story as well as its place in the Bible, and thus tried to deny Ishmael his rightful place as a legitimate son and heir of Abraham. To Jews and Christians, this may come as a shock, but given the undeniable history of the Bible’s editorial evolution, reasonable people would not be shocked at all.

In closing, a careful analysis of the Genesis account reveals irreconcilable contradictions in the text. No doubt, Jewish and Christian apologists have gone to great lengths to explain these problems, but an objective analysis can only lead to one conclusion: these inconsistencies are real and cannot be explained away by polemical gymnastics. Rather, the best explanation appears to be that the story has been edited by anonymous hands and passed off as “scripture”. It is just another sordid example of Biblical “myth-making”.

The Islamic Story

Compared to the Biblical story, the story as told by the Islamic tradition is consistent and more in line with the facts. Ishmael’s birth and his near-sacrifice are described in a beautiful passage in the Quran:

“He [Ibrahim] said: “I will go to my Lord! He will surely guide me! O my Lord! Grant me a righteous (son)! So We gave him the good news of a boy ready to suffer and forbear. Then, when (the son) reached (the age of) (serious) work with him, he said: “O my son! I see in vision that I offer thee in sacrifice: Now see what is thy view!” (The son) said: “O my father! Do as thou art commanded: thou will find me, if Allah so wills one practicing Patience and Constancy! So when they had both submitted their wills (to Allah), and he had laid him prostrate on his forehead (for sacrifice), We called out to him “O Abraham! Thou hast already fulfilled the vision!” – thus indeed do We reward those who do right. For this was obviously a trial- And We ransomed him with a momentous sacrifice: And We left (this blessing) for him among generations (to come) in later times: “Peace and salutation to Abraham!””

It is well known that these verses deal with the birth of Ishmael (peace be upon him), as the vast majority of Quranic commentators have stated. This is clearly seen from the fact that, following these verses, Allah (Glorified and Exalted be He) next mentions Isaac (peace be upon him), so verses 99-109 could not have referred to him:

“And We gave him the good news of Isaac – a prophet,- one of the Righteous. We blessed him [Ibrahim] and Isaac: but of their progeny are (some) that do right, and (some) that obviously do wrong, to their own souls.”

It should also be noted that Ishmael (peace be upon him) is described as a young man (possibly a teenager) when Ibrahim (peace be upon him) was ordered to sacrifice him, since he is described as having reached the age of “serious work”.
But what about the incident of Ishmael and Hagar’s journey into the desert after Ibrahim (peace be upon him) was ordered by Allah (Glorified and Exalted be He) to send them out? As we saw above, the Biblical story is self-contradictory. It describes Ishmael (peace be upon him) as a helpless child, yet we are supposed to believe that he was actually 16 years old. From the internal evidence, it is clear that he was indeed a very young child, possibly even an infant. According to a hadith, this is exactly what Ishmael (peace be upon him) was at the time of this incident:

“Narrated Ibn ‘Abbas: The Prophet said, “May Allah bestow His Mercy on the mother of Ishmael! Had she not hastened (to fill her water-skin with water from the Zam-zam well), Zam-zam would have been a stream flowing on the surface of the earth.” Ibn ‘Abbas further added, “(The Prophet) Abraham brought Ishmael and his mother (to Mecca) and she was suckling Ishmael and she had a water-skin with her.’”

Clearly, both the Bible and Islamic sources describe Ishmael (peace be upon him) as a helpless infant when he was sent out with his mother. The only difference is that the Biblical story is chronologically flawed and self-contradictory. Also, both the Bible and the Quran agree that Ishmael was the first-born son of Ibrahim (peace be upon them). However, the former contradicts itself by referring to Isaac (peace be upon him) as the “only son” of Abraham shortly before the incident of the sacrifice, even though he was the younger of the two sons.

Before we conclude this article, it must be emphasized that even though the Holy Quran and the Ahadith make it clear that it was Ishmael (peace be upon him) who was the son that Allah (Glorified and Exalted be He) commanded Ibrahim (peace be upon him) to sacrifice (and in all probability, Isaac was not even born yet), this does not in any way suggest that Ishmael was somehow “superior” to his younger brother. Rather, Muslims revere both sons of Ibrahim (peace be upon them all) and do not believe that Allah (Glorified and Exalted be He) discriminated against either one of them. Muslims are taught to hold all the prophets in high regard and not to prefer one over another, praise be to Allah (Glorified and Exalted be He). As a matter of fact, if it turned out that Isaac (peace be upon him) was actually the son who was to be sacrificed, it would not have made any difference to the faithful Muslim. As Professor John Kaltner of Rhodes College states:

“Both Ishmael and Isaac are esteemed equally in the Qur’an and each is held up as a model of faith for the reader, so it is inconsequential which one was almost killed by his father.”

There can be no doubt, after reading the Biblical story (in its correct context, despite the obvious editorial efforts) and the Islamic story, that Ishmael (peace be upon him) was not only a legitimate son of Ibrahim (peace be upon him) but was only an infant when he was sent out with his mother. This event would have occurred years before Isaac’s birth. It is also clear that Ishmael was the son whom God had ordered Ibrahim to sacrifice, and not Isaac (peace be upon them all), who would not have been born yet. Therefore, the Biblical version suffers from serious contradictions and can only be the result of textual tampering. However, despite these insidious attempts at altering the story, there are enough clues within the text itself that point the way to the truth.
Thus, the unavoidable conclusion is that the Biblical story should be rejected as a biased account written by fraudulent hands, and that the Islamic version is more deserving of acceptance and gives a faithful account of the story. And Allah knows best!

Notes

- Biblical scholars, including Christian scholars, are no doubt aware of these contradictions, but the majority of lay Jews and Christians are, in all likelihood, completely oblivious to them.
- Genesis 16:1-2 (New International Version).
- Genesis 16:16.
- For the story of Lot, see our article “The Biblical Story of Lot: An Analysis and Comparison with the Quranic Narrative”.
- Genesis 21:10.
- Genesis 21:13.
- Genesis 22:20.
- Genesis 25.
- Genesis 22:2.
- Genesis 22:17-18. Christians maintain that Jesus (peace be upon him) was the fulfillment of this promise, since his alleged death and resurrection was the path to salvation for all people, Jew and Gentile alike. As one Christian website puts it: “[Genesis 22:18] prophesied all nations would be blessed through one special descendant or seed of Abraham. Jews and Gentiles alike are blessed when they accept Jesus a descendant of Abraham as their Savior.”
- Genesis 21:8-13.
- 1 Samuel 3:1.
- Flavius Josephus, Antiquities of the Jews, 5:10.
- www.chabad.org/library/bible_cdo/aid/8216#showrashi=true; see verse 14. If this is true, we are supposed to believe that Hagar was forced to carry her 16-year-old son through the desert!
- Genesis 21:14-21.
- Laurence B. Brown, MisGod’ed: A Roadmap of Guidance and Misguidance Within the Abrahamic Religions (Booksurge, 2008), p. 238. Kindle Edition.
- Exodus 2:3. This verse describes the well-known story of the mother of Moses placing the infant in a basket on the Nile River. Obviously, Moses was not a teenager, as Genesis 21 would have us believe regarding Ishmael! See also Exodus 2:6, 2:9 and 2:10, all of which describe the infant Moses.
- Ruth 4:16-17.
- 2 Samuel 12:15. This verse mentions how David’s son from his adulterous relationship with Bathsheba was struck with an illness as David’s punishment for his sin. The son was clearly still an infant and not a teenager.
- The general consensus is that there are four different sources that have been joined together: the “J” source (or “Yahwist” source), the “E” source (or “Elohist” source), the “D” source (or “Deuteronomist” source) and the “P” source (or “Priestly” source). See The Collegeville Bible Commentary: Based on the New American Bible: Old Testament, edited by Dianne Bergant (Collegeville: Liturgical Press, 1992), pp. 52-62.
- Genesis 21:14.
- Genesis 21:12. In addition, the word “han-na’ar” is also used in 1 Samuel 3:1 to refer to the prophet Samuel (peace be upon him), who, as we previously mentioned, was only 12 years old when he became a prophet.
- Scholars attribute the version of Isaac’s near-sacrifice to the “Elohist” (E) source. See The Collegeville Bible Commentary, op. cit., pp. 60-61.
- Genesis 21:13.
- Emir F. Caner and Ergun M. Caner, More Than a Prophet: An Insider’s Response to Muslim Beliefs About Jesus and Christianity (Grand Rapids: Kregel Publications, 2003), pp. 96-97.
- Genesis 21:13. Even though the verse uses different words to refer to Sarah as Abraham’s “wife” (’ê-šeṯ) and Hagar as his “wife” (lə-’iš-šāh), the latter usage is clearly established to mean a legitimate wife, as seen from other verses in Genesis, such as Genesis 20:12, which describes Sarah as “lə-’iš-šāh” as well!
- On an unrelated note, the Caner brothers have been discredited as liars and frauds after even their fellow Christians pointed out the discrepancies in their “Islamic” upbringing. Ergun Caner has even been exposed by his fellow Christians for pretending to speak Arabic, when in reality he was speaking gibberish. In fact, he has even been caught completely misstating the Shahada, which is the Islamic declaration of faith, despite the fact that he claimed to have been a “devout” Muslim before his conversion to Christianity.
- It is an undeniable fact that different versions of the same story were simply brought together later on, as previously mentioned. Elsewhere in Genesis, the Biblical authors concocted the story of the incestuous origins of Israel’s great enemies, the Moabites and Ammonites. See our discussion of this in the recently updated article on the Biblical story of Lot. Thus, myth-making is a common phenomenon in the Bible.
- Surah As-Saaffat, 37:99-109 (Yusuf Ali Translation). Regarding the phrase “بِغُلَامٍ حَلِيمٍ”, which Yusuf Ali translated as “…of a boy ready to suffer and forbear”, an alternative rendering is “…of a gentle boy”, as offered by Maulana Abdul Majid Daryabadi (d. 1977) in his translation (The Glorious Qur’an: Text, Translation and Commentary (Leicester: The Islamic Foundation, 2001), p. 805). Indeed, the Arabic word حَلِيمٍ (halimin) is from the root “ح ل م”, and the word “halim” is defined by Lane’s Lexicon (see p. 632) as: “…the quality of forgiving and concealing [offences]…or moderation; gentleness; deliberateness…patience…sedateness; calmness…” Hence, Maulana Daryabadi translated the word “halimin” as “gentle” instead of “forbearing” (although both are correct). The use of the word in the Arabic is significant: given its meaning of both “gentle” and “forbearing”, it refutes the Biblical charge that Ishmael (peace be upon him) was “wild” (Genesis 16:12), as explained by Maulana Daryabadi in his commentary: “The epithet contradicts the ferocity of temperament attributed to Ishmael by the Jews and Christians” (Ibid.)
- Surah As-Saaffat, 37:112-113.
- Sahih Bukhari, Book 55, Number 582.
- John Kaltner, Ishmael Instructs Isaac: An Introduction to the Qur’an for Bible Readers (Collegeville: Liturgical Press, 1999), p. 124.
America is the only country that went from barbarism to decadence without civilisation in between.

America's history is a long one, but that's what you get when you make countries: history. Let us start, for now, at the beginning of time. The earth spun out and formed into a planet. Over time, continents and volcanoes formed. The Pacific Ocean was filled, and volcanoes lined the coasts around it, creating what is now called the Ring of Fire. Tectonic plates fought each other to form mountains. The North American plate slid into place, and though there are some discrepancies as to how human beings began to appear around the world, there were humans on almost every continent, and especially in Europe, when Christopher Columbus was dispatched to find a route to Asia by sailing west across the Atlantic. However, he bumped into North America and sent back his news to Spain. The new continent was named 'America' after Amerigo Vespucci, another explorer with a claim to the discovery of North America.

In the blink of an eye, there were settlers along the western side of the Atlantic. They encountered a number of indigenous cultures that were scattered all around the new world. The British colony of Roanoke was a disaster, but the colony of Jamestown was organised to seek gold and a passage to Asia in 1607. Jamestown ended up finding huge success in producing tobacco, which would be the main cash crop of the Southern states for many, many years to come. However, labour shortages caused some of the people there to use indentured servants as a source of free labour. When Nathaniel Bacon incited a rebellion, planters saw the potential in slavery: helpless, cheap people without arms.

In 1620, the Pilgrims, who had left England to seek religious freedom, landed their ship, the Mayflower, at Plymouth Rock and set up the Mayflower Compact, in which they agreed to live by majority rule for the general good. A large portion of the settlers died in the first winter, but the survivors, or so legend has it, celebrated the first American Thanksgiving with neighbouring native people of the Wampanoag tribe. In 1628, a larger group of Puritans who wanted religious freedom established a settlement in Salem, Massachusetts, and quite a few other settlements, such as Boston. Ships travelled across the Atlantic at a regular rate, and the colonies grew gradually. In 1630, a colonial assembly was created to share power with a governor who was appointed by Britain's king.

All the grants England had issued added up to a significantly large area with a growing population. There were 13 colonies, stretching from Maine to Georgia, in three distinct regions. New England comprised Massachusetts, New Hampshire, Rhode Island and Connecticut, with the large population centres of Providence and Boston. Farmland wasn't great, so whaling and fishing were the main sources of income. Naturally, with the large amount of fishing and whaling, shipbuilding was an important part of the economy too.

The Middle Colonies were New York, Delaware, New Jersey and Pennsylvania. Major cities included New York and Philadelphia. The Middle Colonies had better farmland than New England, so wheat became an important product.

The Southern Colonies were Virginia, Maryland, North Carolina, South Carolina and Georgia, including the ports of Baltimore, Charleston and Jamestown. In the South, plantation farming was the main industry. Tobacco was the most common export, closely followed by cotton and indigo.
In the middle of the 18th century, the colonies were made up of about 1.5 million citizens. The people were generally situated along the Atlantic coast, and some colonies didn't even bother establishing borders very far inland. People began to look back at the Virginia House of Burgesses and the Mayflower Compact, early forms of self-government. Each colony had a different type of government: some were controlled by the British King entirely, some were mostly owned by individuals, and some were left to themselves. Basically, each colony had a governor, a council and an assembly. For a law to pass, it had to be approved by the assembly, the governor and the British government. In New Hampshire, Massachusetts, New York, New Jersey, Virginia, North Carolina, South Carolina and Georgia, the governor was appointed by the King, and he in turn appointed his council; the assembly was democratically elected. In Maryland, Delaware and Pennsylvania, the owner or owners of the colony appointed the governor, who selected his council, and the assembly was elected. In Rhode Island and Connecticut, the governor, the council and the assembly were all elected by the citizens. Of course, in each colony there were some restrictions on who could be elected.

As the South grew, slavery grew as well. One of the worst chapters in the history of America is the slave trade and the continuing demand for new slaves in the South. None of the colonies were entirely blameless in this - though the South received most of the slaves, the slave trade was helped in part by New England sailors. Another dark chapter of American history is the way the country handled the native Americans - forcibly ejecting them from their land to make way for the white settlers.

Taxes and Rebellion

Several small wars were fought in the colonies: King William's War, Queen Anne's War, and a somewhat more important conflict, the French and Indian War. The rich and useful Ohio country was contested by the French and the English. In 1754, a young Virginian officer named George Washington led a group of his countrymen into the fight. After slaughtering a few too many Frenchmen, Washington set up Fort Necessity in anticipation of a French response. He was then forced to surrender the fort in the face of French aggression. The war effort under Washington was not going very well, so the English poured as many resources as possible into the war, and captured Forts Duquesne and Ticonderoga and the Canadian cities of Québec and Montréal. The war ended with the Treaty of Paris, in which the French gave up all their land east of the Mississippi except for the city of New Orleans.

After the treaty, a group of native Americans under Pontiac set out to destroy British control in the west, but their rebellion was put down. As this unfolded, the King of Britain issued the Proclamation of 1763 to end colonial expansion west of the Appalachian Mountains. Britain also set harsh taxes on the colonies to pay off the debts that had been run up in the Seven Years' War. Americans weren't happy with this, and the colonists began to dream of independence. With more taxes and unpopular decisions by the crown, rebellion grew more and more attractive to many Americans. The tight grip on the colonies was choking them: for instance, the Quartering Act forced colonists to provide their houses for the use of British soldiers. The Sugar Act and the Stamp Act raised taxes incredibly high on food imports and paper goods, respectively.
Merchants couldn't pay the Sugar Act duties and still turn a profit; people refused to allow soldiers to live in their homes; and the Stamp Act was viciously attacked by many colonists. A group calling themselves the Sons of Liberty was organised, and they kept themselves busy by attacking tax collectors. A congress of delegates from nine of the colonies was assembled regarding the Stamp Act, and they sent a protest to King George III, believing that only the colonies themselves should tax the colonists. Colonial merchants boycotted London goods, which hurt Britain. The Stamp Act was repealed, but in its place came a demoralizing law, the Declaratory Act, declaring that the colonies were subject to the authority of the British. After this, Charles Townshend became the British Chancellor of the Exchequer, and England instituted duties on certain goods going into the colonies. Many Americans either smuggled the goods in or simply refused to buy them.

Virginia and Massachusetts emerged as hotbeds of rebellion. In Boston, there was so much energy for rebellion that British troops were stationed there to keep the peace and uphold the laws. Unfortunately, Bostonians had to provide homes for the troops, and so relations were less than peaceful. On 5 March, 1770, as a group of people taunted British soldiers near the Customs House, the soldiers fired on the taunters and five people were killed. This became known as the Boston Massacre. The British government responded by repealing all the taxes except for the tax on tea - after all, they couldn't be seen as surrendering their right to tax the colonies.

One might think that with only a single tax, the colonists would leave well enough alone. But the Americans, still outraged by the Boston Massacre, began organising despite the repeal of most taxes. Samuel Adams founded the Committee of Correspondence, which was supposed to keep communications on British activities open between colonists and keep information flowing, as well as incite rebellion. From the Committee there emerged several future leaders.

In 1773, the Tea Act was passed, which threatened to destroy the profits of the tea merchants in the colonies. Naturally, the people responded. On 16 December, about 50 men of the Sons of Liberty group, disguised as Mohawks, boarded three ships in Boston Harbour and threw all the British tea in the ships into the water. Now known as the 'Boston Tea Party', it outraged the British government.

In response to the Boston Tea Party, the British passed the 'Intolerable Acts' in 1774, which closed Boston Harbour, destroyed self-government in Massachusetts and restricted the right of assembly. It was believed that these would punish and demoralise Massachusetts, but they had the opposite effect, and the colonies banded together in protest. The Virginia House of Burgesses called for each of the colonies to send representatives to form a united protest. The first Continental Congress was made up of 56 delegates from all of the colonies but Georgia. In Philadelphia, they declared the Intolerable Acts void, declared a boycott against Britain, asked residents of Massachusetts to refuse to pay taxes, organised a militia, sent protests to King George and planned another meeting. In turn, Britain sent more troops into America.

So You Say You Want A Revolution...

The British learned of an arsenal at Concord, Massachusetts and sent about 700 men to raid it. However, the colonists had planned ahead and so were able to alert the 'minutemen' guarding the area.
A light from the Old North Church alerted two men, Paul Revere and William Dawes, that the British were mobilising. They rode on horseback to alert the people and militiamen of the British advance. The British met a force about a tenth their size at Lexington, and were able to move on to Concord. The Americans sent the British back, and the minutemen in the towns along the way were able to inflict considerable damage on the British army. There were about 300 British casualties.

The Second Continental Congress was convened in Philadelphia on 10 May, 1775. It made several important decisions. George Washington, due to his experience in the French and Indian War, was appointed commander of American forces. The Congress asked Britain not to attack the colonies again and asked each of the colonies to send troops to assist those in Massachusetts. The next major military conflict was the Battle of Bunker Hill. The colonist soldiers were entrenched around Boston at Bunker Hill and Breed's Hill. The Americans lost the hills after three charges, but the British suffered heavy casualties. In July, America sent the 'Olive Branch Petition' in an attempt to reach an agreement with Britain. The colonies didn't truly want independence yet; many people just wanted the taxes repealed. The King turned the petition away, and declared that the colonies were in revolt.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, and that among these are Life, Liberty and the Pursuit of Happiness.
- The Declaration of Independence

The Second Continental Congress met again in May after the news from England, and moved to draw up a Declaration of Independence. Thomas Jefferson, Benjamin Franklin, John Adams, Robert Livingston and Roger Sherman were assigned the task of drafting the document. On 4 July, 1776, the Congress adopted the Declaration of Independence, and declared the colonies to be independent from Britain.

The American Revolution carried on. Ethan Allen and his Green Mountain Boys took Fort Ticonderoga, and the British left Boston to move up into Canada. General Howe of the British took New York City. However, Washington scored victories at Trenton and Princeton after crossing the Delaware River. The British decided to bring all their forces together to control the Hudson River Valley, but this didn't quite work. General Burgoyne faced the American general Horatio Gates at Saratoga, and Burgoyne surrendered - which ended up becoming the turning point of the Revolution. Soon after, the French signed a treaty with the Americans to support their cause.

In 1781, Lord Cornwallis moved through the South to attack Virginia. The only forces to oppose him were under the French officer the Marquis de Lafayette, who managed to delay Cornwallis. Washington and the French general Rochambeau moved into Virginia, and the French navy took control of the Chesapeake Bay, trapping Cornwallis. He was completely surrounded, and had to surrender at Yorktown. A few insignificant battles were fought after this, but the war was basically over and America was independent. The Treaty of Paris was signed between America and Britain in 1783, and the new country not only gained its independence but also rights to the Ohio River Valley and fishing rights in Canadian waters.
We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.
- The Preamble to the US Constitution

A revolution isn't complete until a new government is established, and the first one was based around the Articles of Confederation. The Articles gave most of the power to the states, and the national Congress's power was completely insignificant. They proved to be flawed, rebellion broke out and a new government had to be created.

In May 1787, 55 delegates met in Philadelphia. The greatest political and war leaders were present. James Madison was an avid notetaker and full of ideas. George Washington was President of the convention. Benjamin Franklin was the most senior among them - into his eighties by this time. Alexander Hamilton, the foremost immigrant, was also there. The most notable absence was Thomas Jefferson, the author of the Declaration of Independence, who was serving as minister to France.

The meetings were long and hot. They had to be conducted in secret, and the door was shut. Tension often ran high, and passion higher. Whenever the debate became too heated, Ben Franklin, thought to be the wisest as he was the eldest, would calm the delegates with a funny anecdote. The presence of George Washington, the living legend, added credibility and dignity to the convention. They slowly wrote out a new document detailing the future government of the United States.

James Madison contributed many of the ideas, but there were some difficult issues. States with large populations would prefer a Congress based on population - with more representatives from states with more people. Smaller states didn't want to be forgotten, so they wanted equal representation for each state. A great compromise was worked out, whereby there would be two houses of Congress: one with representatives determined by the population of the state and one where each state sent two representatives. Once this was agreed upon, another issue arose. Since each state wanted more representation than everyone else, and slaves made up a sizeable portion of the Southern population, the slave states wanted slaves to count in their population quota to determine how many representatives they received. It was decided that for every five slaves, three would count towards representation - now known as the infamous Three-Fifths Compromise.

Many other elements of the American government were decided upon, and they remained flexible because an amendment process was built into the fundamental workings of the government. In fact, this is just how the United States Constitution came into being. Each state eventually agreed to the Constitution, with most of the greatest leaders of the time supporting it. It was ratified once it was promised that it would be amended to contain a Bill of Rights.

The first president was elected in 1789, and it was George Washington, with John Adams serving as vice president. Washington served for two terms, and the president after him was John Adams. In his farewell address, Washington called on America to avoid permanent foreign alliances and political parties.
During the presidency of John Adams, the Democratic-Republicans (known as the Republicans for short, but eventually becoming what are now the modern Democrats) grew in power. In order to keep them from overtaking the Federalists (the main opposing party), the infamous Alien and Sedition Acts were passed. These made it harder for immigrants (who would favour the Republicans) to vote, and made it illegal to speak ill of the government.

In the election of 1800, Thomas Jefferson, on the Republican ticket, became president. This was the first peaceful handover from one party to the other in American history. During Jefferson's presidency, the Louisiana Territory was bought from France at a bargain price, and Lewis and Clark explored America to the Pacific. Britain and France, at war with one another, fought over the ports of America and kept the new country from trading freely. Jefferson got tired of the continuing conflict, and passed the Embargo Act of 1807, which made it illegal for American ships to trade in foreign ports. Smuggling made the Act somewhat ineffective, but trade soon resumed with all countries except Britain and France.

In 1809, James Madison assumed the presidency, and he was rather annoyed with Britain. The British had been forcing American sailors into their navy, encouraging Indian resistance in the west, and preventing trade with other countries. America declared war on Britain on 18 June, 1812.

The War of 1812

The War of 1812 was probably an ill-advised conflict. The country wasn't nearly strong enough to win a war with Britain, and once Britain temporarily concluded its war with Napoleon Bonaparte in 1814 it was able to direct all its resources against America. The British destroyed the new capital city of Washington DC. Legend has it that the first lady, Dolley Madison, herself saved many of the important documents as the British burned the White House. The most important American victory in the war was the Battle of New Orleans, in which US troops under General Andrew Jackson defended New Orleans and suffered far fewer casualties than the British. Remarkably, the battle was fought after the Treaty of Ghent had ended the war, so it need not have been fought at all. The end of the war stopped a number of New England states (where the war was unpopular) from seceding.

Following the War of 1812, there was a general feeling of unity in the country. The Federalist party lost power, and the Republicans won the Presidency in 1816 and 1820 with James Monroe. His time as president is known as the 'Era of Good Feelings'. The country grew during this time. High tariffs were levied on British goods, internal improvements were made within the country and a national bank was set up to handle the country's money and issue national currency. Industry expanded, and in 1819 so did the country: in the Adams-Onis treaty, America gained Florida for five million dollars. Land from Canada was gained, the border with Canada was set, and a temporary solution to the dispute over the Oregon country was created. In 1823, James Monroe, on the advice of his Secretary of State John Quincy Adams, declared that the Americas were closed to further colonisation, an edict which became known as the Monroe Doctrine.

In 1824, John Quincy Adams was elected President, following one of the messiest elections ever. Andrew Jackson won the most votes, but lost the presidency. When there was no majority in the electoral college, the vote went to the House of Representatives.
Henry Clay, the most powerful man in the House, convinced many people to vote for John Quincy Adams, and he was elected. Clay was appointed Secretary of State, and Jackson declared that there had been a 'corrupt bargain'. Adams was unpopular, and unable to get anything done during his time in office.

With this election, a regional rift was widened. The South, North and West were all very different places culturally. Eventually, the North and South were each trying to gain an advantage in power over the other. The Missouri Compromise established a tradition of admitting a free and a slave state into the Union at the same time, in order to preserve the balance of power so that neither side had an advantage.

The Jacksonian Democracy

In 1828, Andrew Jackson was elected president. He was originally a common man and was also a war hero, so he won in a landslide. He instituted a spoils system, whereby people who supported him were appointed to government jobs. He also faced the Nullification Crisis - proponents of states' rights believed that a state had the ability to declare a law null and void. War nearly began in South Carolina, but Henry Clay, known as the 'Great Compromiser', pushed a successful compromise through Congress.

Jackson left office, to be succeeded by his vice president Martin Van Buren. Van Buren had a problem, as Jackson had decentralized the currency of the US into dozens of small banks, and Van Buren was blamed for the ensuing economic problems. After this, the Whig Party was born. Van Buren was defeated for reelection by William Henry Harrison, a war hero, who died about a month into office from pneumonia, which he caught during his lengthy inaugural address. John Tyler, Harrison's vice president, took office.

Our manifest destiny is to overspread the continent allotted by Providence for the free development of our yearly multiplying millions.
- John L O'Sullivan, 1845

As America became a bit more tightly packed, its citizens looked to the Pacific. America shared rights to the Oregon country with Great Britain, but there were many more Americans than British in the area by the 1830s. America gained most of the territory up to the 49th parallel in 1846.

As in Oregon, there were many Americans in the Mexican region of Texas. They were becoming increasingly unhappy with Mexican rule, and began to move for independence. They won their war of independence - notable for the famous Battle of the Alamo, where a small group of people held off a huge army. Texas became the Lone Star Republic, before it was annexed into the United States in 1845. Mexico broke off diplomatic relations with the US in response. The Mexican-American War would begin with the American claim that the Rio Grande was the southern border of Texas.

In California, there were about 700 Americans by 1845. After James Polk was elected, he told the Americans there to rebel against Mexican rule. They managed to gain their independence, and raised a flag with a bear on it, so this was called the 'Bear Flag Revolt'. As this happened during the war with Mexico, the US declared California to be an American territory in 1846. Mexican troops were driven out of California.

In the Mexican-American War, America hoped to gain the Mexican lands between Texas and the Pacific. General Zachary Taylor won the war for America, and the nation gained the land that would eventually become California, Utah, Nevada, Arizona and parts of Wyoming, Colorado and New Mexico.
With the Gadsden Purchase of 1853, what we now know as the Continental United States was complete.

Meanwhile, tension was growing between pro-slavery and anti-slavery people. More and more people opposed slavery as time went on, and the Southerners who depended on slavery for their economic advantage felt more and more threatened. In 1849, Zachary Taylor became President. Henry Clay proposed a compromise that would satisfy both sides on several key issues and put off secession. He proposed that the slave trade be abolished in Washington, DC and that California be admitted into the Union as a free state; that the land from Mexico be divided into two territories which would decide for themselves if they wanted to be free or slave states; and that the Fugitive Slave Law be passed, requiring people to help return escaped slaves to their owners. Intense debate followed, but Clay's compromise managed to pass. He had singlehandedly delayed the American Civil War for ten years.

In 1852, Franklin Pierce was elected President. In 1854, Stephen A Douglas successfully worked to pass the Kansas-Nebraska Act, which repealed the Missouri Compromise and allowed each territory to decide for itself whether it would allow slavery or not. This sparked the beginning of the Republican Party, a group dedicated to stopping the expansion of slavery. The Whig party didn't have a set policy on slavery, and it didn't have the ability to make compromises after the death of Henry Clay in 1852, so it was destined to die out. The Republicans fielded John C Fremont as their choice for president, and he did pretty well for a new party's candidate, but James Buchanan won the election. In the famed Dred Scott Decision, the Supreme Court decided in 1857 that a slave did not have the constitutional right to sue for his freedom. This mobilized the Republicans more than ever.

The Civil War

Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it.
- Abraham Lincoln

In 1860, the Republicans nominated Abraham Lincoln for president. The Democrats were unable to present a united front against Lincoln: the northern Democrats nominated Stephen Douglas, while the southern Democrats nominated John C Breckinridge. Lincoln won the electoral vote, despite winning only 40% of the popular vote. The South was afraid that if Lincoln was allowed to remain president, he would do away with slavery. In defiance, the Southern states seceded from the Union and then attacked an American military fort in South Carolina when its garrison resisted. The Civil War was fought, with more Americans dying in it than in any other war in the country's history. In the end, Lincoln and the North managed to win, and the country was united again - with the bonus of having slavery abolished.

Lincoln was shot on 14 April, 1865 by John Wilkes Booth, and died the following morning. He was succeeded by Andrew Johnson, who was largely unable to continue the post-war Reconstruction policies of Lincoln, and was the first president to be impeached. Eventually, each of the seceded states was allowed to return to the Union, and America became prosperous again.

You have undertaken to cheat me. I won't sue you, for the law is too slow. I'll ruin you.
- Cornelius Vanderbilt

After the Civil War, a huge surge of industrial growth occurred. Huge companies were formed, and incredibly rich men such as John D Rockefeller and Andrew Carnegie held monopolies on entire industries.
Many men made their wealth in railroads, which revolutionized industry in the latter part of the 1800s. Important inventions were made during the era, such as the telegraph, the telephone and the lightbulb. Railroads were laid more quickly than one could imagine, and automobiles and even aeroplanes were constructed. While huge profits made philanthropists out of the Rockefellers and Carnegies, the middle and lower classes were suffering. As a result, unions were formed to improve labour conditions. Cities grew as industry became more important, though the government became more corrupt and reform was pushed.

Spain ruled the island of Cuba at this time, treating the native Cubans badly. William McKinley sent the US Navy to Cuba to protect American property and citizens. On 15 February, 1898, the USS Maine exploded in Havana harbour, killing 266 Americans. The American public believed Spain was behind the explosion, and America declared war on Spain to ensure Cuban independence. The Navy used its great power to block off the island from Spain, and sent 17,000 soldiers to control the island - including the famous 'Rough Riders' led by Theodore Roosevelt. A peace treaty was signed on 10 December, 1898 in Paris. Spain gave up Cuba, Puerto Rico and Guam and sold the Philippine Islands to the US.

The World Wars

World War I

World War I began in Europe well before America joined the Allied effort. President Woodrow Wilson kept the country neutral, but the sinking of the ocean liner Lusitania on 7 May, 1915 by a German U-boat made Americans very angry. Germany said that it would stop sinking passenger boats without warning. However, Wilson was forced into war when Germany revoked its promise and the Zimmermann Telegram was discovered - a message asking for Mexico's support in a war against America. The United States was forced to join the Allies.

Within a quarter of a year, more than one million people in America joined the army. Huge amounts of supplies would be used in the war, and many people volunteered. Their patriotism helped Americans comply with rationing, drafts and war bonds. Women filled the jobs left behind as men rushed into combat. Of course, the Allies eventually prevailed over the Central Powers, and America helped rebuild the European nations hurt by the war.

After World War I, feminism swept the country and the suffrage movement gained momentum. Americans became more distrustful of foreigners, and immigration was slowed. They were also scared of communism and its influence on American society. Calvin Coolidge took office as President in 1923, and encouraged business by raising tariffs, lowering taxes and not enforcing monopoly and antitrust laws. This resulted in a period of great prosperity in the 1920s and a boom of industry.

The Great Depression

On 29 October, 1929, 16 million shares of stock were put up for sale, but there was no one to buy them. The market crashed. More than a thousand banks failed, thousands of businesses failed, and industry and agriculture were producing half their usual revenue. There were 12 million unemployed Americans. The President at the time, Herbert Hoover, refused to intervene in the economy, as he thought that it wasn't the government's place. Villages of shanties set up by people without homes came to be called 'Hoovervilles'.

In the election of 1932, Franklin D Roosevelt was elected President in a landslide, and he began instituting a 'New Deal'. Roosevelt began huge projects and made several regulations in an attempt to recover from the Depression.
He tried to increase home ownership and bring banking back to prosperity. One important item on Roosevelt's agenda was the Social Security Act. Meanwhile, World War II was raging in Europe, and America wanted to stay out of the conflict. Neutrality dissolved, however, as America sent several ships to Britain in return for some bases near America. It also lifted an arms embargo. Roosevelt said that the US had to be an arsenal of democracy and fight the German Nazis... if indirectly at first. He sent even more weapons to Britain as part of the Lend-Lease Act. On 7 December, 1941, however, Pearl Harbor, Hawaii was attacked by Japanese naval and air forces. This propelled America to war, and the country declared war on the Axis Powers. America notoriously sent some 110,000 people of Japanese descent to relocation camps away from the Pacific Coast.

World War II

Yesterday, December 7, 1941 - a date which will live in infamy - the United States of America was suddenly and deliberately attacked by naval and air forces of the Empire of Japan. - Franklin Delano Roosevelt, asking Congress to declare war on Japan

The country rose to meet its enemies. It had to fight on two fronts: against the Japanese in the Pacific and against the Axis Powers in Europe. The most important attack for the United States was D-Day, when US, British and Canadian troops began the liberation of Europe. After many long years of battle, Germany surrendered on 8 May, 1945. Japan did not submit, though, and it was known that a large-scale invasion of Japan would be costly in lives, so two atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki: a decision still controversial today. Japan surrendered on 14 August. All the production and employment in World War II pulled America together, and the world generally recovered. Peace was achieved quickly, and the armies soon came back to America. Franklin Roosevelt had died during the course of the war, and Harry Truman took office. He instituted the Truman Doctrine, which pledged American support for free peoples resisting subjugation.

The Cold War

As the Second World War ended, another conflict began. The Cold War between the US and the Soviet Union would occupy the minds of Americans for many decades. As the Soviet Union and the US both occupied sectors of Berlin, they confronted each other there often. Just like Berlin, the country of Korea was divided into sectors following World War II. Neither side would agree to reunite these halves of Korea, and they stayed divided. North Korea sent its soldiers into South Korea and America responded, because it felt that another communist country in the region would be a very bad idea. A difficult war ensued, and it demonstrated how the country would fight against the growth of communism. Back in America, many were afraid of communists in the government. Senator Joseph McCarthy charged that there were communists in high positions, while the House Un-American Activities Committee grilled witnesses with the famous question 'Are you now or have you ever been a member of the Communist Party?' Julius and Ethel Rosenberg were executed in 1953 for allegedly passing atomic bomb secrets to the Soviet Union. Dwight D Eisenhower, a respected and heroic figure, was elected President in 1952, presiding over a period of conservatism and anti-communist feeling. The time was extremely prosperous. It was also under Eisenhower that the Soviet Union and America began an arms race.

Ask not what your country can do for you, ask what you can do for your country.
- John F Kennedy's Inaugural Address

John F Kennedy was elected President in 1960, and helped bring a time of hope and prosperity. He was young and charismatic, and he inspired Americans. The Cuban Missile Crisis was the closest that the country ever came to nuclear war with the Soviet Union. Kennedy proved to be a strong leader and dreamed of America landing on the moon by the end of the decade. He was assassinated on 22 November, 1963 in Dallas, Texas. Lyndon Johnson was his successor. Johnson pushed a 'Great Society' domestic plan, but escalated the infamous Vietnam War. Civil rights were important in the 1960s, with great speakers like Martin Luther King, Jr stirring peaceful protest and civil disobedience. Johnson, an expert political craftsman, was able to push the Civil Rights Act of 1964 through Congress and make it law. Johnson was also chief executive during the height of the Vietnam War. Enormous numbers of troops were used to fight communism in the Asian country. The war was largely unpopular, especially with younger people such as university students. Richard M Nixon was elected to the high office in 1968, and attempted to lower troop levels in Vietnam, but his move into Cambodia and Laos was unpopular. During his time in office, Americans made it to the moon in 1969. He resigned from office in disgrace following the huge political scandal of Watergate. His elected vice president, Spiro Agnew, having already resigned, it was Gerald Ford who took the office. Ford lost the 1976 election, and Jimmy Carter took office in 1977. Carter's presidency was noted mainly for the Iran hostage crisis, but also for the many liberal reforms that he undertook. In 1980, former California governor and actor Ronald Reagan was elected to the presidency. His 'Reaganomics', the winding down of the Cold War, the Iran-Contra Affair and his unique politics defined an era.

The Modern Era

Reagan's vice president George HW Bush (George Bush Snr) was elected President after Reagan's second term ended; a recession hit America during his term. Bush led the US and the former USSR in dismantling nuclear weapons. He was popular for his successes in the Gulf War, but was unable to make the economy pick up, so he lost to Bill Clinton in the election of 1992. Clinton had several important accomplishments - the economy did improve, he mediated between the Palestinians and Israelis, and America entered the North American Free Trade Agreement (NAFTA). Clinton was reelected in 1996, and his second term is remembered for the Whitewater 'scandal' and the Monica Lewinsky affair. In 2000, George W Bush was elected in one of the most contested election battles in history. On September 11, 2001, the World Trade Center's famous Twin Towers were destroyed when two hijacked aeroplanes were flown into them, and a third plane struck the Pentagon outside Washington DC. The disaster killed around 3,000 people, prompting a 'war on terror'. Bush's eight-year Presidency was generally judged to be a failure, and contemporary historians who were polled consistently rated his Presidency as one of America's worst. His successor was a somewhat obscure African-American Senator from Illinois (the first non-white man to hold the office) named Barack Obama, elected in a landslide in 2008. History awaits the verdict on the Obama era.

The future ain't what it used to be. - Yogi Berra
The seed of Moringa oleifera (MO) is a well-known coagulant used in water and wastewater treatment, especially in developing countries. The main mechanism of MO seed extract in coagulation is its positive protein component, which acts by charge neutralization. The method used to extract MO seed is therefore very important for obtaining high coagulation activity. In this study, the effects of extraction mixing speed and extraction time on the coagulation activity of MO were evaluated using a distilled water extraction method. Although the stirring speed used for extraction did not affect the coagulation efficiency, the extraction time strongly affected the coagulation efficiency of the extract. To evaluate how the characteristics of the MO extract change with extraction time, the charge of the extract and the characteristics of its protein content were analysed. At shorter extraction times, a more positive charge and a higher protein content were observed. For detailed protein analysis, fluorescence spectroscopy (excitation–emission matrix, EEM, analysis) was performed. The tyrosine-like peak increased at longer extraction times. For efficient extraction of MO seed, a short extraction time is strongly recommended.

Developing countries are facing water safety problems due to the lack of affordable water treatment technology. To ensure availability and sustainable management of water and sanitation for everyone, the sustainable development goals developed by the United Nations (UN) include clean water and sanitation in developing countries. As one of the key water purification processes, coagulation using natural coagulants is an easy-to-use and cost-effective technology for developing countries. Some studies on natural coagulants have been carried out, and various natural coagulants have been produced or extracted from microorganisms, animals or plants (Okuda et al. 1999). Because the natural coagulant method relies on local materials and labour, renewable resources and food-grade plant materials, it supports the goal of sustainable water treatment (Miller et al. 2008). A water-soluble extract of the dry seeds of Moringa oleifera (MO) is a well-known natural coagulant used in developing countries (Ndabigengesere et al. 1995). MO is a very widespread species and grows quickly at low altitudes throughout the tropical belt, including arid zones (Morton 1991). Using this natural coagulant could help developing countries to alleviate their economic situation and allow further extension of water supply to rural areas (Ndabigengesere & Narasiah 1998). Various laboratory studies have so far shown that MO seeds possess effective coagulation properties (Ndabigengesere et al. 1995; Ndabigengesere & Narasiah 1998; Kwaambwa & Maikokera 2007; Al-Anizi et al. 2014). Coagulation efficiency has been evaluated using various indicators such as turbidity, pH, colour and Escherichia coli. According to previous research, turbidity removal is influenced by the initial turbidity of the raw water (Nkurunziza et al. 2009). Colour removal shows the same trend as turbidity removal, and E. coli removal is also associated with turbidity removal, because the main mechanism of E. coli removal is precipitation during the coagulation process. The active agents of coagulation by MO are dimeric cationic proteins with a molecular weight of approximately 13 kDa and an isoelectric point between 10 and 11 (Ndabigengesere et al. 1995).
In addition, another active component was purified from MO seeds by aqueous salt extraction: it was not a protein, polysaccharide or lipid, but an organic polyelectrolyte with a molecular weight of about 3.0 kDa (Okuda et al. 2001). The seed extract works by adsorption onto colloids and subsequent charge neutralization of the resulting compound, allowing for effective precipitation out of solution (Ndabigengesere et al. 1995; Miller et al. 2008). The crushed seed powder, when mixed with water, yields water-soluble proteins that possess a net positive charge, and the solution acts as a natural cationic polyelectrolyte during treatment (Sutherland et al. 1990). These water-soluble proteins have been proposed to bind to the predominantly negatively charged particles (silt, clay, bacteria, etc., suspended in colloidal form) that make raw waters turbid (Kwaambwa & Maikokera 2007). The surface activity and fluorescence of the protein component in MO extract have also been studied (Kwaambwa & Maikokera 2007; Maikokera & Kwaambwa 2007). The fluorescence of proteins originates from tryptophan, tyrosine and phenylalanine residues. In aqueous media, the emission peaks of phenylalanine, tyrosine and tryptophan occur at 280, 305 and 348 nm, respectively. The emission of proteins is dominated by tryptophan, which absorbs at the longest wavelength.

The toxicity of MO seed has been studied (Ndabigengesere et al. 1995; Al-Anizi et al. 2014). Although the MO seed is well known as a non-toxic and biodegradable coagulant (Ndabigengesere et al. 1995), the cytotoxicity and genotoxicity of MO were recently studied using an Acinetobacter bioreporter (Al-Anizi et al. 2014). The powdered MO seed showed significant cytotoxic effects at concentrations from 1 to 50 mg/L. The insoluble fatty-acid components of the MO seed contributed most of the cytotoxicity, while the limited dissolution of MO seed granules was the dominant source of genotoxicity. Based on these results, more research is required, for example on toxicity to humans and on extraction methods that reduce toxicity. The storage conditions and performance of MO extract have been studied previously (Garcia-Fayos et al. 2016). MO extract stored at room temperature generally loses coagulation activity because its coagulant protein content decreases. To preserve the coagulant protein, the MO extract should be stored at 4 to −18°C. Sodium chloride solution has been used to improve the extraction efficiency of MO (Okuda et al. 1999): MO extracted in sodium chloride solution was found to have 7.4 times higher coagulation efficiency than extract prepared with distilled water. Because salt promotes protein–protein dissociation, protein solubility for coagulation could increase as the ionic strength of the salt solution increases. However, there is a lack of studies on extraction conditions, such as stirring speed (revolutions per minute, rpm) and extraction time, which could affect coagulation capacity. In the present study, we tested the relationship between MO seed extract and extraction conditions. The aims of this study were: (1) to evaluate the effects of extraction stirring speed and extraction time on MO extract as a coagulant, (2) to characterize the MO extract depending on the extraction conditions and (3) to suggest the optimum MO extraction conditions for use as a coagulant.

MATERIALS AND METHODS

MO seed extraction

The MO seeds used in this study were obtained from India. The MO seeds were stored with winged seed covers in our laboratory at room temperature.
Prior to use, the winged seed cover was removed and the kernel was ground to a fine powder with a mortar and pestle, then sieved using a 1.18 mm sieve. The active coagulating agents were then extracted from the powder using distilled water. A concentration of 5% (5 g/100 mL) was used based on previous research (Ndabigengesere & Narasiah 1998). In this study, we focused on stirring speed and extraction time to find the optimal extraction conditions. The MO suspension was stirred with a magnetic stirrer at speeds from 100 to 800 rpm. The extraction time was varied from 1 to 120 min. The suspension was then filtered, first through a 10 μm nylon filter and then through a 0.45 μm membrane. The prepared MO seed extract was stored for 2 weeks at room temperature before the coagulation test.

Preparation of turbid water

Turbid water for coagulation tests was prepared by adding kaolin to wastewater from Yonsei University in Wonju, Korea. After adding kaolin (Ducksan, Korea) to distilled water, the suspension was stirred for 30 min to achieve uniform dispersion of the kaolin particles, and then allowed to settle for 24 h for complete hydration of the particles. The supernatant was carefully collected and mixed with the wastewater. Table 1 shows the characteristics of the synthetic turbid water.

| Parameter | Value |
| Turbidity | 325 ± 5 NTU |
| pH | 7.7 |
| Alkalinity | 192 mg/L as CaCO3 |

The jar test has been widely used to evaluate coagulation efficiency (Hudson 1981; Ndabigengesere et al. 1995). Glass beakers, each containing 1 L of turbid water, were placed in the slots of a jar tester. This study consisted of batch experiments including rapid mixing, slow mixing and sedimentation. The MO seed extract was added to test beakers at various doses and agitated at 100 rpm for 2 min for rapid mixing. The mixing speed was then reduced to 40 rpm for 30 min. After sedimentation for 30 min, an aliquot of 10 mL was sampled from the mid-depth of the beaker and the residual turbidity was determined. The same coagulation test was conducted with no coagulant as a control. Turbidity was measured using a turbidimeter (2100QIS01, HACH, USA) and pH was determined using a pH meter (Thermo Fisher, USA). UV absorbance was measured in a Cary 50 spectrophotometer (Varian, USA). E. coli was enumerated on desoxycholate agar (BD Co., USA), a selective medium for E. coli. To measure dissolved organic carbon (DOC), the collected sample was passed through a 0.45 μm filter and diluted 5-fold; the prepared sample was analysed using a total organic carbon analyser (TOC-V CPH/CPN, Shimadzu). Zeta potential was measured with a particle electrophoresis apparatus (Photal Otsuka Electronics, ELSZ-1000, Japan) after 5-fold dilution. Protein was measured by the Bradford method (Thermo Fisher, USA) (Bradford 1976), a colorimetric protein assay based on an absorbance shift from red to blue of the dye Coomassie Brilliant Blue G-250 (Bradford reagent). To measure the spectrum of the MO extract, the sample was passed through a 0.45 μm filter and diluted 20-fold; the spectrum was recorded from 200 nm to 800 nm.
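The efficiency bookkeeping used throughout these jar tests is simple percent removal relative to the initial turbidity. The short Python sketch below illustrates the calculation; it is not code from the study, and the residual-turbidity readings in it are invented placeholders (only the 325 NTU initial value comes from Table 1).

```python
# Minimal sketch of jar-test turbidity-removal bookkeeping.
# Residual readings below are hypothetical, for illustration only.

INITIAL_TURBIDITY_NTU = 325.0  # synthetic turbid water (Table 1)

def removal_efficiency(initial_ntu: float, residual_ntu: float) -> float:
    """Percent turbidity removal after rapid mix, slow mix and settling."""
    return 100.0 * (initial_ntu - residual_ntu) / initial_ntu

# Hypothetical residual turbidities for three extract doses plus a no-coagulant control
residuals_ntu = {"control": 263.0, "1 mL/L": 160.0, "5 mL/L": 25.0, "10 mL/L": 16.0}

for dose, ntu in residuals_ntu.items():
    print(f"{dose:>8}: {removal_efficiency(INITIAL_TURBIDITY_NTU, ntu):5.1f}% removal")
```

The same arithmetic applies to the control beaker, which is why the Results can quote a removal figure even when no coagulant is added.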
To analyse the characteristics of the organic matter in the MO extract, fluorescence excitation–emission matrices (EEM) were measured by scanning over an excitation range of 240–440 nm in 10-nm increments and an emission range of 290–530 nm in 10-nm increments using an LS-55 luminescence spectrometer (PerkinElmer, USA). The excitation and emission bandwidths were 5 nm each and the scanning speed was set at 1,000 nm min−1.

RESULTS AND DISCUSSION

Effect of extraction conditions on MO extract as a coagulant

The MO seed was ground and prepared as a powder for further use. To focus on the effect of extraction conditions, distilled water was used for extraction of the MO seed. Solvents and other additives were not used, as they would not be easy to apply in rural communities in developing countries. There are two important parameters for MO seed extraction: the stirring speed (rpm) and the extraction time.

Mixing speed for MO seed extraction

Mixing speed determines the energy consumption of the extraction procedure, and no optimum mixing speed for MO extraction has been established. The effect of mixing speed was therefore evaluated before the extraction time experiment. In previous research, the extraction time was generally 30 min (Ndabigengesere et al. 1995; Ndabigengesere & Narasiah 1998; Baptista et al. 2015; Petersen et al. 2016), so the extraction time was fixed at 30 min in the mixing speed test. The amount of MO seed powder was also fixed at 5 g in 100 mL of distilled water. The MO seed was extracted at stirring speeds of 100, 200, 400, 600 and 800 rpm. To evaluate the coagulation efficiency of MO seed extract at the various extraction mixing speeds, synthetic wastewater of 327 NTU was prepared using public wastewater and kaolin (Ducksan, Korea). A standard jar test was performed by injecting 10 mL of the MO extract into 1 L of synthetic wastewater. Figure 1 shows turbidity removal efficiency by extraction mixing speed. With no injection of MO seed extract, turbidity removal efficiency was only 10%. However, regardless of extraction speed, an average of 95% turbidity removal was achieved in all cases where MO seed extract was injected; i.e., stirring speed did not affect turbidity removal. To minimize energy consumption, 100 rpm was therefore used to evaluate the effect of extraction time on MO coagulation efficiency. The MO seed extract was prepared using distilled water with extraction times varied from 1 min to 120 min; the stirrer speed was fixed at 100 rpm. For the coagulation test, the prepared MO extract was injected into 1 L of turbid water of 325 ± 5 NTU turbidity. The dose of MO extract solution (volume/volume) was 1, 5 or 10 mL/L. To evaluate coagulation activity, the turbidity was measured after the coagulation test. Figure 2 shows turbidity removal efficiency using MO seed extracts of various extraction times as a function of extract dose. In the control test (no injection of MO seed extract), about 19% turbidity removal by natural flocculation was found in the raw turbid water. The results showed that turbidity removal efficiency increased with increasing MO seed extract dose (1 mL/L < 5 mL/L < 10 mL/L). No optimum dose of MO extract was observed within the dose range of 1 mL/L to 10 mL/L. Regarding the effect of extraction time, it is important to note that turbidity removal efficiency, surprisingly, decreased with increasing seed extraction time at all doses of seed extract.
When the MO seed extract dose was 1 mL/L, extracts with over 30 min extraction time showed the same turbidity removal efficiency as the control (no injection of MO extract), while extraction times under 10 min gave about 50% turbidity removal. With 5 mL/L and 10 mL/L of MO extract at 1 min extraction time, the turbidity removal efficiency was over 90%, similar to the removal efficiency at 5 and 10 min extraction times. However, the turbidity removal efficiency decreased as the extraction time increased from 30 min to 120 min. Thus, under these experimental conditions, 5 mL/L of MO seed extract with 1 min extraction time could be suggested as the optimum. After coagulation, the effect of MO seed extract on pH and alkalinity was also evaluated alongside turbidity removal efficiency. The pH and alkalinity of the initial turbid water were pH 7.7 and 192 mg/L as CaCO3, respectively. After coagulation, pH and alkalinity were unchanged, as shown in previous research (Ndabigengesere & Narasiah 1998).

Characteristics of MO extract by extraction time

The previous tests showed that the extraction time of the MO seed strongly affects coagulation efficiency. The change in extract characteristics with extraction time was therefore evaluated in terms of zeta potential, protein type and concentration, and UV absorption. A MO seed extract dose of 5 mL/L was selected based on the turbidity removal tests. Charge neutralization is the main coagulation mechanism of MO seed extract (Ndabigengesere et al. 1995; Miller et al. 2008). Most particulates in natural waters are negatively charged in the natural pH range (pH 6 to 8); the turbid water in this study was negatively charged at −9.38 ± 0.87 mV. When a positively charged coagulant is added, the repulsive charges are neutralized (brought close to zero), and van der Waals forces cause the particles to agglomerate and settle (Snodgrass et al. 1984; Jarvis et al. 2006; Li et al. 2006). The charge of the MO seed extract was measured by zeta potential (Figure 4). All the MO seed extracts were positively charged. As the extraction time increased, the charge of the MO seed extract decreased. It is very interesting to note that the decrease in positive charge followed the same trend as turbidity removal efficiency. Since the positive charge of the MO extract largely governs coagulation, a short extraction time is recommended for preparing MO extract.

Protein concentration and components

Regarding the active component of MO extract in coagulation, researchers have mainly suggested cationic proteins acting through charge neutralization via the isoelectric point, and adsorption (Ndabigengesere et al. 1995; Ghebremichael et al. 2005; Miller et al. 2008; Joseane et al. 2013). To evaluate the change in protein with extraction time and its effect on coagulation efficiency, the protein concentration and the main protein components of the MO seed extract were measured and compared with coagulation efficiency as a function of extraction time. The Bradford method was used to measure protein concentration. Figure 5(a) shows the protein concentration of MO seed extract at extraction times from 1 min to 120 min. With increasing extraction time, the protein concentration decreased from 5 to 0.5 mg/mL, which shows the same trend as the turbidity removal efficiency (shown as red triangles). This is consistent with the protein being the main agent of coagulation by MO seed extract.
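Since the Bradford assay underpins the protein trend just described, a compact sketch of how such an assay is typically reduced to concentrations may help. This is a generic illustration, not the study's procedure: the standard concentrations and absorbances below are assumed values, and a linear standard curve is assumed over the working range.

```python
# Sketch of a Bradford-style calibration; all standards are assumed values.
import numpy as np

std_conc_mg_ml = np.array([0.0, 0.25, 0.5, 1.0, 2.0])      # protein standards (assumed)
std_a595       = np.array([0.00, 0.12, 0.24, 0.46, 0.90])  # absorbance at 595 nm (assumed)

# Least-squares line: A595 = slope * C + intercept
slope, intercept = np.polyfit(std_conc_mg_ml, std_a595, 1)

def protein_mg_per_ml(a595: float, dilution_factor: float = 1.0) -> float:
    """Back-calculate protein concentration from a sample absorbance."""
    return dilution_factor * (a595 - intercept) / slope

# Example: an extract diluted 10-fold that reads A595 = 0.23
print(f"{protein_mg_per_ml(0.23, dilution_factor=10):.2f} mg/mL")
```

Any real calibration would, of course, follow the kit manufacturer's instructions and be restricted to the linear range of the dye response.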
To better understand the characteristics of the MO seed extract, its excitation–emission matrix (EEM) was measured (Figure 5(b)). The EEM depends on the excitation (y-axis) and emission (x-axis) wavelengths. There are three main fluorescent components, distinguished by their excitation and emission wavelengths: protein-like, humic acid-like and fulvic acid-like. According to previous studies, there are two main peaks in the EEM results: tryptophan-like fluorescence (280 nm excitation, 348 nm emission) and tyrosine-like fluorescence (275 nm excitation, 305 nm emission) (Edelhoch 1967; Coble 1996; Lakowicz 2006). The two peaks showed completely different trends with extraction time. For the MO seed extract with 1 min extraction time, a strong tryptophan-like peak is seen, indicated in red. The tryptophan-like peak decreased as extraction time increased, as shown in Figure 5(c) - the same trend as turbidity removal efficiency (see Figure 2). Maikokera & Kwaambwa (2007) previously showed the tryptophan in the coagulant protein from M. oleifera to be an active component in coagulation. As the stirring time for extraction increased, the important coagulant protein (tryptophan-like protein) decreased. Mechanical and physical stresses, such as vigorous stirring, whipping, heating or radiation, can weaken or break down protein structure, causing it to unfold and lose its properties (Patel et al. 1988; Isralewitz et al. 2001). Since stirring during extraction can denature protein, the protein concentration and the zeta potential contributed by the extracted protein would begin to decrease. On the other hand, as extraction time increased, a tyrosine-like peak appeared in the EEM spectrum. It showed the opposite trend to turbidity removal efficiency with extraction time, implying no contribution to coagulation. The UV spectrum (250 to 300 nm) of the MO extract, which contains many water-soluble components, was also measured (Figure 6). To compare UV absorbance across extraction times, the stock MO seed extract was diluted 20-fold. The UV spectrum showed the highest absorbance at about 270 nm and increased with longer extraction times. This is significantly different from the protein concentration in the MO extract measured by the Bradford method in Figure 5(a), which is based on the formation of a complex between Brilliant Blue G dye and proteins in solution (Bradford 1976; Roberts & Jones 2008). Because UV absorbance reflects the total dissolved content of the MO seed extract, it is unsurprising that it differs from the Bradford results: the material extracted from MO seed into water may include components that are both efficient and inefficient for coagulation.

Correlation of turbidity removal with MO extract characteristics

To identify the MO extract characteristic that matters most for coagulation, the correlation of turbidity removal rate with extract characteristics is shown in Figure 7. Three characteristics were used: zeta potential, protein by the Bradford method, and tryptophan-like protein from the EEM analysis. The R2 values for these factors were 0.8242, 0.923 and 0.9569, respectively. Although the positive zeta potential contributes to coagulating turbid water (Figure 4), the protein components contribute more to turbidity removal, with correlation values above 0.9. In particular, the tryptophan-like protein in the MO extract strongly drives the coagulation mechanism. Since protein can be denatured by physical stress (Patel et al. 1988; Isralewitz et al. 2001), it is assumed that the tryptophan-like protein in the MO extract is denatured at longer extraction times. In effect, longer extraction lowers the tryptophan-like protein concentration and thereby the coagulation efficiency. To reduce energy consumption and increase coagulation efficiency, a short extraction time is strongly recommended for MO extraction.

Turbidity and DOC caused by MO extract

The water-soluble extract of MO seed contains organic compounds, which can themselves add turbidity and DOC in the coagulation test. To evaluate the turbidity and DOC contributed by the MO extract, the extract was added to distilled water under the same conditions as the coagulation test. Figure 8(a) and (b) show the turbidity and DOC in distilled water, respectively, arising from the MO extract alone. The MO extract produced by 120 min extraction showed significantly elevated turbidity (9.0 NTU at a 10 mL/L dose of MO extract). DOC concentrations of 3.1, 3.8 and 4.2 mg/L were measured at a dose of only 1 mL/L of MO extract. At the optimum dose of MO (1 min extraction time and 5 mL/L of MO extract), the extract caused low turbidity, which does not affect the water turbidity in the coagulation test. The optimum dose of MO extract increased DOC to 15 mg/L, which could promote by-product formation during chlorination (e.g. trihalomethanes and haloacetic acids). Varying the extraction time made no difference to the DOC concentration; DOC was affected only by the extract dose.

The seeds of MO have been widely studied for coagulation in water treatment. This study was performed to find efficient extraction conditions of MO seed for coagulation. The results obtained in this study lead to the following main conclusions:

- The stirring speed for MO extraction did not affect the coagulation activity from 100 rpm to 800 rpm, so 100 rpm was used for energy efficiency.
- The MO extract produced by a short extraction time showed higher coagulation activity over the range from 1 min to 120 min.
- The characteristics of the MO extract were tracked with extraction time by zeta potential and protein content. As the extraction time increased, the positive charge of the MO extract decreased, reducing charge neutralization. A decrease in protein concentration by the Bradford assay was also observed with increasing extraction time, as was a decrease in the tryptophan-like peak in the MO extract, which is well known to be central to the coagulation mechanism. The extraction time of MO seed thus strongly affects the coagulation activity of the extract, and a short extraction time (1 min) is suggested as the efficient extraction condition.
- The MO extract contains complex components, including the positive protein responsible for coagulation. When added to raw water, the MO extract can itself contribute turbidity and DOC. The organic compounds in MO extract could affect later steps in the water treatment process, since organic matter from the extract can be a precursor of disinfection by-products. For safe use of MO extract, an appropriate amount should be used to limit residual organic compounds.

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2016S1A5B8925203).
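As a closing illustration, the correlation analysis behind Figure 7 amounts to fitting a least-squares line of turbidity removal against each extract characteristic and reporting the coefficient of determination. The sketch below shows that computation on invented placeholder values (one entry per extraction time); it does not reproduce the study's data, only the method.

```python
# Sketch of a Figure 7-style correlation analysis; all values are placeholders.
import numpy as np

removal  = np.array([92.0, 90.0, 85.0, 55.0, 30.0])  # % turbidity removal
zeta     = np.array([12.0, 11.0,  9.0,  6.0,  4.0])  # zeta potential, mV
protein  = np.array([ 5.0,  4.5,  3.5,  1.5,  0.5])  # Bradford protein, mg/mL
trp_peak = np.array([900., 850., 700., 400., 200.])  # EEM intensity at ex 280/em 348 nm

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """R^2 of a least-squares line y = m*x + b."""
    m, b = np.polyfit(x, y, 1)
    residuals = y - (m * x + b)
    return 1.0 - residuals.var() / y.var()

for name, x in [("zeta potential", zeta),
                ("protein (Bradford)", protein),
                ("tryptophan-like peak", trp_peak)]:
    print(f"R^2, removal vs {name}: {r_squared(x, removal):.3f}")
```

With the study's actual measurements in place of the placeholders, this procedure would yield the quoted R2 values of 0.8242, 0.923 and 0.9569.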
Physicists performed a Bell experiment between the islands of La Palma and Tenerife at an altitude of 2,400 m. Starting with an entangled pair of photons, one photon was sent 6 km away to Alice, and the other photon was sent 144 km away to Bob. The physicists took several steps to simultaneously close the locality loophole and freedom-of-choice loophole. Image credit: Thomas Scheidl, et al. and Google Earth, ©2008 Google, Map Data ©Tele Atlas.

(PhysOrg.com) — The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. By performing an experiment in which photons were sent from one Canary Island to another, physicists have shown that two of three loopholes can be closed simultaneously in a test that violates Bell's inequality (and therefore local realism) by more than 16 standard deviations. Performing a Bell test that closes all three loopholes still remains a challenge, but the physicists predict that such an experiment might be "on the verge of being possible" with state-of-the-art technology.

More information: Thomas Scheidl, et al. "Violation of local realism with freedom of choice." PNAS, November 16, 2010, vol. 107, no. 46, pp. 19708-19713. DOI: 10.1073/pnas.1002780107

The physicists, who belong to the group of Rupert Ursin and Anton Zeilinger and were all at either the Austrian Academy of Sciences in Vienna or the University of Vienna when performing the experiments in 2008, have published their study on the new Bell test in the early edition of PNAS. As they explain in their study, local realism consists of both realism – the view that reality exists with definite properties even when not being observed – and locality – the view that an object can only be influenced by its immediate surroundings. If a Bell test shows that a measurement of one object can influence the state of a second, distant object, then local realism has been violated. "The question of whether nature can be understood in terms of classical concepts and explained by local realism is one of the deepest in physics," coauthor Johannes Kofler told PhysOrg.com. "Getting Bell tests as loophole-free as possible and confirming quantum mechanics is therefore an extremely important task. From a technological perspective, certain protocols of quantum cryptography (which is entering the market at the moment) are based on entanglement and violation of Bell's inequality. This so-called 'unconditional security' must in practice take care of the loopholes in Bell tests." The physicists explained that, in experimental tests, there are three loopholes that allow observed violations of local realism to still be explained by local realistic theories.
These three loopholes can involve locality (if there is not a large enough distance separating the two objects at the time of measurement), the freedom to choose any measurement settings (so measurement settings may be influenced by hidden variables, or vice versa), and fair sampling (a small fraction of observed objects may not accurately represent all objects due to detection inefficiencies). Previous experiments have closed the first loophole by ensuring a large spatial separation between the two objects (in this case, two quantum mechanically entangled photons) so that measurements of the objects could not be influenced by each other. Special relativity then ensures that the objects cannot influence each other, since no physical signals can travel faster than the speed of light. In these experiments, classically unexplainable correlations were still observed between the objects, indicating a violation of local realism. (The fair-sampling loophole was closed in another earlier experiment using ions, where large detection efficiencies can be reached.)

In the current experiment, the physicists simultaneously ruled out both the locality loophole and the freedom-of-choice loophole. They performed a Bell test between the Canary Islands of La Palma and Tenerife, located 144 km apart. On La Palma, they generated pairs of entangled photons using a laser diode. Then they locally delayed one photon in a 6-km-long optical fiber (29.6-microsecond traveling time) and sent it to one measurement station (Alice), and sent the other photon 144 km away (479-microsecond traveling time) through open space to the other measurement station (Bob) on Tenerife. The scientists took several steps to close both loopholes. To rule out the possibility of local influence, they added a delay in the optical fiber to Alice to ensure that the measurement events there were space-like separated from those on Tenerife, such that no physical signal could be interchanged. Also, the measurement settings were randomly determined by quantum random number generators. To close the freedom-of-choice loophole, the scientists spatially separated the setting choice and the photon emission, which ensured that the setting choice and photon emission occurred at distant locations and nearly simultaneously (within 0.5 microseconds of each other). The scientists also added a delay to Bob's random setting choice. These combined measures eliminated the possibility of the setting choice or photon emission events influencing each other. But again, despite these measures, the scientists still detected correlations between the separated photons that can only be explained by quantum mechanics, violating local realism. By showing that local realism can be violated even when the locality and freedom-of-choice loopholes are closed, the experiment greatly reduces the number of "hidden variable theories" that might explain the correlations while obeying local realism. Further, these theories appear to be beyond the possibility of experimental testing, since they propose such things as allowing actions into the past or assuming a common cause for all events.
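To put the size of the reported violation in context, the standard CHSH form of Bell's inequality combines four correlation measurements into a single quantity S, which local realistic theories bound by |S| ≤ 2, while quantum mechanics allows up to 2√2 ≈ 2.83 for maximally entangled particles. The Python sketch below evaluates the quantum prediction at the standard measurement angles; it is a textbook calculation for illustration, not an analysis of the experiment's data.

```python
# CHSH value predicted by quantum mechanics for a singlet state.
# E(a, b) = -cos(a - b) is the textbook correlation for measurements
# along angles a and b; local realism requires |S| <= 2.
import math

def E(a: float, b: float) -> float:
    """Quantum correlation of outcomes for analyzer angles a and b (radians)."""
    return -math.cos(a - b)

# Standard settings that maximize the quantum value of S
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (local realism: <= 2, quantum maximum: 2*sqrt(2) ~ 2.828)")
```

The "more than 16 standard deviations" quoted above expresses how far the measured value of such a quantity lies beyond the local-realist bound, relative to its statistical uncertainty.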
Now, one of the greatest challenges in quantum mechanics is simultaneously closing the fair-sampling loophole along with the others to demonstrate a completely loophole-free Bell test. Such an experiment will require very high-efficiency detectors and other high-quality components, along with the ability to achieve extremely high transmission. Also, the test would have to operate at a critical distance between Alice and Bob that is not too large, to minimize photon loss, and not too small, to ensure sufficient separation. Although these requirements are beyond the current experimental set-up due to high loss between the islands, the scientists predict that they may be met in the near future. "Performing a loophole-free Bell test is certainly one of the biggest open experimental challenges in the foundations of quantum mechanics," Kofler said. "Various groups are working towards that goal. It is on the edge of being technologically feasible. Such an experiment will probably be done within the next five years."

Naked mole rat Heterocephalus glaber eating. Image: Wikipedia.

(PhysOrg.com) — Mole rats aren't the prettiest things; living underground as they do, they more resemble Gollum from the Lord of the Rings trilogy than other rats or mice. But they're interesting to scientists nonetheless because they have some remarkable traits: they live for twenty years, for example, and none of them ever get cancer. Perhaps even more interesting is the fact that because they live so close together underground, carbon dioxide builds up in their dens to levels that would kill most other mammals, and because oxygen levels are low too, an environment exists that would prove painful for most animals due to acid buildup in tissues. Mole rats are impervious to pain from acid, though, a fact that has intrigued scientists for years. Most assumed they simply had different types of nociceptors than other mammals. But that's not the case, as Gary Lewin and his colleagues from the Max Delbrück Center for Molecular Medicine in Berlin write in their paper published in Science. Instead, it appears the mole rats have a species-specific variant of a certain sodium channel. In order to feel things such as acid burn, animals have pain-sensing neurons (nociceptors) in their tissues, whose endings carry ion channels that control the flow of sensory information to the neuron, which is responsible for sending electrical signals to the brain. Channels can let things through or slam shut, depending on the cause of the stimulation. In the case of acid, the acid-sensing channels of most mammals are stimulated and partially close, but let enough of the sensory information pass through to allow the brain to feel the pain that acidic substances create. Oddly enough, the team found that to be the case with mole rats too, which meant they had to look elsewhere. In this case that meant looking at another type of sodium channel, called NaV1.7, which they found became blocked when exposed to acid. This discovery means that an animal doesn't have to have a unique type of nociceptor in order to be free from acid pain; all that's necessary is a change in the NaV1.7 channel that directs the flow of information passed to the neuron. This is quite a find because it could lead to ways to alleviate certain kinds of pain that people experience, such as inflammation from arthritis.

More information: The Molecular Basis of Acid Insensitivity in the African Naked Mole-Rat, Science, 16 December 2011: Vol. 334 no. 6062 pp. 1557-1560. DOI: 10.1126/science.1213760 ABSTRACT: Acid evokes pain by exciting nociceptors; the acid sensors are proton-gated ion channels that depolarize neurons. The naked mole-rat (Heterocephalus glaber) is exceptional in its acid insensitivity, but acid sensors (acid-sensing ion channels and the transient receptor potential vanilloid-1 ion channel) in naked mole-rat nociceptors are similar to those in other vertebrates. Acid inhibition of voltage-gated sodium currents is more profound in naked mole-rat nociceptors than in mouse nociceptors, however, which effectively prevents acid-induced action potential initiation. We describe a species-specific variant of the nociceptor sodium channel NaV1.7, which is potently blocked by protons and can account for acid insensitivity in this species. Thus, evolutionary pressure has selected for an NaV1.7 gene variant that tips the balance from proton-induced excitation to inhibition of action potential initiation to abolish acid nociception.

(Phys.org)—A team of researchers with the National Center for Nanoscience and Technology and Beihang University, both in China, has developed a biodegradable triboelectric nanogenerator for use as a lifetime-designed implantable power source in an animal body. In their paper published in the journal Science Advances, the team describes their nanogenerator, its possible uses and the ways it can be tweaked for use in different applications. Scientists have been working on developing internal devices for many years, and several have been created and are now in use inside human patients—the pacemaker is the most well known. But to date, all such devices suffer from the same deficit—none run using an internal power source, which means they must rely on batteries. While batteries are convenient, they tend to run out of power, which means a patient must undergo a surgical procedure to have them replaced, and surgical procedures by their very nature are risky because they open the body to possible infection. A better way, as the researchers with this new effort point out, would be to have implantable devices running off a power source that is generated inside the body, such as capturing heat or making use of the movement of blood. The new device they have created generates electricity via triboelectricity—where electricity is generated when two materials touch each other and then separate, one of the common ways that static electricity comes about. The new device consists of two strips of multi-layered material.
One of the strips has a flat film outer layer; the other strip has nanometer-sized protruding rods on its exterior—when the two strips meet and then pull away, a tiny amount of electricity is created. The layers are kept apart by blocks of a biodegradable polymer; electricity is generated as parts of the body move in a way that causes the two strips to come into contact and then to pull apart—over and over. Testing of the device showed it was capable of producing a power density of 32.6 milliwatts per square meter, which they found was enough to power a neuron-stimulation device used to steer the way neurons grow. The team claims their device has paved the way for a new generation of internal devices, noting that not only is it biodegradable, but it can be tuned to self-destruct over days, months or even years. Similar devices, they note, could be made to work by utilizing the power from a person breathing or from their heart beating.

Photographs of the BD-TENG at various stages of the degradation timeline suggest that devices encapsulated in PLGA were initially resistant to mass degradation. However, after 40 days, significant mass loss and structural disintegration began, and near-total mass loss was observed at 90 days. Credit: Science Advances (2016). DOI: 10.1126/sciadv.1501478

A team of researchers at Sun Yat-sen University in China has created a material with dual solar properties by adding tellurium nanoparticles to water—it showed both plasmonic-like and all-dielectric properties when exposed to sunlight. In their paper published in the journal Science Advances, the group describes their material and its possible uses. As the search for renewable resources continues, some in the field have turned to studying the possibility of adding materials to water to make it easier to produce steam for driving a turbine. Several years ago, one team of researchers discovered that adding nanoparticles to water could cause it to produce steam when exposed to sunlight. Since that time, scientists have continued experimenting with adding nanomaterials. Meanwhile, other experiments have suggested that plasmonics could play a role in photothermal conversion. In this new effort, the researchers have found a material that allows nanoparticles to offer the benefits of both approaches. The work by the team in China was straightforward: they created nanoparticles made out of tellurium, mixed them into a container filled with water and tested the result to see what changes the addition had wrought. The researchers report that adding the nanoparticles improved the evaporation rate by a factor of three. Testing showed that they could raise the water's temperature from 29°C to 85°C in just 100 seconds by shining sunlight on it. The researchers found that this improvement was possible because the nanoparticles behaved like plasmonic nanoparticles—but only when smaller-sized nanoparticles (less than 120 nanometers) were involved. Nanoparticles larger than 120 nanometers behaved like an all-dielectric. Mixing nanoparticles of both sizes into the same container of water allowed the sample to take on both characteristics—the team claims the resultant material is the first to demonstrate both properties. The researchers acknowledge that commercialization of their technique would be problematic because of the difficulty of manufacturing the different-sized nanoparticles in sufficient quantities. They note that they are looking into ways to make them using another approach. But they also note that if they succeed, the concept has other applications, such as creating new kinds of extremely small antennas or sensors.

More information: Churong Ma et al. The optical duality of tellurium nanoparticles for broadband solar energy harvesting and efficient photothermal conversion, Science Advances (2018). DOI: 10.1126/sciadv.aas9894 Abstract: Nanophotonic materials for solar energy harvesting and photothermal conversion are urgently needed to alleviate the global energy crisis. We demonstrate that a broadband absorber made of tellurium (Te) nanoparticles with a wide size distribution can absorb more than 85% solar radiation in the entire spectrum. Temperature of the absorber irradiated by sunlight can increase from 29° to 85°C within 100 s. By dispersing Te nanoparticles into water, the water evaporation rate is improved by three times under solar radiation of 78.9 mW/cm2. This photothermal conversion surpasses that of plasmonic or all-dielectric nanoparticles reported before. We also establish that the unique permittivity of Te is responsible for the high performance. The real part of permittivity experiences a transition from negative to positive in the ultraviolet-visible–near-infrared region, which endows Te nanoparticles with the plasmonic-like and all-dielectric duality. The total absorption covers the entire spectrum of solar radiation due to the enhancement by both plasmonic-like and Mie-type resonances. It is the first reported material that simultaneously has plasmonic-like and all-dielectric properties in the solar radiation region. These findings suggest that the Te nanoparticle can be expected to be an advanced photothermal conversion material for solar-enabled water evaporation.

Typical morphology and structure characterization results of Te nanoparticles prepared by ns-LAL. Credit: Science Advances (2018). DOI: 10.1126/sciadv.aas9894

A team of researchers affiliated with several institutions in Japan has found evidence that offers credence to a theory that subducted crust exists at the base of Earth's upper mantle. In their paper published in the journal Nature, the group describes experiments they conducted in their lab involving pressurizing material believed to exist in the mantle, and what they found. Johannes Buchen, with the California Institute of Technology, has written a News & Views piece on the work in the same journal issue. More information: Steeve Gréaux et al.
Sound velocity of CaSiO3 perovskite suggests the presence of basaltic crust in the Earth's lower mantle, Nature (2019). DOI: 10.1038/s41586-018-0816-5

Prior research has suggested that as tectonic plates shift around, some of the material on the surface is pushed below. Prior research has also suggested that such material would likely sink deep into the mantle because it is denser than the pyrolite that is believed to make up most of the mantle. Researchers theorize that the subducted material would likely settle at the bottom of the transition zone between the upper and lower mantle—but to date, there has been little evidence backing up this theory. The primary test has been analyzing seismic waves traveling through such material, but these readings have two possible explanations—the first is that differences in the speed of waves traveling through material in the area are due to dehydration melting; the other is that it is surface material that has drifted down into the mantle. In this new effort, the researchers report that they believe they have found evidence that supports the latter theory. The researchers started by noting that the crust beneath the oceans is made mainly of basalt. They also noted that prior research showed that when basalt makes its way into the mantle, a mineral called calcium silicate perovskite (CaSiO3) is created. Thus, if crust material made its way to the transition zone, it would be in the form of CaSiO3. But CaSiO3 exists in two configurations depending on its environment—at high temperature and high pressure, it has cubic symmetry; at lower temperatures and pressures, such as on the surface, it has tetragonal symmetry. That meant the team had to subject a sample of the crystal to pressures and temperatures approximately equal to those found in the mantle to test it. Once they succeeded, they sent ultrasonic waves through it to see if they matched what theory had suggested. They found that the cubic form of CaSiO3 did, indeed, slow the waves in approximately the same way as they are slowed when passing through the relevant parts of the mantle, suggesting that the material there is very nearly the same. And that suggests the material is, indeed, subducted crust.

The idea of forming a quasi-2-D superconducting layer at the interface between two different compounds has been around for several years. One past study, for instance, tried to achieve this by creating a thin superconducting layer between two insulating oxides (LaAlO3 and SrTiO3) with a critical temperature of 300 mK. Other researchers observed a thin superconducting layer in bilayers of an insulator (La2CuO4) and a metal (La1.55Sr0.45CuO4), neither of which is superconducting in isolation. "Here we put forward the idea that a thin charged layer on the interface between a ferroelectric and an insulator is formed in order to screen the electric field," Viktor Kabanov and Rinat Mamin, two researchers who carried out the study, told Phys.org via email. "This thin layer may be conducting or superconducting depending on the properties of the insulator.
In order to get a superconducting layer, we chose La2CuO4 – an insulator that becomes a high-Tc superconductor when it is doped by carriers." The heterostructure fabricated by Kabanov, Mamin and their colleagues consists of a ferroelectric magnetron-sputtered onto the surface of the parent compound of the high-Tc superconductor La2CuO4. At the interface between these two components, the researchers observed the appearance of a thin superconducting layer, which attains its superconductivity at temperatures below 30 K. The researchers detected the layer's superconducting properties by measuring its resistivity and via the Meissner effect. They found that a finite resistance is created when applying a weak magnetic field perpendicular to the interface, which confirms the quasi-2-D quality of the layer's superconductive state. "The key advantage of our technique is the relative simplicity of the creation of the heterostructure, because the requirements for the roughness of the surface are not so stringent," Kabanov and Mamin said. "On the other hand, changing the polarization in the ferroelectric allows us to control the properties of the conducting layer." Kabanov, Mamin and their colleagues are the first ever to observe superconductivity at the interface between a ferroelectric and an insulator. In the future, their approach and the superconductors they fabricated could inform the design of new electronic devices with ferroelectrically controlled superconductivity. "As far as plans for the future are concerned, we would like to learn how we can control the superconducting properties of the interface by rotating the polarization of the ferroelectric," Kabanov and Mamin said. "Another idea is to try to control the properties of the interface by laser illumination. This is basically the direction we are working on now."

Researchers at the Zavoisky Physical-Technical Institute and the Southern Scientific Center of RAS, in Russia, have recently fabricated quasi-2-D superconductors at the interface between a ferroelectric Ba0.8Sr0.2TiO3 film and an insulating parent compound of La2CuO4. Their study, presented in a paper published in Physical Review Letters, is the first to achieve superconductivity in a heterostructure consisting of a ferroelectric and an insulator.

More information: Dmitrii P. Pavlov et al. Fabrication of High-Temperature Quasi-Two-Dimensional Superconductors at the Interface of a Ferroelectric Ba0.8Sr0.2TiO3 Film and an Insulating Parent Compound of La2CuO4, Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.122.237001 Jian-Feng Ge et al. Superconductivity above 100 K in single-layer FeSe films on doped SrTiO3, Nature Materials (2014). DOI: 10.1038/nmat4153 High-temperature interface superconductivity between metallic and insulating cuprates. arXiv:0810.1890 [cond-mat.supr-con]. arxiv.org/abs/0810.1890

The schematic structures of Ba0.8Sr0.2TiO3/La2CuO4 (a) with q2DEG (shown in red); AFM image of the La2CuO4 single-crystal surface without the film (b) illustrates the inhomogeneity of the interface. The temperature dependence of the magnetic susceptibility (c), and the temperature dependence of the resistivity (d), of the La2CuO4 single crystal (without ferroelectric film). Credit: Dmitrii P.
Pavlov et al., arXiv:1804.05519 [cond-mat.supr-con]

Citation: A new quasi-2D superconductor that bridges a ferroelectric and an insulator (2019, June 27), retrieved 18 August 2019 from https://phys.org/news/2019-06-quasi-2d-superconductor-bridges-ferroelectric-insulator.html

Pavitra Bandhan is the journey of two strangers, Aashima and Girish, who are not supposed to meet by any means, but destiny brings them not only under the same roof but also binds them in a relationship of a lifetime. Aashima is a small-town girl, born and brought up in a middle-class nuclear family. Her father is a mill worker who has had to struggle his entire life to make ends meet. Aashima loves her family and follows her family values religiously. She is a young, kind-hearted and generous girl for whom her family is the priority. She wants to share her father's burden and wants to build a home for him. On the other hand, Girish Roy Chowdhury is the owner of the saree mill in which Aashima's father is an employee. Girish hails from an influential family in Murshidabad and has built his empire with his own hard work and persistence. His principles are based on the experiences of his life. Though Aashima and Girish belong to different worlds and are different personalities, the similarity between them is that they live for their families. As the story unfolds we will see the journey of these two strangers, with their share of trials and tribulations, coming from different worlds and different mindsets, but bonded, as destiny has it, by the thread of love, sacrifice and devotion - which is why the serial holds the title Pavitra Bandhan - Do Dilon Ka. Yash Tonk, Hritu Dudhani, Yamini Thakur, Shabnam Sayeed, Shailley Kaushik, Munni Jha, Shalini Arora and Rajat Dahiya are the main cast of this serial. The serial will air every Monday-Friday from 9 September at 8.30 PM, only on DD National.

The 34th edition of the India Trade Promotion Organisation's trade fair is set to commence in the Capital. The event will be inaugurated by the President of India, Pranab Mukherjee, on 14 November at Hamsadhwani Theatre, Pragati Maidan. The partner country this year is South Africa, Thailand is the focus country, and Delhi is the focus state. The theme of the fair is 'Women Entrepreneurs'. Over 25 women entrepreneurs from the partner country will be exhibiting their final products at the event. Over 6,500 participants from India and abroad are taking part in the fair. Countries including Afghanistan, Bangladesh, Bahrain, China, Cuba, Egypt, Germany, Hong Kong, Iran, Indonesia, Japan, South Korea, Kuwait, Kyrgyzstan, Malaysia, Myanmar, Nepal, Pakistan, Sri Lanka, South Africa, Thailand, Tibet, Turkey, the UAE and Vietnam are taking part in this grand event this year. There are 31 Central Government ministries and departments along with their agencies/PSUs, while all the states and UTs, apart from leading private sector companies, will make up the domestic sector. 'Committed to the new mantra of Make in India, the fair will also focus on PM Modi's cleanliness drive, the Swachh Bharat Mission,' an official said during a press meet. Various cultural programmes have been organised throughout the fair. The first five days of the fair, i.e. 14-18 November, will be exclusively for business visitors.
Kolkata: The Directorate of Revenue Intelligence (DRI) has seized exotic birds which were smuggled from Bangladesh into West Bengal, an agency statement said today. Acting on specific input, DRI officials intercepted a vehicle along the Kalyani Expressway near here and found three red-and-blue macaws, three eclectus parrots, eight pygmy falcons and seven white ducks, it said. The birds were found badly crammed into plastic bags kept in the boot of the car, the statement said. The birds were illegally brought into the country from Bangladesh through the Indo-Bangla border in North 24 Parganas district, it said. The probe agency said it immediately contacted the office of the principal chief conservator of forests, West Bengal, and also the director of Alipore zoo, Kolkata. The birds were handed over by the DRI to the zoo. In March this year, the agency had seized 214 Indian star tortoises in Kolkata. Less than a month back, two hoolock gibbons, an endangered species under the Wildlife Protection Act, 1972, and two palm civets, another endangered species, along with a variety of exotic birds, all smuggled into the country from Bangladesh, were seized by the DRI. "There is an urgent need to step up the fight against wildlife crime, which has environmental, social and economic impacts, and a concerted effort is needed by all the law enforcement agencies in combating the same," the agency said.

It is often seen that vegetarians struggle to find authentic dishes, and very few places offer what suits their palate. To ease the struggle, Chef Veena Arora from The Imperial shares the recipe for Tauhu Nerng, chilled silky bean curd with basil, which is part of the Spice Route's Vegan Special at The Imperial, New Delhi, running from March 12 till 20. The curated menu, with mock meat as a star ingredient, will offer a perfect tease to your senses and keep you light this season.

TAUHU NERNG - chilled silky bean curd with basil
Ingredients:
• Silky bean curd - 1 no.
• Lemon grass - ½ tsp
• Basil - 5 gms
• Light soya sauce - 5 ml
• Gelatin - ¼ sheet
• Kaffir leaves - 5 gms
Method: Soak the gelatin in water and heat it. Blend the bean curd. Blend the lemongrass. Cut the kaffir leaves into very thin threads. Mix all the above ingredients with the light soya sauce. Line a mould with cling wrap and pour in the mixture. Chill it for 3 hours and serve on a bed of basil.
W. D. Hamilton

Born: William Donald Hamilton, 1 August 1936
Died: 7 March 2000 (aged 63)
Alma mater: University College London; London School of Economics; St. John's College, Cambridge
Known for: Kin selection, Hamilton's rule
Awards: Newcomb Cleveland Prize (1981); Linnean Medal (1989); Kyoto Prize (1993); Crafoord Prize (1993); Sewall Wright Award (1998)
Academic advisors: John Hajnal
Doctoral students: Laurence Hurst

Hamilton became famous through his theoretical work expounding a rigorous genetic basis for the existence of altruism, an insight that was a key part of the development of the gene-centered view of evolution. He is considered one of the forerunners of sociobiology. Hamilton also published important work on sex ratios and the evolution of sex. From 1984 to his death in 2000, he was a Royal Society Research Professor at Oxford University.

Hamilton was born in 1936 in Cairo, Egypt, the second of seven children. His parents were from New Zealand; his father A. M. Hamilton was an engineer, and his mother B. M. Hamilton was a medical doctor. The Hamilton family settled in Kent. During the Second World War, the young Hamilton was evacuated to Edinburgh. He had an interest in natural history from an early age and spent his spare time collecting butterflies and other insects. In 1946, he discovered E. B. Ford's New Naturalist book Butterflies, which introduced him to the principles of evolution by natural selection, genetics, and population genetics.

He was educated at Tonbridge School, where he was in Smythe House. As a 12-year-old, he was seriously injured while playing with explosives his father had; these were left over from his father making hand grenades for the Home Guard during World War II. Surgeons at King's College Hospital performed a thoracotomy and amputated fingers on his right hand to save his life; he was left with scarring and needed six months to recover. Before going up to the University of Cambridge, he travelled in France and completed two years of national service.

As an undergraduate at St. John's College, he was uninspired by the "many biologists [who] hardly seemed to believe in evolution". He was intrigued by Ronald Fisher's book The Genetical Theory of Natural Selection, but Fisher lacked standing at Cambridge, being viewed as only a statistician. Hamilton was excited by Fisher's chapters on eugenics. In earlier chapters, Fisher provided a mathematical basis for the genetics of evolution, and Hamilton later blamed Fisher's book for his getting only a 2:1 degree.

Hamilton enrolled in an MSc course in demography at the London School of Economics (LSE), under Norman Carrier, who helped secure various grants for his studies. Later, when his work became more mathematical and genetical, he had his supervision transferred to John Hajnal of the LSE and Cedric Smith of University College London (UCL).

Both Fisher and J. B. S. Haldane had seen a problem in how organisms could increase the fitness of their own genes by aiding their close relatives, but had not recognised its significance or properly formulated it. Hamilton worked through several examples, and eventually realised that the number that kept falling out of his calculations was Sewall Wright's coefficient of relationship. This became Hamilton's rule: in each behaviour-evoking situation, the individual assesses his neighbour's fitness against his own according to the coefficients of relationship appropriate to the situation.
Algebraically, the rule posits that a costly action should be performed if rB > C, where C is the cost in fitness to the actor, r is the genetic relatedness between the actor and the recipient, and B is the fitness benefit to the recipient. Fitness costs and benefits are measured in fecundity; r is a number between 0 and 1. His two 1964 papers entitled The Genetical Evolution of Social Behaviour are now widely referenced.

The proof and discussion of its consequences, however, involved detailed mathematics, and two reviewers passed over the paper. The third, John Maynard Smith, did not completely understand it either, but recognised its significance. Having his work passed over later led to friction between Hamilton and Maynard Smith, as Hamilton thought Maynard Smith had held his work back to claim credit for the idea (during the review period Maynard Smith published a paper that referred briefly to similar ideas). The Hamilton paper was printed in the Journal of Theoretical Biology and, when first published, was largely ignored. Recognition of its significance gradually increased to the point that it is now routinely cited in biology books.

Much of the discussion relates to the evolution of eusociality in insects of the order Hymenoptera (ants, bees and wasps) based on their unusual haplodiploid sex-determination system. This system means that females are more closely related to their sisters than to their own (potential) offspring. Thus, Hamilton reasoned, a "costly action" would be better spent in helping to raise their sisters, rather than reproducing themselves.

In his 1970 paper Selfish and Spiteful Behaviour in an Evolutionary Model, Hamilton considers the question of whether harm inflicted upon an organism must inevitably be a byproduct of adaptations for survival. What of possible cases where an organism is deliberately harming others without apparent benefit to the self? Such behaviour Hamilton calls spiteful. It can be explained as an increase in the chance of an organism's alleles being passed to the next generation through harming individuals that are less closely related to it than average. Spite, however, is unlikely ever to be elaborated into any complex form of adaptation. Targets of aggression are likely to act in revenge, and the majority of pairs of individuals (assuming a panmictic species) exhibit a roughly average level of genetic relatedness, making the selection of targets of spite problematic.

Extraordinary sex ratios

Between 1964 and 1977 Hamilton was a lecturer at Imperial College London. Whilst there he published a paper in Science on "extraordinary sex ratios". Fisher (1930) had proposed a model as to why "ordinary" sex ratios were nearly always 1:1 (but see Edwards 1998), and likewise extraordinary sex ratios, particularly in wasps, needed explanations. Hamilton had been introduced to the idea and formulated its solution in 1960, when he had been assigned to help Fisher's pupil A. W. F. Edwards test the Fisherian sex ratio hypothesis. Hamilton combined his extensive knowledge of natural history with deep insight into the problem, opening up a whole new area of research. The paper was also notable for introducing the concept of the "unbeatable strategy", which John Maynard Smith and George R. Price were to develop into the evolutionarily stable strategy (ESS), a concept in game theory not limited to evolutionary biology. Price had originally come to Hamilton after deriving the Price equation, and thus rederiving Hamilton's rule.
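Hamilton's rule is compact enough to state as runnable code. The short Python sketch below is purely illustrative; the function name and the numerical examples are invented for this article, not drawn from Hamilton's papers:

def altruism_favoured(cost, benefit, relatedness):
    # Hamilton's rule: a costly act is favoured by selection when r * B > C.
    # cost (C) and benefit (B) are fitness effects measured in fecundity;
    # relatedness (r) is the coefficient of relationship, between 0 and 1.
    return relatedness * benefit > cost

# An act that costs the actor 1 offspring and gains the recipient 3:
print(altruism_favoured(cost=1.0, benefit=3.0, relatedness=0.5))    # True: full sibling, 1.5 > 1
print(altruism_favoured(cost=1.0, benefit=3.0, relatedness=0.125))  # False: first cousin, 0.375 < 1

The same inequality drives the haplodiploidy argument above: full sisters in Hymenoptera share a relatedness of 0.75, versus 0.5 for a female's own offspring, which tips the balance toward raising sisters.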
Maynard Smith later peer-reviewed one of Price's papers, and drew inspiration from it. The paper was not published, but Maynard Smith offered to make Price a co-author of his ESS paper, which helped to improve relations between the men. Price committed suicide in 1975, and Hamilton and Maynard Smith were among the few present at the funeral.

In 1966 he married Christine Friess and they were to have three daughters, Helen, Ruth and Rowena. Twenty-six years later they amicably separated.

Hamilton was a visiting professor at Harvard University and later spent nine months with the Royal Society's and the Royal Geographical Society's Xavantina-Cachimbo Expedition as a visiting professor at the University of São Paulo.

From 1978 Hamilton was Professor of Evolutionary Biology at the University of Michigan. Simultaneously, he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. His arrival sparked protests and sit-ins from students who did not like his association with sociobiology. There he worked with the political scientist Robert Axelrod on the prisoner's dilemma, and was a member of the BACH group with original members Arthur Burks, Robert Axelrod, Michael Cohen, and John Holland.

Chasing the Red Queen

Hamilton was an early proponent of the Red Queen theory of the evolution of sex (separate from the other theory of the same name previously proposed by Leigh Van Valen). This was named for a character in Lewis Carroll's Through the Looking-Glass, who is continuously running but never actually travels any distance:
- "Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else—if you ran very fast for a long time, as we've been doing."
- "A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!" (Carroll, p. 46)

This theory hypothesizes that sex evolved because new and unfamiliar combinations of genes could be presented to parasites, preventing the parasites from preying on that organism: species with sex were able to continuously "run away" from their parasites. Likewise, parasites were able to evolve mechanisms to get around the organism's new set of genes, thus perpetuating an endless race.

Return to Britain

In 1980, he was elected a Fellow of the Royal Society, and in 1984, he was invited by Richard Southwood to be the Royal Society Research Professor in the Department of Zoology at Oxford, and a fellow of New College, where he remained until his death. His collected papers, entitled Narrow Roads of Gene Land, began to be published in 1996. The first volume was entitled Evolution of Social Behaviour.

The field of social evolution, in which Hamilton's rule has central importance, is broadly defined as the study of the evolution of social behaviours, i.e. those that impact the fitness of individuals other than the actor. Social behaviours can be categorized according to the fitness consequences they entail for the actor and recipient. A behaviour that increases the direct fitness of the actor is mutually beneficial if the recipient also benefits, and selfish if the recipient suffers a loss. A behaviour that reduces the fitness of the actor is altruistic if the recipient benefits, and spiteful if the recipient suffers a loss. This classification was first proposed by Hamilton in 1964.
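That fourfold classification is, in effect, a sign test on two numbers: the behaviour's direct fitness effect on the actor and on the recipient. A minimal sketch of the scheme follows; the function is an invented illustration of the terminology just described, not code from any of the sources cited here:

def classify_social_behaviour(actor_effect, recipient_effect):
    # Classify a behaviour by the sign of its direct fitness effect
    # on the actor and on the recipient, per Hamilton's 1964 scheme.
    if actor_effect > 0 and recipient_effect > 0:
        return "mutually beneficial"
    if actor_effect > 0 and recipient_effect < 0:
        return "selfish"
    if actor_effect < 0 and recipient_effect > 0:
        return "altruistic"
    if actor_effect < 0 and recipient_effect < 0:
        return "spiteful"
    return "neutral"

print(classify_social_behaviour(-1, +3))  # altruistic
print(classify_social_behaviour(+2, -1))  # selfish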
Expedition to the Congo

During the 1990s, Hamilton became increasingly interested in the controversial argument that the origin of HIV lay in oral polio vaccine trials conducted by Hilary Koprowski in Africa during the 1950s. A letter by Hamilton on the topic to the major peer-reviewed journal Science was rejected in 1996. Despite this rejection, he gave supportive declarations on the hypothesis to the BBC and wrote the foreword of a 1999 book, The River, by journalist Edward Hooper, who investigated the hypothesis. To look for indirect evidence of the OPV hypothesis by assessing natural levels of simian immunodeficiency virus (SIV) in primates, Hamilton and two others ventured on a field trip to the then-war-torn Democratic Republic of the Congo in early 2000. However, none of the more than 60 urine and faecal samples collected by Hamilton contained detectable SIV.

He returned to London from Africa on 29 January 2000. He was admitted to University College Hospital, London, on 30 January 2000, transferred to Middlesex Hospital on 5 February 2000, and died there on 7 March 2000. An inquest was held on 10 May 2000 at Westminster Coroner's Court to inquire into rumours about the cause of his death. The coroner concluded that his death was due to "multi-organ failure due to upper gastrointestinal haemorrhage due to a duodenal diverticulum and arterial bleed through a mucosal ulcer". Following reports attributing his death to complications arising from malaria, the BBC Editorial Complaints Unit's investigation established that he had contracted malaria during his final African expedition. The pathologist had suggested the possibility that the ulceration and consequent haemorrhage had resulted from a pill (which might have been taken because of malarial symptoms) lodging in the diverticulum; but even if this suggestion were correct, the link between malaria and the observed causes of death would be entirely indirect.

A secular memorial service (he was an agnostic) was held at the chapel of New College, Oxford on 1 July 2000, organised by Richard Dawkins. He was buried near Wytham Woods. He had, however, written an essay, My intended burial and why, in which he wrote:

I will leave a sum in my last will for my body to be carried to Brazil and to these forests. It will be laid out in a manner secure against the possums and the vultures just as we make our chickens secure; and this great Coprophanaeus beetle will bury me. They will enter, will bury, will live on my flesh; and in the shape of their children and mine, I will escape death. No worm for me nor sordid fly, I will buzz in the dusk like a huge bumble bee. I will be many, buzz even as a swarm of motorbikes, be borne, body by flying body out into the Brazilian wilderness beneath the stars, lofted under those beautiful and un-fused elytra which we will all hold over our backs. So finally I too will shine like a violet ground beetle under a stone.

The second volume of his collected papers, Evolution of Sex, was published in 2002, and the third and final volume, Last Words, in 2005.
- 1978 Foreign Honorary Member of the American Academy of Arts and Sciences
- 1980 Fellow of the Royal Society of London
- 1982 Newcomb Cleveland Prize of the American Association for the Advancement of Science
- 1988 Darwin Medal of the Royal Society of London
- 1989 Scientific Medal of the Linnean Society
- 1991 Frink Medal of the Zoological Society of London
- 1992/3 Wander Prize of the University of Bern
- 1993 Crafoord Prize of the Royal Swedish Academy of Sciences
- 1993 Kyoto Prize of the Inamori Foundation
- 1995 Fyssen Prize of the Fyssen Foundation
- 1997 Honorary title of Academician of Science in Finland

- Alan Grafen has written a biographical memoir for the Royal Society.
- A biographical book has also been published by Ullica Segerstråle: Segerstråle, U. 2013. Nature's oracle: the life and work of W. D. Hamilton. Oxford University Press.

Hamilton began publishing his collected papers in 1996, along the lines of Fisher's collected papers, with short essays giving each paper context. He died after the preparation of the second volume, so the essays for the third volume come from his coauthors.

- Hamilton W.D. (1996) Narrow Roads of Gene Land vol. 1: Evolution of Social Behaviour. Oxford University Press, Oxford. ISBN 0-7167-4530-5
- Hamilton W.D. (2002) Narrow Roads of Gene Land vol. 2: Evolution of Sex. Oxford University Press, Oxford. ISBN 0-19-850336-9
- Hamilton W.D. (2005) Narrow Roads of Gene Land vol. 3: Last Words (with essays by coauthors, ed. M. Ridley). Oxford University Press, Oxford. ISBN 0-19-856690-5
- Hamilton, W. (1964). "The genetical evolution of social behaviour. I". Journal of Theoretical Biology. 7 (1): 1–16. doi:10.1016/0022-5193(64)90038-4. PMID 5875341.
- Hamilton, W. (1964). "The genetical evolution of social behaviour. II". Journal of Theoretical Biology. 7 (1): 17–52. doi:10.1016/0022-5193(64)90039-6. PMID 5875340.
- Hamilton, W. (1966). "The moulding of senescence by natural selection". Journal of Theoretical Biology. 12 (1): 12–45. doi:10.1016/0022-5193(66)90184-6. PMID 6015424.
- Hamilton, W. (1967). "Extraordinary sex ratios. A sex-ratio theory for sex linkage and inbreeding has new implications in cytogenetics and entomology". Science. 156 (774): 477–488. Bibcode:1967Sci...156..477H. doi:10.1126/science.156.3774.477. PMID 6021675.
- Hamilton, W. (1971). "Geometry for the selfish herd". Journal of Theoretical Biology. 31 (2): 295–311. doi:10.1016/0022-5193(71)90189-5. PMID 5104951.
- Hamilton W. D. (1975). Innate social aptitudes of man: an approach from evolutionary genetics. In R. Fox (ed.), Biosocial Anthropology, Malaby Press, London, 133–53.
- Axelrod, R.; Hamilton, W. (1981). "The evolution of cooperation". Science. 211 (4489): 1390–1396. Bibcode:1981Sci...211.1390A. doi:10.1126/science.7466396. PMID 7466396. With Robert Axelrod.
- Hamilton, W.; Zuk, M. (1982). "Heritable true fitness and bright birds: A role for parasites?". Science. 218 (4570): 384–387. Bibcode:1982Sci...218..384H. doi:10.1126/science.7123238. PMID 7123238. S2CID 17658568.
- "Obituary by Richard Dawkins", The Independent, 10 March 2000. See also his eulogy by Richard Dawkins reprinted in his book A Devil's Chaplain (2003).
- BBC Radio 4 – Great Lives – 2 Feb 2010
- Aaen-Stockdale, C. (2017), "Selfish Memes: An Update of Richard Dawkins' Bibliometric Analysis of Key Papers in Sociobiology", Publications, 5 (2): 12, doi:10.3390/publications5020012
- Brown, Andrew (2000). The Darwin Wars: The Scientific Battle for the Soul of Man. London: Touchstone.
ISBN 978-0-684-85145-7.
- The Red Queen Hypothesis at Indiana University. Quote: "W. D. Hamilton and John Jaenike were among the earliest pioneers of the idea."
- Hamilton, WD; Brown, SP (July 2001). "Autumn tree colours as a handicap signal". Proc. R. Soc. B. 268 (1475): 1489–1493. doi:10.1098/rspb.2001.1672. ISSN 0962-8452. PMC 1088768. PMID 11454293.
- "The Politics of a Scientific Meeting: the Origin-of-AIDS Debate at the Royal Society". Politics and the Life Sciences. 20 (20). September 2001. Retrieved 1 September 2020.
- "'Scientists started Aids epidemic'". BBC News. 1 September 1999. Retrieved 1 September 2020.
- Bozzi, Maria Luisa (29 September 2001). "Truth and Science: Bill Hamilton's Legacy" (PDF). Retrieved 1 September 2020.
- Bliss, Mary (6 January 2001). "Origin of AIDS". The Lancet. 357 (9249): 73–4. doi:10.1016/S0140-6736(05)71578-6. PMID 11197392. S2CID 263972. Retrieved 1 September 2020.
- "ECU Ruling: Great Lives, BBC Radio 4, 2 February 2010". BBC. Retrieved 24 June 2011.
- Ullica Segerstrale (28 February 2013). Nature's Oracle: The Life and Work of W.D. Hamilton. OUP Oxford. pp. 383–. ISBN 978-0-19-164277-7.
- Hamilton, W. D. (2000). "My intended burial and why". Ethology Ecology and Evolution. 12 (2): 111–122. doi:10.1080/08927014.2000.9522807. S2CID 84908650.
- Grafen, A. (2004). "William Donald Hamilton. 1 August 1936 – 7 March 2000" (PDF). Biographical Memoirs of Fellows of the Royal Society. 50: 109–132. doi:10.1098/rsbm.2004.0009. S2CID 56905497.
- Edwards, A. W. F. (1998). "Natural Selection and the Sex Ratio: Fisher's Sources". The American Naturalist. 151 (6): 564–569. doi:10.1086/286141. PMID 18811377. S2CID 40540426.
- Fisher R. A. (1930). The Genetical Theory of Natural Selection. Clarendon Press, Oxford.
- Ford, E. B. (1945) New Naturalist 1: Butterflies. Collins: London.
- Maynard Smith, J.; Price, G.R. (1973). "The logic of animal conflict". Nature. 246 (5427): 15–18. Bibcode:1973Natur.246...15S. doi:10.1038/246015a0. S2CID 4224989.
- Dawkins R. (1989) The Selfish Gene, 2nd ed. Oxford University Press.
- Madsen, E. A.; Tunney, R.; Fieldman, G.; Plotkin, H. C.; Dunbar, R.; Richardson, J. M.; McFarland, D. (2006). "Kinship and altruism: a cross-cultural experimental study". British Journal of Psychology: http://www.ingentaconnect.com/content/bpsoc/bjp/pre-prints/218320
- Obituaries and reminiscences
- Royal Society citation
- Truth and Science: Bill Hamilton's legacy
- Centro Itinerante de Educação Ambiental e Científica Bill Hamilton (The Bill Hamilton Itinerant Centre for Environmental and Scientific Education) (in Portuguese)
- Non-mathematical excerpts from Hamilton 1964
- "If you have a simple idea, state it simply", a 1996 interview with Hamilton
- London Review of Books book review
- W. D. Hamilton's work in game theory
Gasohol is a mixture of gasoline and alcohol (mostly ethanol). Historically, the use of such a blend dates back as early as the 1920s, and it has been promoted on and off. Its primary intention is to reduce the consumption (import) of gasoline. Today, in many countries, the use of gasohol is promoted or even mandated (notably Brazil).

In Thailand, the first gasohol appeared on the market in 2001 and has been steadily promoted. The Thai Government, with its usual top-down approach, has set a rather aggressive schedule to replace all conventional gasoline with gasohol. At the time of writing, none of the neighbouring countries (Burma, Laos, Cambodia, Malaysia and Singapore) produces gasohol.

Regular 95 gasoline is to be completely replaced with gasohol 95 by January 2007, and is already increasingly difficult to find at gas stations. Regular 91 gasoline is allowed to survive some more years, but is to be replaced with gasohol 91 by 2012. At the moment, both gasohol 95 and gasohol 91 contain 10% ethanol (E10), but this proportion is slated to increase in the near future.

2. How does it affect your vehicle?

Unless a motor vehicle is specifically designed to cope with gasohol (and many motorcycles are NOT!), use of gasohol has no advantage on a personal level but incurs certain disadvantages (loss of power and mileage) or even damage (fuel system).

a. Loss of power and mileage

This is a natural result of using gasohol and more or less inevitable. How much inconvenience one encounters depends on the engine and the fuel system, and varies from a subtle change in throttle response to outright engine trouble. You can find various discussions on carburettor modification for gasohol use on motorcycle forums in several countries (e.g. the U.S.). Roughly speaking, to keep the same performance as with regular gasoline, you need to replace the carburettor jet with a bigger size to burn more fuel, and adjust the air intake accordingly. This is a tedious chore even for an experienced mechanic.

b. Damage to the fuel system

While newer fuel injection models are reported to be designed to resist ethanol corrosion, older fuel injection models and, more seriously, carburettor models are subject to mechanical damage, whether in the short term or the long term. One characteristic of the situation in Thailand is that most big motorcycles are secondhand imports from Japan, where the use of gasohol is practically non-existent - thus there is little or no official comment from the manufacturers. At the other end, motorcycles manufactured in the U.S. - be it Harley-Davidson or Japanese models - are claimed to be ethanol-resistant. (ref. E-10 Unleaded in Motorcycles)

3. What to do?

Unless you know your bike can handle gasohol without incurring drastic loss of performance or damage, stay away from gasohol, and try to search for model-specific information and experience from other riders. Regular 91 gasoline will still be available for a couple more years, and most motorcycles manufactured in Japan are designed for regular 89 (or above) gasoline.

Different gas stations use different brand names to market various fuels, and it's often difficult to tell which is what. Basically, fuels are tinted in the following colors:

Regular 95: yellow == Gasohol 95: orange
Regular 91: red == Gasohol 91: green

and fuel pumps often (but not always) carry stickers matching the color of the fuel. Generally speaking, gas station attendants are unreliable and irresponsible, and they tend to pump in gasohol even when you ask for regular gasoline.
It is your responsibility to watch every move of the attendants to make sure that they pump in what you want.

4. Reported experience

Honda VFR 750: "Stay away from mixed petrol!! I got a tankful of what I can only guess was mixed 95, and the bike ran like shit. I eventually siphoned out the remaining 12 litres and refilled somewhere else; it took a day's running to clear it out and get the bike back to normal. Open the throttle and the bike just hesitates as it tries to accelerate."

Yamaha V-Max 1200: "I made the mistake of putting some gasohol in it once; that just destroyed the rubber in the carbies, requiring a complete strip down by Siam Superbikes - a job which I'm reliably told 'was a complete barstard!!!'"

Honda Africa Twin 650: "It would suck ... my 1988 Honda Africa Twin 650cc had a lousy performance with gasohol and it felt like some technical problem in the engine or carbo!"

Please send your trouble/non-trouble experience to firstname.lastname@example.org

5. Gasohol-Compatibility List

The following table is an excerpt from "Type of cars and motorcycles that are Shell Gasohol 95 compatible". This list is limited to Thai-made motorcycles - and its reliability is unknown - but it should give you some idea that not all carburetor models break down instantly upon filling up with gasohol.

|CELA||1990 - 1991|
|CELA-L||1990 - 1991|
|DREAM EXCES||1994 - 1996|
|DREAM 125||2001 - 2003|
|NICE UBOX||2000 - 2002|
|NOVA-KS||1990 - 1991|
|NOVA-R||1990 - 1991|
|NOVA-RS||1990 - 1991|
|NOVA-SP1||1994 - 1996|
|PHANTOM 150||1997 - 1999|
|PHANTOM 200||2000 - 2002|
|WAVE 125 R||2003 - Present|
|WAVE 125 S||2003 - Present|
|FD 110 LOVE||1996|
|SMASH 110, SMASH JUNIOR 110||2002|
|SMASH D 110, SMASH 110 PRO, SMASH JUNIOR 110 PRO||2003|
|SMASH 110 LIMITED||2005|
|BEST 110 PRO||2001|
|BEST 125 SPORT, SUPER BEST 125||2004|
|BEST 125 LIMITED||2005|
|STEP 125||Sept. 2005|
|KATANA 125||Sept. 2005|
|Belle 100, Belle R||1992|
|Mate 100, Mate 111, Mate alfa||1992|
|RX-Z, Speed, Tiara, Touch, Rainbow, X-1||1992|
|TZM, TZR, TZR-R||1992|
|Fresh, Fresh ll||1992|
|Nouvo, Nouvo MX||1992|
|Spark, Spark-135, Spark-R, Spark-Z||1992|
|Boxer||200SAD||Sport||2004 - Present|
|CX 125 A||125EAA||Enduro||2004 - Present|
|CX 125 E||125EAE||Enduro||2004 - Present|
|CX 125 SE||125EAA||Enduro||2005 - Present|
|CX 125 SM||125EAE||Enduro||2005 - Present|
|Joker 120||120MFD||Shopper||2002 - 2004|
|Joker 125||125MFA||Shopper||2003 - Present|
|Joker 125 F/0||125MFA||Shopper||2004 - Present|
|Joker 125 (M)||125MFA||Shopper||2003 - Present|
|Joker 125 (M) F/0||125MFA||Shopper||2004 - Present|
|Ozone||110MFA||Family||2004 - Present|
|S 120 SG 1.2||120MFC||Family / Side Car||2003 - Present|
|Smart 120 A||120MFA||Family||2002 - 2004|
|Smart 120 B||120MFB||Family||2002 - 2004|
|Smart 120 C||120MFC||Family||2002 - 2004|
|Smart 120 E||120MFE||Family||2002 - 2004|
|Smart 110 S-C||110MAC||Family||2004 - Present|
|Smart 110 S-E||110MAE||Family||2004 - Present|
|Smart 110 S-E (M)||110MAE||Family||2004 - Present|
|Smart 125 S-A||125MAA||Family||2004 - Present|
|Smart 125 S-A (M)||125MAA||Family||2004 - Present|
|Smart 125 S-C||125MAC||Family||2004 - Present|
|Smart 125 S-E||125MAE||Family||2004 - Present|
|Smart 125 S-E (M)||125MAE||Family||2004 - Present|
|ST 200||200SAA||Sport||2004 - Present|
|CHEER / 4 stroke||AN110J/L/W/Z|
|KAZE / KAZE HIT / 4 stroke||AN112|
|KAZE 125 / 4 stroke||AN125|
|KAZE ZX 130 / 4 stroke||AN130|
|KSR110 / 4 stroke||KL110B|
|KLX110 / 4 stroke||LX110A|
|BOSS / 4 stroke||BN175A/E|
|LEO / LEO STAR / 2 stroke||AS120C/D|
|GTO / 2 stroke||KH125|
|KRR-ZX / 2 stroke||KR150K|
|KR-SSR / 2 stroke||KR150E *|
|VICTOR H / 2 stroke||KR150H *|
|VICTOR J / 2 stroke||KR150J *|
|VICTOR S POLICE / 2 stroke||KP150A *|
* Kawasaki types KR150E, KR150H, KR150L and KR150A will need a fuel gauge change prior to using Gasohol 95.

6. Media Reports

Phuket Gazette - Issues & Answers

Gasohol in motorbikes? (October 17, 2005)
Q. Can the new gasohol fuel be used in motorbikes?
A. Gasohol can be used in all car engines manufactured in Thailand since 1995. However, we do not recommend it be used in motorcycles. The reason for this is that cars made in Thailand in the past 10 years are all equipped with fuel injectors, not carburetors. The Fuel Research Department of the Petroleum Authority of Thailand (PTT) researched the use of gasohol only in fuel-injected engines. We are therefore not sure whether the seals in carburetors - as in pre-1995 cars and as still fitted to most motorcycles - can handle the burning of ethanol. Carburetors contain plastic parts. If these are damaged and leak, an engine fire could result. Carburetor-aspirated engines will work on gasohol but, for the reason stated, we cannot recommend its use in these engines. ( - Vichitpong Cheanthongsub, PTT Phuket Oil Depot Manager)

Gasohol in motorbikes (November 25, 2005)
Q. Why do attendants at Petroleum Authority of Thailand (PTT) gas stations not tell customers that gasohol should not be used in motorbikes or cars older than 10 years? They put it in my motorbike and my 20-year-old car without saying a word.
A. We have given the manager or owner of every gas station in Phuket a half-day of training about gasohol and which engines it is suitable for. It is the duty of the gas station manager to train staff how to serve customers.
In addition, each PTT station has been given brochures - though in Thai only - explaining to customers which engines are suitable for gasohol use. ( - Vichitpong Cheanthongsub, PTT Phuket Oil Depot Manager)

Bangkok Post - Motoring (December 9, 2005)

Gasohol's not a simple cocktail to concoct

Apparently, gasohol has achieved popularity among consumers, much to the delight of its local distributor, based on the assumption that it is 1.50 baht and 70 satang cheaper than octane 95 and octane 91 respectively. But all this is being done without educating the general public and motorists on how much fuel consumption will decrease or increase as a result, and at what rate.

First, a basic understanding of ethyl alcohol is needed. Ethyl alcohol contains about half the amount of energy of gasoline (petrol as it is called in the UK); in official terms, this is its heating value. An engine running on pure ethyl alcohol would need a fuel pipe with twice the cross-sectional area and fuel injectors twice as fast in order to deliver the same performance as a gasoline engine. And the obvious consequence is that fuel consumption would be roughly double. For a head-on comparison, then, the price of ethyl alcohol per litre must be half that of gasoline.

Therefore, mixing 10% ethyl alcohol with 90% gasoline to make gasohol results in an energy value of only 95%. Compared to a conventional gasoline engine, fuel consumption will increase by roughly 5%, depending on the size, condition and type of engine. If you want to make up for any increase in fuel consumption, the price must be reduced accordingly. For example, gasoline 95, now at 25 baht per litre, must be 1.25 baht cheaper (a worked version of this arithmetic appears at the end of this article). In reality, motorists aren't saving on fuel costs based on distance travelled; they are helping the economy and the agricultural sector. I don't need to show you the calculations. Say it's gasoline 91, which is 70 satang cheaper: the consumer will have to pay more for fuel covering the same distance, for sure. In times like this, who will want to pay more for the country?

Gasohol isn't even a standard official name, but a blending of the words benzine (called gasoline in the US) and alcohol. Hence, gasohol. It doesn't even tell us whether gasohol uses ethyl or methyl alcohol, and it doesn't indicate the proportion of the mixture. The US coined the gasohol moniker because it pioneered the 10% ethyl alcohol and 90% gasoline mixture. The 10% is only an approximate figure; the optimal rate might well be 7% or 8%. I believe that 10% is too much.

Readers, please take note: politicians responsible for our country's energy issues were naive when they announced that they will eventually increase the alcohol content in gasohol to 30%. This is ridiculous and will cause grave damage to engines. Various components will suffer from the resulting wear and tear. And, most importantly, how will the engine management system be able to compute the fuel mixture ratio and maintain the same performance at the same time?

Bangkok Post - Outlook (April 12, 2005)

FUEL CRISIS? WHAT FUEL CRISIS?

With the price of Benzene 95 at an all-time high of 22.89 baht per litre, Oranuj, an accountant, decided it was time she did something to stretch the value of her baht.
"I didn't know much about alternative fuels like gasohol, but I decided it was time I really gave them a try," she said, adding that she had been uncertain whether the new type of fuel would adversely affect her car's performance. "But it didn't. My car seems to run more smoothly and the fuel gauge drops more slowly. Perhaps it's just the good feeling that comes with it helping me save some money," she smiled. With gasohol she saves about 60 baht for 40 litres compared to the same quantity of petrol. Choke, a 36-year-old entrepreneur, was one step ahead of Oranuj. A few months ago he modified his Mercedes to allow it to run on natural gas, turning it into an NGV (Natural Gas Vehicle). "I couldn't continue spending about 3,000 baht on petrol each week. It was horrendous!" he said. "Since switching to natural gas I pay about 900 baht a week. Although the cost of the necessary modification is high, I think the investment is worth it in the long run." In the current situation, with the price of oil soaring and some experts predicting it could reach $60 (2,380 baht) per barrel, and incomes unchanged, motorists have few options for saving money. But switching to alternative fuels is one of them. Chavalit Pichalai, director of the Energy System Analysis Bureau at the Energy Policy and Planning Office (EPPO) said that turning to alternative fuels means more than savings made by individuals. It also means better air quality, fewer health risks, a boost to agriculture and huge national savings. "We are at the point of no return. Wishful thinking won't bring the price of oil down. Alternative fuels are here now, and will be the main players in the future," said Chavalit, citing the government's plan to increase the use of alternative fuels from the current 0.5 percent to eight percent of all commercial fuel usage within the next seven years. The EPPO and other energy-related agencies are coming up with ways to make cost-effective, locally produced alternative and renewable fuels possible. The so-called "soft energy" obtained from the sun, wind, biomass and domestic refuse recycling are among the potential solutions. Right now, as part of government policy, gasohol is available and becoming more popular. Currently, Banchak, PTT and Shell are offering gasohol at 700 petrol stations, mainly in Bangkok, and by the end of this year the fuel should be available at over 4,000 stations. Ultimately, gasohol is planned to replace Benzene 95 petrol within three years. The price difference of 1.50 baht per litre is a major incentive for motorists to switch to gasohol. But many motorists have only a vague idea about this cleaner alternative to petrol and are uncertain whether or not it will compromise their car's engine and performance. According to a gasohol expert from the PTT Research and Technology Institute, gasohol is still largely made up of petrol. It is a mixture of 90 percent petrol and 10 percent crop-derived ethanol (ethyl alcohol) _ 99.5 percent pure alcohol, by volume, made from cassava and sugarcane molasses. "Even better, ethanol helps a car's engine to burn fuel more completely and slowly, resulting in smoother engine running," said the PTT alternative fuel expert. The Thai Automotive Industry Association and many car manufacturers assure consumers that most cars produced since 1995, and with fuel injection systems rather than a normally-aspirated carburettor, can run safely on gasohol (with 10 percent ethanol) without requiring any adjustment or modification to the engine. 
However, there are some exceptions, and car owners are advised to check with manufacturers. Cars with carburettors, normally those made before 1995, are not suitable for running on gasohol, though. "With older cars, engine modification has not yet proved safe or effective. It's likely that owners may need to change to cars that can run on gasohol or opt for natural gas," suggested Chavalit.

Ethanol is not new to car manufacturers in Japan, Europe and North America. Canada and Brazil have used gasohol for over 25 years. Brazil in particular has developed cars that can use up to a 20 percent mixture of ethanol, and gasohol was available in the US in the 1930s. As for Thailand, Chavalit said the government plans to launch a tax incentive scheme to persuade car companies to produce cars that can run on gasohol with more than 10 percent ethanol. "The higher the percentage of ethanol used, the more the country saves on imported crude oil," said Chavalit.

Thailand imports 90 percent of its crude oil for domestic consumption, of which 60 percent is used by the transportation sector. A major cut here will boost national savings and also save at least three billion baht on the import of methyl tertiary-butyl ether (MTBE), an octane-boosting additive that started to replace lead in petrol about 10 years ago. To promote gasohol use, the government aims to keep the price of gasohol below that of Benzene 95 petrol by about 70 satang to one baht per litre.

CONCERNS ON POTENTIAL RISKS OF GASOHOL

However, Dr Kanit Wattanavichien, head of the Internal Combustion Engine Laboratory, Faculty of Engineering, Chulalongkorn University, is concerned about potential problems caused by using gasohol. "The guarantees are so vague," he said. "You can't guarantee [a car for use with gasohol] just by the year of manufacture, as many consumers may have repaired their cars or changed parts, which may not be original parts and thus cannot sustain contact with alcohol. This can cause mechanical problems, and who is going to be responsible for that? Cars can run with gasohol, but they need specially-designed parts to support the fuel."

According to him, all parts that come into contact with ethanol - such as fuel filters, pipes, the fuel tank and the fuel injection system - should be suitable for use with gasohol; otherwise, there may be problems such as rust, perishing rubber parts or clogging of the fuel injection system, reducing the car's performance. "Very few talk about the potential long-term effects. Each car manufacturer should come out to guarantee auto parts for use with gasohol and set standards for these parts to ensure quality and safety for consumers' cars," he said. "Manufacturers should also be ready to take responsibility if motorists report adverse effects caused by the use of gasohol," he suggested.

ENVIRONMENTAL AND HEALTH CONCERNS

If auto parts are qualified for use with ethanol - a "clean" fuel - then its use will produce less air pollution than petrol, making it more environmentally friendly and less of a health risk. Ethanol, made by fermenting agricultural produce, often cassava and sugar cane, is used to replace the octane-boosting additive MTBE. Although MTBE is environmentally better than lead, the US Environmental Protection Agency has classified it as a "possible human carcinogen". Laboratory animals exposed to high concentrations of MTBE have been shown to develop lymphomas and leukaemias, as well as cancers of the kidney, liver, testicles and uterus.
However, no conclusive studies have been made on its effects in humans. MTBE, when evaporated, can contaminate the environment - the atmosphere, groundwater and soil. Fortunately, a study by PhD candidate Charoensri Keepra-sertsaab from the Joint Graduate School of Energy and Environment has not yet found MTBE contamination at a dangerous level in Bangkok, despite it being used in petrol for over 10 years.

In addition to the lack of MTBE, gasohol also helps reduce hydrocarbon and carbon monoxide emissions. "Complete combustion [of gasohol] reduces carbon monoxide emissions by up to 30 percent ... therefore the air is less polluted," said an oil expert from PTT. Generally, petrol-fuelled vehicles, when running, produce carbon monoxide - a hazardous, colourless and odourless gas - as a result of incomplete combustion. Carbon monoxide poisoning can cause flu-like symptoms, such as headaches, nausea, fatigue, shortness of breath and dizziness, and in a confined space ultimately leads to death.

The use of gasohol can also help reduce the greenhouse effect, said the PTT's gasohol expert. "The production of benzene produces carbon dioxide, known to cause global warming. But the production of ethanol from agricultural produce does not emit such gas into the air. So if we reduce the production of oil by 10 percent [to be replaced by ethanol], it means we help reduce carbon dioxide emissions by 10 percent," he explained.

Opting for alternative fuels is also a boon to our farmers. "Thailand may not be a land of fossil fuels but our strength is in the fertility of our soil and in our skilled farmers. We still have plenty of vegetation that we can turn into alternative energy sources," said Chavalit. Cassava and sugarcane are the major raw materials used to produce ethanol, while palms can be used to produce an environmentally friendly and cost-effective bio-diesel. "A rising demand for these crops will help stabilise prices and guarantee farmers' incomes and work," said Chavalit.

However, many doubt whether the crop supply, which relies heavily on the climate, will be sufficient to meet the steady and growing demand. "As of now, there is no worry about that. We have quite a stock," he said. Ethanol stocks are enough to fuel future plans for producing three million litres of gasohol a day in two years. "But we have to be cautious too. Take the current drought situation for example - our poor irrigation system could lead to a major setback," cautioned Chavalit.

The popularity of natural gas may come second to gasohol. But it may be the bigger and brighter player in the near future. Why? NGV motorists say the gas is a big saver. "On average, my car runs at 60 to 65 satang per kilometre. I used to go to Pattaya, about 230km, and it cost me about 130 baht," claimed Sombat, who is among 1,300 taxi drivers now on the NGV programme. Taxis and buses are the first vehicles to be targeted by the government for conversion to NGVs, to help reduce the country's reliance on imported petrol. However, many private car owners are considering conversion to natural gas because the price is over 50 percent lower than petrol. Choke, for example, installed a natural gas tank in his Mercedes, and Nirut in his Grand Cherokee. According to government policy, the price of natural gas will be pegged at 50 percent of retail diesel prices until 2006. The price will then increase to 55 percent of Benzene 91 in 2007, and to 65 percent of Benzene 91 from 2009 onwards.
Apart from its competitive pricing, this non-renewable fuel is considered environmentally friendly. Unlike liquefied petroleum gas (LPG), the gas used mainly for cooking, natural gas is hard to ignite and emits fewer hydrocarbons and less carbon monoxide and carbon dioxide into the atmosphere than other fossil fuels. Natural gas comes mainly from the Gulf of Thailand and from Burma via the Yadana pipeline. A source in the PTT maintained that the supply should be sufficient to meet growing demand now and in the future.

LOOKING AHEAD FOR RENEWABLE FUELS

At present, the use of gasohol still relies largely on petrol supplies. Its price, although lower than regular petrol, still depends on oil and ethanol prices, which can fluctuate up and down - but usually up. Energy experts from the Joint Graduate School of Energy and Environment predict a worst-case scenario of oil prices reaching $60 (2,280 baht) per barrel. If that happens, the economy could come to a virtual standstill, they said.

However, like petrol, natural gas is a form of fossil fuel and is thus non-renewable. It is currently found in abundance in the Gulf of Thailand and in neighbouring countries like Burma. But the supply will inevitably run out in the future, and exploring for new sources raises many environmental issues. Chavalit suggests that a sustainable solution to our energy needs is to rely on energy efficiency and sources of renewable energy, such as solar power, wind power, domestic refuse (which we have in abundance) and biomass. "Now we are researching and experimenting with projects to make these renewable fuels commercially viable. But we still have technological limitations that make the cost per unit too high for consumers," he said.

As of now, consumers can help the country save energy by all means. And this will result not only in savings for themselves but for the country too, he added.
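The Bangkok Post column's heating-value arithmetic, referenced earlier, is easy to reproduce. Here is a minimal Python sketch using the columnist's round numbers (ethanol at about half of gasoline's energy per litre; these are the column's approximations, not official fuel data):

# Relative energy content of E10 versus straight gasoline,
# taking ethanol at ~50% of gasoline's heating value per litre.
ethanol_fraction = 0.10
relative_energy = (1 - ethanol_fraction) + ethanol_fraction * 0.5
print(round(relative_energy, 2))  # 0.95, i.e. roughly 5% more fuel burned per km

# Break-even discount for gasohol 95 against gasoline at 25 baht/litre:
gasoline_price = 25.0
break_even_discount = gasoline_price * (1 - relative_energy)
print(round(break_even_discount, 2))  # 1.25 baht/litre, matching the column's figure

Any discount smaller than the break-even figure means paying more per kilometre travelled, which is the columnist's point about the 70-satang gap on gasohol 91. Note that published volumetric heating values put ethanol closer to two-thirds of gasoline's, which would shrink the E10 energy penalty to roughly 3-4 percent; the break-even logic is unchanged.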
A man said to the universe:
"Sir, I exist!"
"However," replied the universe,
"The fact has not created in me
A sense of obligation."
- Stephen Crane

1. Meet the Tarweeds

There is a plant called the Livermore tarweed. Why would anyone care about a plant called Livermore tarweed? is one reasonable question you might ask. Let's just get the bad news out of the way first: the plant is drab. Its flowers are pedestrian. It is hairy, it smells of paint-thinner, and it exudes a sticky resin that, in the unlikely event of your walking through a field of Livermore tarweeds, would accompany your pants home. One person told me that sheep are rumored to rub their faces in tarweed to detox. Cows, though, find it loathsome.

The Livermore tarweed is also exceedingly rare. There are three known populations of it on earth. All three live within a few miles of each other in the Livermore Valley, a narrow east-west crease in the Diablo range on the boundary between the San Francisco Bay Area and the Central Valley. A botanist named Heath Bartosh recently spent 300 hours over three years combing the valley and nearby foothills for more, but didn't find any. While he was searching, a road was built over one of the populated fields, destroying the plants there. A figure-eight-shaped BMX course appeared in the middle of another field.

If a Livermore tarweed sprouted in your garden, you'd pull it. It is a weed, after all, and it looks like one. Since it grows instead in the wild — or at least, in an undeveloped field in the suburbs — it is much easier to ignore it. Which is precisely what the vast majority of people, me included, have done. "We will conserve only what we love," environmentalist Baba Dioum began one of the most-quoted lines in conservation history; in the not-improbable case that the Livermore tarweed goes extinct, how much could you say it was its own fault for being born unlovely?

Heath Bartosh, a friend to underdogs, has come to love the Livermore tarweed. At the tail end of summer he took me to view the plant's largest remaining population. We met in a dry field in the Livermore Valley on the morning of a 105-degree day. The rare tarweed grows behind barbed-wire fence, on the edge of a dusty area called an alkali scald. It is surrounded by more common tarweeds and by the withered carcasses of annual grasses in varying shades of brown. Suburban development, in the form of ranch-style houses and immaculately paved streets named after horse breeds, looms on the horizon. Bartosh pointed to an orange fence shimmering at the far end of the field — the boundary of a former landfill, he said. All around us at knee height were the unspectacular dots of the Livermore tarweed in bloom, cheerful yellow flotsam rising on the crests of a sea of plant death. "It's got a little public opinion problem because it's not a healthy forest," Bartosh had warned me in advance.

Bartosh wore jeans, boots, and a cherry red floral-print long-sleeve shirt. His head, neatly shaved and bare, radiated the hot sunshine. He has spent a lot of time in this particular field and, as is his nature, he was enthusiastic. "I like to refer to this as a poor man's Carrizo Plain," he said, referring to the national monument that explodes with wildflower fireworks and rare animals every spring. "You just don't have the antelopes and the sandhill cranes and stuff.
But you have at least a lot of the same genera of plants." In 2014 Bartosh petitioned the California Department of Fish and Wildlife to protect the Livermore tarweed in this field under the California Endangered Species Act. This was more audacious a request than I realized when the press release first crossed my desk: the state of California has not listed a plant since 2007, and not out of a lack of rare plants. Only six plants have made it onto the state list in the 21st century. (The federal government, only marginally better, has listed five California plants since 2005.) Last April the Fish and Game Commission granted the Livermore tarweed a hearing, and found there was "sufficient scientific information to indicate listing may be warranted." Until the commission makes a final decision this year about whether or not to list it, the Livermore tarweed enjoys the only protection it has ever had.

Livermore tarweed champions like Bartosh consider it both a keystone species and a template for managing thousands of other imperiled-yet-unprotected plants. If the tarweed petition succeeds, Bartosh says, there will be many more petitions like it. It was the idea of tarweed-as-template that drew me to that field in the Livermore Valley, but I had come to think of it more in a narrative sense. We learn to love nature by telling stories about it, and I wanted to explore the kind of stories you could tell to convince someone to love a rare, ugly plant. It seems at first glance like a Cinderella: an unappreciated tarweed and its band of rag-tag allies wins the acceptance of the wider world. ("Rare plants, rare people," Bartosh told me. "You've got a lot of weirdos out there in the plant world, for sure.") But fairy tale isn't the right genre for weeds. In the fairy tale, Cinderella is beautiful all along. The ugly duckling turns into a swan. The frog becomes a prince. The audience is always aware of the hero's true worth; the tension is just whether the other characters will figure it out, too. The Livermore tarweed is no Cinderella. It desires no transformation. It desires no appreciation. Its staunchest defenders make little effort to convince anyone of its intrinsic beauty. "Happily ever after," in this case, means our learning to accept the tarweed's fidelity to its own ugliness.

2. Heroic Tarweeds

A problem you run into right away in telling stories about rare ugly-smelly-hairy weeds is that they're difficult to anthropomorphize. Take the example of the tree as a botanical protagonist a person can comprehend: You're an individual, you're tall, and things move slowly, which is how J.R.R. Tolkien imagined the Ents in The Lord of the Rings, or how Ursula K. Le Guin imagined her heroic oak tree in "The Direction of the Road." "The ancient trees," Zach St. George wrote recently in Guernica, "always seem to inspire this measuring of their lives against ours." My three-year-old daughter likes to peer up into the canopy and conclude, "That tree is bigger than me."

The annual plant, meanwhile, is the red-shirted character in Star Trek. By design tarweeds exist as the expendables in our stories: they're not meant to be empathized with, or thought of as individuals. They're a process, a waveform. You might as well try to anthropomorphize the tides.

The Livermore tarweed is a better facilitator than it is a hero. It is probably the host plant for a species of moth in the genus Heliothodes. No one has actually been able to find or name the moth. If the Livermore tarweed goes extinct no one ever will. So far, a lepidopterist named Terry Sears has carefully searched one field for evidence of the moth's existence without success. He and other scientists haven't had a chance to search the other.
So far, a lepidopterist named Terry Sears has carefully searched one field for evidence of the moth’s existence without success. He and other scientists haven’t had a chance to search the other. One of Charles Darwin’s famous validated predictions was that scientists would find a moth with a pollinating proboscis to match a uniquely shaped Madagascar orchid; a similar potential story is happening right here, and how exciting you find it depends on how much psychological baggage you have about tarweeds not being orchids and Livermore not being Madagascar.

The tarweed is also a host for predatory insects like assassin bugs. Several years ago, a then-UC Davis graduate student named Billy Krimmel looked closely at common tarweed and saw a lot of dead insects – “carrion” – stuck to the plant. He wondered how all these dead bugs might benefit the plant, so he went through the trash cans at the UC Davis fruit fly lab, collected a bunch of dead fruit flies, and stuck the fruit flies to plants. He then watched the plants with the dead flies on them grow quickly into massive, thriving bug cities. Most of the new bugs were predators, and in addition to eating the carrion they were eating up all the herbivorous biting pests. Krimmel had an idea that maybe the plant was doing this on purpose. Its sticky hairs were serving as a “tourist trap,” he says, for all kinds of insects, and the carrion was attracting predators that would then also clean up any undesirables. Some predators seemed to have evolved with sticky plants, because they had specific adaptations to avoid getting stuck themselves. This study became Krimmel’s dissertation, and a foundation for his enduring respect for tarweeds. “It’s so intricate,” he told me. “That’s what brought me to the whole thing, these intricate stories you can miss if you’re not paying attention.”

Krimmel, who now runs a native plant landscaping business out of Davis, is one of a handful of humans who is excited to play the gene-spreading role for tarweed. He encourages clients to plant Madia elegans, a tarweed known as common madia. He agrees it’s a tough sell for a homeowner worried about what the neighbors will think. The story he’s settled on telling is, “A plant can be more than a decoration. It can have function.” Krimmel’s tolerant definition of “function” encompasses a range of answers, from “it has fruit to eat” to “it reminds me of my grandmother.” But there’s no mistaking, he says, the wide variety of services a tarweed can provide. “Tarweed has been valued by wildlife for millions of years,” he wrote in a blog post in which he explains how to grow common tarweed from seed. “It was a staple to the Pomo Indians, who ate its seeds for protein. Tarweeds are immensely drought-tolerant, beautiful, smell lovely, provide natural, chemical-free pest control, feed birds, protect pollinators, and much more. And yet, thanks to an unfortunate name, they get no respect.”

If we all told stories about the world the way the Pomo do, the tarweed would make sense to us. If we valued what wildlife values, the tarweed might be irreplaceable. Why even call it a weed in the first place, Krimmel wants to know. Just because it grows in the wild? As Ralph Waldo Emerson once put it, a weed is just “a plant whose virtues have not yet been discovered.” Furthermore, Krimmel says, tarweeds don’t smell like tar to him.
“I describe it as more a buttery lemon smell,” he says. “The name is derogatory.”

3. Displaced Tarweeds

On any California hike you take, the botanical backdrop will be constructed mainly of plants introduced by people in the last two centuries. The Livermore tarweed story is plenty familiar in this context: it’s a victim, imperiled because one essence of California is that everyone wants to move here, and everyone who moves here wants “here” to look more like wherever they came from. California is our untenanted dreamscape, and the tarweed is inherited furniture. For native plant defenders this version of the narrative is a tragedy – and the tragedy is ours; we might realize too late that the furniture was uniquely suited to the setting. Stanford biologist Paul Ehrlich opens his book Extinction with a parable that casts species as the rivets in an airplane:

A dozen rivets, or a dozen species, might never be missed. On the other hand, a thirteenth rivet popped from a wing flap, or the extinction of a key species involved in the cycling of nitrogen, could lead to a serious accident. In most cases an ecologist can no more predict the consequences of the extinction of a given species than an airline passenger can assess the loss of a single rivet.

Natural California runs on a logic apart from the one that botanical and human transplants have brought with them from abroad. Large swaths of undeveloped California are populated with all variety of tarweeds, because tarweeds have that logic in their DNA. For UC Berkeley botanist Bruce Baldwin, one of the world’s leading experts on tarweeds, the first step in teaching an appreciation for the plants is to teach an appreciation for the unique features of the state. There are something like 90 tarweed species here, all of which evolved from a single common ancestor in what botanists call the “California Floristic Province,” a global biodiversity hotspot for plants. The Livermore tarweed is limited to a particular habitat within that province, growing only in relatively rare alkali grasslands and meadows on the edge of barren scalds and vernal pools. Its setting, and the blazing late-summer day we went to look for it, are proof of its clever adaptation to the Mediterranean climate; while other plants wither away in September after months of dry heat, the Livermore tarweed blooms.

Baldwin grew up exploring around Arroyo Grande, in central California. He remembers the hills coming alive in the summer with the yellow flowers of grassland tarweed, Deinandra increscens. As a young botanist he became fascinated by the diversity of the tarweed tribe, and by its evolutionary origins in California. The common ancestor of the tarweeds probably lived in cooler, mid-elevation spots, like its sister group, the medicinal plant genus Arnica. As California’s climate became drier and warmer, the tarweeds moved downslope and diversified into all the different habitats they enjoy today. Perhaps because that radiation occurred recently, though, tarweeds are notoriously difficult to distinguish from each other and identify in the field. Baldwin teaches weekend-long courses solely on tarweed identification, and he says botanists are always hungry for more. Thin and soft-spoken, Baldwin is a legend in California botany – “our generation’s Jepson,” Bartosh called him.
Baldwin is the curator of the 96,000-specimen, native-plant-centric Jepson Herbarium, an editor of The Jepson Manual, the definitive guidebook to California plants, and the first to formally describe 74 plant species, among them the Livermore tarweed. Twenty years ago he demonstrated the value of using a particular DNA region to understand evolutionary relationships between closely related plants, a technique that is still widely used. His proof of concept was the tarweeds, making tarweeds, “like, fruit fly special,” Krimmel told me, as a foundational genetic case study. In 1992 Baldwin used his new approach to confirm an earlier finding that one of the most charismatic conservation cases on the planet, the Hawaiian silversword, is a direct descendant of California tarweed.

“It’s not so much about trying to sell people on whether or not that’s a beautiful plant, which could be a tough sell,” Baldwin said. “Looking at them microscopically, under a dissecting scope, there are a lot of different features that excite students.” Baldwin doesn’t have to make a moral case for conservation of the Livermore tarweed, because he has seen its DNA, fit it into the universe, and found it worthy. “We don’t have a big emotional connection to plants, as much as a furry critter,” Baldwin says. “But it’s important to realize that plants in some ways are more foundational in the ecosystem than furry critters.”

4. Disrespected Tarweeds

Why not just list the tarweed as endangered and let the scientists tell their stories, while we resume blissful ignorance? In part because the federal Endangered Species Act provides dramatically weaker protections for plants than it does for animals. Unlike animals, an endangered plant receives protection from the law only on federal land. If the very last federally endangered Livermore tarweed grows on your private property, you are free to bulldoze it to build a better BMX course. The government, one recent report says, spends 25 times more on the recovery of endangered animals than it does on endangered plants. Such unequal treatment, says California Native Plant Society soil ecologist Emily Roberson, might trace back to English common law: animals move across the land, so they’re the property of the state; plants grow on the land, so they’re private property.

The California Endangered Species Act is stronger for plants – it’s just exceedingly difficult to actually get one on the list. When I mentioned to Roberson that the Livermore tarweed was under consideration for state listing, she guffawed. “I can’t remember the last time that happened,” she said.

Since 2002, Roberson has directed a Native Plant Conservation Campaign meant to remedy the status of plants as “second class conservation citizens.” Her two main goals are to make the Endangered Species Act stronger for plants, and to increase funding for federal plant staff. “There is no one to whom I have explained this problem with the Endangered Species Act that has said anything other than, ‘That’s crazy,’” Roberson told me. “Congressional staff, scientists, environmentalists who are not plant people, climate activists, wilderness and wildlife people, even conservatives. It makes no sense to anybody.” Anybody might appreciate Roberson’s campaign. And yet, Roberson says, no one expects action. The campaign is more an attempt to succor botanists as they wait for cultural change.
“Plants in general play a very negligible role in the public discourse about extinction,” says Ursula Heise, a professor at UCLA’s Institute of the Environment who’s working on a book about cultural values and extinction. “Biodiversity conservation is at bottom a cultural and political issue, not a scientific one.”

It’s not out of the question that people will embrace a new narrative and learn to appreciate non-charismatic plants, UC Santa Cruz botany professor emeritus Lincoln Taiz says. After all, cultures change. Drought has convinced gardeners to forgo beautiful flowers in favor of “more humble” native plants that cope with less water, he said. Rachel Carson convinced people that DDT didn’t epitomize modern progress. Taiz recalled, more to the point, a Volkswagen ad campaign from the 1960s that deliberately embraced ugliness, with a photo of the front of the van and the tagline “A face only a mother could love.” “We don’t want to make enemies of beautiful flowers,” Taiz told me, “but to sell the tarweed as a kind of Volkswagen.” (For the record, “The tarweed is kind of a pretty plant,” Taiz said.)

If stories are our way of understanding the world, our lack of an easy story to tell means we also lack some important understanding of what’s happening. It will be hard to drive cultural change – a change imperative for the thousands of critically rare plants and animals hanging on around the edges of human society – without the ability to move people. “There’s something insidious about the narrative template,” Heise said. “Narratives are indispensable tools in relating to nature, but they also block out certain parts of it.”

5. Happy Ever After Tarweeds

When Heath Bartosh and his wife were first dating, he wanted to convince her of the merit of his love for California native plants. He thought about it carefully and selected, for her introduction to botanizing, Fritillaria – “the chocolate lily.” It is “kind of weird” to go looking for native plants, Bartosh agrees. But you just need to get people out in the field. Once you’re there, he says, a plant – even the chocolate lily – fades into its setting. “Being out there, it’s more about seeing a place,” Bartosh told me. “It’s more about thinking about time, and looking at the plants and thinking about the time they’ve been there evolving, or how they got there.”

The Livermore Valley has been a basin for millions of years, since the Miocene Epoch, and scientists think the Livermore tarweed diverged from its nearest relative in the more recent Pliocene. In other words, several million years ago the Livermore tarweed first took root as a species in the same region where it lives today – and never left. Mount Diablo rose and eroded, an ice age came and went, a valley of gentle hills flooded and became the San Francisco Bay, people arrived, and arrived again with cows, and always in the background the short-lived Livermore tarweed bloomed anonymously. To connect with the tarweed is to connect with that staggering amount of time. To protect it is to protect the result of a process beyond our emotional comprehension.

The Germans, Ursula Heise told me, don’t have a direct equivalent to the Endangered Species Act. Instead they have an act animated by the preservation of Landschaft, landscape – which, Heise says, includes human-transformed landscape.
“You protect species so as to preserve these landscapes,” she says. Norway has a “Nature Diversity Act” that protects rare habitats in addition to rare species. Ecuador and Bolivia have tied biodiversity to indigenous cosmology and enshrined conservation in their constitutions. Native Californians encouraged tarweed growth as part of a productive grassland ecosystem. You might always find the Livermore tarweed itself unappealing yet appreciate it as a functioning piece of an exceedingly rare landscape.

“I guess I ended up developing a soft spot for tarplants,” Bartosh says. “When everything else out there is dry, you can come into places where you see big swaths of yellow. I’d love to see the city adopt Livermore tarplant as its official plant.”

Which brings up one last story I’d like to tell you about the Livermore tarweed. That field Bartosh and I have been standing in for an hour – the city owns it. Livermore controls the land on which grows the last stand of the Livermore tarweed. So what’s the problem? Bartosh laughs. The city wants to dig up the land so it can receive mitigation credit for creating wetland habitat for threatened red-legged frogs.

Postscript: The California Fish and Game Commission voted on Aug. 25, 2016 to designate the Livermore tarweed as an endangered species. Six months later the city of Livermore declared the Livermore tarplant its official city flower.
It is always extremely difficult to be objective about the life of the founder of a belief system such as Muhammad, as his personality is inevitably blurred by an aura of the miraculous. The early biographers were preoccupied, not with historical facts, but with glorifying in every way the memory of one they believed to have been a Messenger of Allah. Consequently, there is a rich accretion of myth and miracle, mysterious portents and heavenly signs, of copying and plagiarizing from other religious beliefs and traditions. It is, in fact, the propaganda of an expanding faith.

One of the sources that we frequently quote from is Ibn Ishaq's biography of Muhammad, so we decided that our readers should get to know something about such a prolific and honest author. The original book has not been found, but its contents were traced through other contemporaneous authors who copied his book, such as Ibn Hisham.

Early Muslim historical writing was primarily concerned with the biography of Muhammad (Sirat Rasul Allah) and the first wars of Islam (Al-Maghazi). Muhammad Ibn Ishaq related the first known biography (Sira). This work no longer exists in its original form, but has been preserved in at least two recensions, one of them authored by Ibn Hisham (with many revisions); material from it also survives through Al Bakka'i, al Tabari, Yunus b. Bukayr, al Athir, the Al Qarawayoun (Fez, Morocco) manuscript, etc. Thus Ibn Hisham's work represents one of the best existing authorities on the life of Muhammad.

Behind the legendary Muhammad there lie very important stories that give us a more authentic picture of Muhammad the man, with all his ambitions, fears, anger, lust, jealousy, love, revenge, deception, aggression, etc. Although very little is known about his early years - the first certain date being that of the migration from Mecca to Medina, called the Hijra, which took place in AD 622 - it is still possible to build up the events of his real life, as distinct from his symbolic one, based upon the reports of the SIRA.

The most comprehensive biography of Muhammad, called "Sirat Rassool Allah," was written decades after his death by Muhammad Ibn Ishaq (d. 767). It is a fact that there exist no documents describing Muhammad and his formative years contemporaneous with him. All the 'relevant details' were written with the benefit of hindsight and with the purpose of creating a distorted image of a man of almost mythic and superhuman qualities: sinless (Isma); divinely inspired; faultless; fearless; political genius, etc. His life had to be made perfect to reflect the alleged miracle of the Quranic 'revelations'. It became the compulsory example (Sunna), a way of daily life to be emulated by Muhammadans in every detail, since it was the copy of the most perfect man, Muhammad. This doctrine has fixed the mentality and traditions of the Muhammadan Muslims in a time warp, forever stuck in Muhammad's seventh-century Arabia.

Ibn Ishaq was the first and nearest in time to the stories about Muhammad, since he wrote the biography about 100 years after his death; unlike many later authors of Muhammad's alleged sayings and doings, gleaned eighth- or tenth-hand from chains of reporters almost 200 to 300 years after his death.

Muhammad ibn Ishaq ibn Yasar was born in Madina about 75 years after the death of Muhammad in 632/3 AD. His grandfather, Yasar, fell into the hands of Khalid ibn al Walid when he captured A'yn al Tamr in 634 AD, having been held there as a prisoner by the Persian king. He was freed when he accepted Islam.
Yasar's children Ishaq and Musa became traditionists, thus paving the way for the author's vocation even before he reached manhood. Muhammad Ibn Ishaq was associated with the second generation of traditionists, called Al Tabieen / Followers, who saw some of the Sahabah / Companions but not Muhammad; notably al Zuhri, Asim b Umar b Qatada, and Abdullah b Abu Bakr. All these authors were so near to the events they recorded that they needed no ISNAD / Chain of Transmitters to cite their authority or to prove the veracity of their collections. ISNAD was needed much later on, when the traditions were several centuries removed from the death of Muhammad and tens of thousands of MADE to ORDER stories were concocted to fit the agenda of one sect or ruler or another.

His study of Muhammad's Sunna (Muhammad's alleged deeds, sayings, and traditions) must have started very early, for at the age of 30 he went to Egypt to attend the lectures of Yazid b abu Habib. There, he was regarded as an authority, for the same Yazid, his tutor, afterwards related traditions on Ibn Ishaq's authority. On his return to Madina, he went on with the collection and arrangement of the material he had collated. Al Zuhri, who was in Madina in 123 AH, is reported to have remarked that "Madina would NEVER lack ILM (religious knowledge) as long as Ibn Ishaq was there," and he eagerly gathered from him the details of Muhammad's wars.

Although Ibn Ishaq's Sira was preceded by several Maghazi / books of Conquests of unknown dates, nonetheless there is no doubt that his biography of Muhammad had no serious rival. Muhammad bin Ishaq (who died in 150 or 151 AH) is unquestionably the principal authority on the Sira (Muhammad's biography) and Maghazi (Conquests) literature. Every writing after him has depended on his work, which, though lost in its entirety, has been immortalised in the wonderful, extant abridgement of this pioneering work by Abu Muhammad 'Abd al-Malik bin Hisham, or Ibn Hisham (d. 833).

Ibn Ishaq's work is notable for its rigorous methodology, and its literary style is of the highest standard of elegance and beauty. This is hardly surprising when we recall that he was an accomplished scholar not only in the Arabic language but also in the science of hadith. For this reason, most of the isnad (chains of narration) that he gives in his Sira are also to be found in the authentic books of hadith. Ibn Ishaq, like Bukhari and Muslim later on, travelled very widely in the Muslim world in order to authenticate the isnad of his hadith. It is reported that he saw and heard Saeed bin Al-Musayyib, Aban bin Uthman bin Affan, Az-Zuhri, Abu Salamah bin Abdur-Rahman bin Awf, and Abdur-Rahman bin Hurmuz Al-Araj. It is also reported that Ibn Ishaq was the teacher of the following outstanding authorities, among others:

(a) Yahya bin Saeed Al-Ansari
(b) Sufyan Ath-Thawri
(c) Ibn Jurayh
(d) Shu'bah bin Al-Hajjaj
(e) Sufyan bin Uyainah
(f) Hammad bin Zaid

The second most authoritative book on Sira is the Al-Maghazi / Conquests by Muhammad bin Umar Al-Waqidi Al-Aslami (who lived from 130 to 207 AH and is buried in Baghdad). This book was widely read in various parts of the Muslim world. The third authoritative work on Sira is Ibn Sad's Tabaqat-ul-Kubara (nine volumes). Ibn Sad was both the student and the scribe/secretary of Al-Waqidi. The quality and scholarly excellence of his Tabaqat-ul-Kubara say a great deal about the academic competence of his teacher and patron.
Ahmad bin Jafar bin Wahb (died 292 AH), called Al-Yaqubi: his work is unique for its examples of Muhammad's sermons, not to be found elsewhere, especially those containing instruction and admonition.

Ahmad bin Yahya bin Jabir (died 279 AH), called Al-Baladhuri: the work of this early historian is valuable for the texts it contains of certain important agreements which Muhammad concluded with some groups and individuals; among others, the texts of his agreements with the Christians of Najran, his agreement with the people of Maqna, and his book to Al-Mundhir bin Sawi and to Akaydar Dawmah.

Ibn Jareer (died 310 AH), called Al-Tabari, authored a monumental world history, Tareekh-ul-Umam wal Muluk. Al-Tabari was not merely a historian, but also an unrivalled authority on the Arabic language and grammar, on hadith and fiqh, and on the tafseer (exegesis) and interpretation of the Quran. Evidence of the excellence of his scholarship, his prodigious and untiring intellectual genius, is provided by his major works, which run into many lengthy volumes each.

Abul-Hasan Ali bin Al-Husain bin Ali Al-Masudi (died 346 AH) is a very well-known Arab historian, a descendant of one of the Companions of Muhammad, Abdullah bin Masood, and the author of two books on history, including long sections on Sira, both mentioned above.

All the above LUMINARIES in Islamic history and exegesis refer to Ibn Ishaq's Sira in one way or another; a TESTAMENT to his authority and the veracity of his reporting.

I would like our readers to know that a lot of the information that I record in this chapter is gleaned from the most outstanding translation of Ibn Ishaq's SIRA by Alfred Guillaume in his monumental "The Life of Muhammad," which should be a MUST read for any of you, whether so-called Believers or Unbelievers. I have quoted from this book in many instances in my thesis.

The Arabic text was published at Gottingen in three volumes by F. Wustenfeld, 1858-60, and a German translation by G. Weil, The Historian of the Caliphate, appeared at Stuttgart in 1864. It is this latter work which is perhaps better known in the West, and it is now more conveniently read in the English translation of the late Alfred Guillaume.

Alfred Guillaume's English translation is a masterful attempt at the reconstruction of Ibn Ishaq's work. This was produced largely by translating what Ibn Hisham reports from Ibn Ishaq, adding quotations from the latter that are included by al-Tabari (mainly the material that Ibn Hisham omitted), and placing Ibn Hisham's comments on Ibn Ishaq's work at the end of the translation in a section called "Ibn Hisham's Notes" (pp. 691-798). The page numbers suggest that Ibn Hisham's comments constitute about 15% of his recension of Ibn Ishaq's work.

Ibn Hisham's (d. 833) work contains information concerning the creation of the world, Biblical Prophets, and the advent of Islam. The actions and deeds of Muhammad are meticulously noted, and his battles described in great detail. Ibn Hisham's Sirat Muhammad rasul Allah is considered by Dunlop as one of the best existing authorities on the life of Muhammad.

We do not know if Ibn Ishaq ever wrote a "book" in the ordinary sense of books. What has come down to us seems to be from the notes taken by his pupils. The standard source is now the "Sirat al-Nabi" ("Life of the Prophet") of Abd al-Malik ibn Hisham (died 830, 835, or perhaps much later), which is a systematic presentation of Ibn Ishaq's material with a commentary by Ibn Hisham.
This should be supplemented by the extracts in al-Tabari and other authors. For example, the story about the Satanic Verses was not reported by Ibn Hisham, but it was repeated by al-Tabari and others. Ibn Hisham makes no secret, in the Introduction to his book, of the fact that he omitted some of the material Ibn Ishaq included that reflected negatively upon Muhammad's character. The part of Ibn Hisham's work due to Ibn Ishaq is now usually called the "Sirat Rasul Allah" ("Life of Allah's Messenger").

Ibn Ishaq's work originally consisted of three almost equal parts. The first was a history of the world up until the beginning of Muhammad's ministry. The second was an account of Muhammad's work in Mecca, and the third was an account of his work in Madina and his death.

For the first part, the Mubtada' (Mabda'), one has to go to the Tafsir and the History; it was actually based upon the Hebrew Bible, beginning from Genesis (In the Beginning / Mubtada'), the Creation story. Unfortunately, Ibn Hisham was not interested in these stories and jumped directly to the story of Abraham, presumed by the followers of Muhammad to be the ancestor of Muhammad. Much of this part is lost. What remains is based on Arabic traditions and the Jewish scriptures. Al Azraqi, for example, quotes some passages from the missing section in his Akhbar Mecca.

The second part, which is often called al-Mab'ath, begins with the birth of Muhammad and ends when the first fighting from his base in Madina takes place. It is a collection of prophetic hadiths, especially about the events behind the revelation of one or another verse in the Quran (the division between Meccan and Madinan suras), lists of significant persons (for example, the earliest Muslims), and poetry. Ibn Ishaq does not attempt a chronology, but he does arrange his material in a logical sequence.

The third part consists of a careful month-based chronology (which falls apart at the end), and the campaigns / Maghazi (Ibn Ishaq counts 27, but he stretches the meaning of campaign) made by Muhammad from his base of operations in Madina are carefully embedded in this chronology. But before this campaign literature there is a copy of the document called the Constitution of Madina and an extensive section of Tafsir and Hadiths. Tafsir also occurs several times embedded in the campaign literature. The campaign literature itself includes extensive poetry and lists of persons involved, as well as descriptions of battles or of why no battle took place. The Tafsir is among the earliest in Islam, and the American Quran scholar John Wansbrough classifies it as Haggadic, his most primitive subset of Tafsir; that is, it is primarily devoted to passing on a narrative. The campaign literature is followed by an appendix describing campaigns made by other Muslims under Muhammad's directions and a relatively brief account of his death and succession by Abu Bakr.

There are about 600 Hadiths in Ibn Ishaq's collection, and most of them have what appear to be acceptable isnads. But the later hadith collectors rarely used any material from the Sira (because of sectarian differences). There are almost as many poems as hadiths, but later commentaries tend to view them as worthless because they feel so many of them were forged (by Muslims).

Anyone having read and studied this book must come out ENLIGHTENED and AFFECTED by the depth of detail and the honesty of its reporting. It is in Ibn Ishaq's Sira that BOTH the Night Journey and the Satanic Verses controversies are first reported.
Ibn Ishaq's reporting was so honest that Ibn Hisham had to DELETE several stories that were too offensive to the character of Muhammad. Several fundamentalist Muhammadan exegetes condemned Ibn Ishaq for having included reports gleaned from first- or second-generation descendants of converted Jews and Christians. They condemned him because they deliberately perpetuated the falsehood that the Jews and Christians of Arabia were foreign nationals, when in fact they were aboriginal and indigenous Arabians who had converted to the monotheistic religions without force or coercion, unlike those who were forced or terrorised into following the CULT of Muhammad. This hatred of the People of the Book is institutionalized in Muhammad's Quran in numerous unforgiving and unambiguous verses.

Ibn Ishaq's reports are as objective as anyone of his times could have been, and the proof resides in the fact that he mentions numerous objective and enlightening stories about Jews and Christians that are UNBIASED and show them in a singularly benevolent manner, contradicting their distorted and hatemongering portrayals in Muhammad's Quran and by the later Muhammadan theologians and the Hadiths (pages 10-16, for example).

Although he was the nearest of the traditionists to the events that pertained to the time of Muhammad, and hence to the 'truthfulness' of what he wrote, several of the Muhammadan theologians reject his authority for several reasons:

(a) That he was a Shi'i favouring Ali over all the other contenders to the Khilafa.

(b) That he held the view that Man has free will, which is of course contrary to the Quranic perception.

(c) That his Isnads were defective, ie not 'iron tight' by naming all the reporters, which of course is a totally irrelevant objection, since he was reporting on events that were so recent that they did not require a chain of reporters. He was, after all, no different from all the other traditionists of his own period, since they too did not require Isnad to 'prove' their reports.

(d) He used reports of traditions gathered from Jewish and Christian sources, which is unacceptable in the perverted psyche of fundamentalist Muhammadans.

(e) He was generally so balanced in his views and reports that he had several very complimentary reports upon the Jews of Arabia, which is again held against him by the fundamentalists, who would rather have only one-sided and extremely complimentary reports upon Muhammad and all his followers.

(f) Most important of all, his report about Laylat al Qadr (the first 'revelation') contradicts all the later versions that were DOCTORED and ALTERED to suit the diverse SECTARIAN conditions.

(g) Two other important and significant reports that diminish the concepts of infallibility and sagacity of Muhammad are revealed in the versions given by Ibn Ishaq.

On the other hand, among the most important Muhammadan traditionists who thought very highly of him were:

I. Al Zuhri: "Knowledge will remain in Medina as long as Ibn Ishaq lives"
II. Abu Zur'a: "When tested by traditionists he was found truthful"
III. Abu Hatim: "His traditions are copied by others"
IV. Al Shaf'i: "He who wants to study al Maghazi deeply must consult Ibn Ishaq"
V. Asim b Umar b Qatada: "Knowledge will remain among men as long as Ibn Ishaq lives"
VI. Ahmad b Hanbal: "Excellent in tradition"

It is not difficult to understand why the name of Ibn Ishaq has been held in low esteem by the Classical Traditionists of the Third Islamic Century.
They were reluctant, and in a total state of intellectual denial, to accept Muhammad's portrayal by Ibn Ishaq, which is, to put it charitably, extremely unfavourable and unpleasant. When one truly STUDIES Ibn Ishaq's biography of Muhammad, it reveals the degenerate character of a man who is utterly without mercy or compassion; he incites his followers to commit mass murder and assassinations against individuals - invariably UNARMED and in the depth of night - and against tribes who either displeased him or opposed him, or out of jealousy, or because he wanted to acquire their wealth and women. He allowed and encouraged his gullible, superstitious, and generally illiterate followers to break every single rule of decency and chivalry to gain his ends. Their lives were totally dispensable. They were cunningly, deviously, and inhumanly misled to their deaths with the dissolute Muhammad's promises of Eternal Sexual and Sensual Pleasures in Muhammad's WHOREHOUSE version of Paradise, as long as they fought and died for his 'belief system'.

This abysmal picture of a depraved Muhammad was not painted by, and cannot be dismissed as, the rantings of an enemy of 'Islam'. That is why, in spite of the fact that the Classical Traditionists did their worst to ignore his work, they also did not attack or try to discredit those portions of the biography that showed Muhammad in the most disagreeable manner.

In conclusion, one must insist that if Ibn Ishaq is found wanting because of the lack of Isnads in his reports, then one must cast aspersions on ALL the earliest reports, which themselves also were without Isnads, rendering the whole field of the earliest traditions null and void and making the later ones even more suspect.
• Pain is an unpleasant sensory and emotional experience.

• Acute pain results from disease, inflammation, or injury to tissues and comes on suddenly. The cause of acute pain can usually be diagnosed and treated, and the pain is confined to a given period of time and severity.

• Chronic pain persists over a longer period than acute pain and is resistant to most medical treatments. It often causes severe problems for patients.

• There are hundreds of types of pain. Common pain syndromes include arthritis, back pain, central pain syndrome, cancer pain, headaches, head and facial pain, muscle pain, myofascial pain syndromes, neuropathic pain, reflex sympathetic dystrophy syndrome (RSDS), sciatica, shingles and other painful disorders of the skin, sports injuries, spinal stenosis, surgical pain, temporomandibular disorders, trauma, and vascular disease or injury.

• No test can measure pain intensity, no imaging device can show pain, and no instrument can locate pain precisely. The patient's description of the type, duration, and location of pain may be the best aid in diagnosis.

• Tests used to determine the cause of pain include electrodiagnostic procedures such as electromyography (EMG), nerve conduction studies, and evoked potential (EP) studies; imaging, especially magnetic resonance imaging (MRI); neurological examination; and X-rays.

• The goal of pain management is to improve function, enabling individuals to work, attend school, or participate in day-to-day activities.

• The most common treatments for pain include analgesic pain relievers (aspirin, acetaminophen, and ibuprofen), acupuncture, anticonvulsants, antidepressants, migraine headache medicines, biofeedback, capsaicin, chiropractic, cognitive and behavioral therapy, counseling, COX-2 inhibitors, electrical stimulation, exercise, hypnosis, lasers, magnets, nerve blocks, opioids, physical therapy and rehabilitation, R.I.C.E. (Rest, Ice, Compression, and Elevation), and surgery.

• It is believed that pain affects men and women differently. This may be due to hormones, psychology, and culture.

Pain is a feeling triggered in the nervous system. Pain may be sharp or dull. It may come and go, or it may be constant. You may feel pain in one area of your body, such as your back, abdomen, or chest, or you may feel pain all over, such as when your muscles ache from the flu.

Pain can help diagnose a problem. Without pain, you might seriously hurt yourself without knowing it, or you might not realize you have a medical problem that needs treatment. Once you take care of the problem, the pain usually goes away. However, sometimes pain goes on for weeks, months, or even years. This is called chronic pain. Sometimes chronic pain is due to an ongoing cause, such as cancer or arthritis. Sometimes the cause is unknown. Fortunately, there are many ways to treat pain. Treatment varies depending on the cause of pain. Pain relievers, acupuncture, and sometimes surgery are helpful.

SOURCE: NIH: National Institute of Neurological Disorders and Stroke

It may be the fiery sensation of a burn moments after your finger touches the stove. Or it's a dull ache above your brow after a day of stress and tension. Or you may recognize it as a sharp pierce in your back after you lift something heavy. It is pain. In its most benign form, it warns us that something isn't quite right, that we should take medicine or see a doctor. At its worst, however, pain robs us of our productivity, our well-being, and, for many of us suffering from extended illness, our very lives.
Pain is a complex perception that differs enormously among individual patients, even those who appear to have identical injuries or illnesses. In 1931, the French medical missionary Dr. Albert Schweitzer wrote, "Pain is a more terrible lord of mankind than even death itself." Today, pain has become a universal disorder, a serious and costly public health issue, and a challenge for family, friends, and health care providers who must support the individual suffering from the physical and emotional consequences of pain.

Ancient civilizations recorded on stone tablets accounts of pain and the treatments used: pressure, heat, water, and sun. Early humans related pain to evil, magic, and demons. Relief of pain was the responsibility of sorcerers, shamans, priests, and priestesses, who used herbs, rites, and ceremonies as their treatments.

The Greeks and Romans were the first to advance a theory of sensation, the idea that the brain and nervous system have a role in producing the perception of pain. But it was not until the Middle Ages and well into the Renaissance, the 1400s and 1500s, that evidence began to accumulate in support of these theories. Leonardo da Vinci and his contemporaries believed that the brain was the central organ responsible for sensation. Da Vinci also developed the idea that the spinal cord transmits sensations to the brain.

In the 17th and 18th centuries, the study of the body, and of the senses, continued to be a source of wonder for the world's philosophers. In 1664, the French philosopher Rene Descartes described what is still called a "pain pathway." Descartes illustrated how particles of fire, in contact with the foot, travel to the brain, and he compared pain sensation to the ringing of a bell.

In the 19th century, pain came to dwell under a new domain, science, paving the way for advances in pain therapy. Physician-scientists discovered that opium, morphine, codeine, and cocaine could be used to treat pain. These drugs led to the development of aspirin, to this day the most commonly used pain reliever. Before long, anesthesia, both general and regional, was refined and applied during surgery.

"It has no future but itself," wrote the 19th-century American poet Emily Dickinson, speaking about pain. As the 21st century unfolds, however, advances in pain research are creating a less grim future than that portrayed in Dickinson's verse. That future includes a better understanding of pain, along with significantly improved treatments to keep it in check.

What is pain? The International Association for the Study of Pain defines it as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage." It is useful to distinguish between two basic types of pain, acute and chronic, which differ significantly.

• Acute pain, for the most part, results from disease, inflammation, or injury to tissues. This type of pain generally comes on suddenly, for example, after trauma or surgery, and may be accompanied by anxiety or emotional distress. The cause of acute pain can usually be diagnosed and treated. The pain is self-limiting; that is, it is confined to a given period of time and severity. In some rare instances, it can become chronic.

• Chronic pain is widely believed to represent disease itself. It can be made much worse by environmental and psychological factors. Chronic pain persists over a longer period than acute pain and is resistant to most medical treatments. It can, and often does, cause severe problems for patients.
A person may have two or more co-existing chronic pain conditions. Such conditions can include chronic fatigue syndrome, endometriosis, fibromyalgia, inflammatory bowel disease, interstitial cystitis, temporomandibular joint dysfunction, and vulvodynia. It is not known whether these disorders share a common cause.

Hundreds of pain syndromes or disorders make up the spectrum of pain. There are the most benign, fleeting sensations of pain, such as a pinprick. There is the pain of childbirth, the pain of a heart attack, and the pain that sometimes follows the amputation of a limb. There is also pain accompanying cancer and the pain that follows severe trauma, such as that associated with head and spinal cord injuries. A sampling of common pain syndromes follows, listed alphabetically.

Arachnoiditis is a condition in which one of the three membranes covering the brain and spinal cord, called the arachnoid membrane, becomes inflamed. A number of causes, including infection or trauma, can result in inflammation of this membrane. Arachnoiditis can produce disabling, progressive, and even permanent pain.

Arthritis. Millions of Americans suffer from arthritic conditions such as osteoarthritis, rheumatoid arthritis, ankylosing spondylitis, and gout. These disorders are characterized by joint pain in the extremities. Many other inflammatory diseases affect the body's soft tissues, including tendonitis and bursitis.

Back pain has become the high price paid by our modern lifestyle and is a startlingly common cause of disability for many Americans, including both active and inactive people. Back pain that spreads to the leg is called sciatica and is a very common condition (see below). Another common type of back pain is associated with the discs of the spine, the soft, spongy padding between the vertebrae (bones) that form the spine. Discs protect the spine by absorbing shock, but they tend to degenerate over time and sometimes rupture. Spondylolisthesis is a back condition that occurs when one vertebra extends over another, causing pressure on nerves and, therefore, pain. In addition, damage to nerve roots (see Spine Basics in the Appendix) is a serious condition, called radiculopathy, that can be extremely painful. Treatment for a damaged disc includes drugs such as painkillers, muscle relaxants, and steroids; exercise or rest, depending on the patient's condition; adequate support, such as a brace or better mattress; and physical therapy. In some cases, surgery may be required to remove the damaged portion of the disc and return it to its previous condition, especially when it is pressing on a nerve root. Surgical procedures include discectomy, laminectomy, and spinal fusion.

Burn pain can be profound and poses an extreme challenge to the medical community. First-degree burns are the least severe; with third-degree burns, the skin is lost. Depending on the injury, pain accompanying burns can be excruciating, and even after the wound has healed patients may have chronic pain at the burn site.

Central pain syndrome: see "Trauma" below.

Cancer pain can accompany the growth of a tumor, the treatment of cancer, or chronic problems related to cancer's permanent effects on the body. Fortunately, most cancer pain can be treated to help minimize discomfort and stress to the patient.

Headaches affect millions of Americans. The three most common types of chronic headache are migraines, cluster headaches, and tension headaches. Each comes with its own telltale brand of pain.
• Migraines are characterized by throbbing pain and sometimes by other symptoms, such as nausea and visual disturbances. Migraines are more frequent in women than in men. Stress can trigger a migraine headache, and migraines can also put the sufferer at risk for stroke.

• Cluster headaches are characterized by excruciating, piercing pain on one side of the head; they occur more frequently in men than in women.

• Tension headaches are often described as a tight band around the head.

Head and facial pain can be agonizing, whether it results from dental problems or from disorders such as cranial neuralgia, in which one of the nerves in the face, head, or neck is inflamed. Another condition, trigeminal neuralgia (also called tic douloureux), affects the largest of the cranial nerves (see The Nervous Systems in the Appendix) and is characterized by a stabbing, shooting pain.

Muscle pain can range from an aching muscle, spasm, or strain to the severe spasticity that accompanies paralysis. Another disabling syndrome is fibromyalgia, a disorder characterized by fatigue, stiffness, joint tenderness, and widespread muscle pain. Polymyositis, dermatomyositis, and inclusion body myositis are painful disorders characterized by muscle inflammation. They may be caused by infection or autoimmune dysfunction and are sometimes associated with connective tissue disorders, such as lupus and rheumatoid arthritis.

Myofascial pain syndromes affect sensitive areas known as trigger points, located within the body's muscles. Myofascial pain syndromes are sometimes misdiagnosed and can be debilitating. Fibromyalgia is a type of myofascial pain syndrome.

Neuropathic pain can result from injury to nerves, either in the peripheral or central nervous system (see The Nervous Systems in the Appendix). Neuropathic pain can occur in any part of the body and is frequently described as a hot, burning sensation, which can be devastating to the affected individual. It can result from diseases that affect nerves (such as diabetes) or from trauma, or, because chemotherapy drugs can affect nerves, it can be a consequence of cancer treatment. Among the many neuropathic pain conditions are diabetic neuropathy (which results from nerve damage secondary to vascular problems that occur with diabetes); reflex sympathetic dystrophy syndrome (see below), which can follow injury; phantom limb and post-amputation pain (see Phantom Pain in the Appendix), which can result from the surgical removal of a limb; postherpetic neuralgia, which can occur after an outbreak of shingles; and central pain syndrome, which can result from trauma to the brain or spinal cord.

Reflex sympathetic dystrophy syndrome, or RSDS, is accompanied by burning pain and hypersensitivity to temperature. Often triggered by trauma or nerve damage, RSDS causes the skin of the affected area to become characteristically shiny. In recent years, RSDS has come to be called complex regional pain syndrome (CRPS); in the past it was often called causalgia.

Repetitive stress injuries are muscular conditions that result from repeated motions performed in the course of normal work or other daily activities. They include:

• writer's cramp, which affects musicians, writers, and others;

• compression or entrapment neuropathies, including carpal tunnel syndrome, caused by chronic overextension of the wrist; and

• tendonitis or tenosynovitis, affecting one or more tendons.
Sciatica is a painful condition caused by pressure on the sciatic nerve, the main nerve that branches off the spinal cord and continues down into the thighs, legs, ankles, and feet. Sciatica is characterized by pain in the buttocks and can be caused by a number of factors. Exertion, obesity, and poor posture can all cause pressure on the sciatic nerve. One common cause of sciatica is a herniated disc (see Spine Basics in the Appendix).

Shingles and other painful disorders affect the skin. Pain is a common symptom of many skin disorders, even the most common rashes. One of the most vexing neurological disorders is shingles, or herpes zoster, an infection that often causes agonizing pain resistant to treatment. Prompt treatment with antiviral agents is essential to arrest the infection, which, if prolonged, can result in an associated condition known as postherpetic neuralgia. Other painful disorders affecting the skin include:

• vasculitis, or inflammation of blood vessels;

• other infections, including herpes simplex;

• skin tumors and cysts; and

• tumors associated with neurofibromatosis, a neurogenetic disorder.

Sports injuries are common. Sprains, strains, bruises, dislocations, and fractures are all well-known words in the language of sports. Pain is another. In extreme cases, sports injuries can take the form of costly and painful spinal cord and head injuries, which cause severe suffering and disability.

Spinal stenosis refers to a narrowing of the canal surrounding the spinal cord. The condition occurs naturally with aging. Spinal stenosis causes weakness in the legs and leg pain usually felt while the person is standing up and often relieved by sitting down.

Surgical pain may require regional or general anesthesia during the procedure and medications to control discomfort following the operation. Control of pain associated with surgery includes presurgical preparation and careful monitoring of the patient during and after the procedure.

Temporomandibular disorders are conditions in which the temporomandibular joint (the jaw joint) is damaged and the muscles used for chewing and talking become stressed, causing pain. The condition may result from a number of factors, such as an injury to the jaw or joint misalignment, and may give rise to a variety of symptoms, most commonly pain in the jaw, face, and/or neck muscles. Physicians reach a diagnosis by listening to the patient's description of the symptoms and by performing a simple examination of the facial muscles and the temporomandibular joint.

Trauma can occur after injuries in the home, at the workplace, during sports activities, or on the road. Any of these injuries can result in severe disability and pain. Some patients who have had an injury to the spinal cord experience intense pain ranging from tingling to burning and, commonly, both. Such patients are sensitive to hot and cold temperatures and to touch. For these individuals, a touch can be perceived as intense burning, indicating abnormal signals relayed to and from the brain. This condition is called central pain syndrome or, when the damage is in the thalamus (the brain's center for processing bodily sensations), thalamic pain syndrome. It affects as many as 100,000 Americans with multiple sclerosis, Parkinson's disease, amputated limbs, spinal cord injuries, and stroke. Their pain is severe and is extremely difficult to treat effectively. A variety of medications, including analgesics, antidepressants, and anticonvulsants, as well as electrical stimulation, are available to central pain patients.
Vascular disease or injury, such as vasculitis or inflammation of blood vessels, coronary artery disease, and circulatory problems, all have the potential to cause pain. Vascular pain affects millions of Americans and occurs when communication between blood vessels and nerves is interrupted. Ruptures, spasms, constriction, or obstruction of blood vessels, as well as a condition called ischemia in which blood supply to organs, tissues, or limbs is cut off, can also result in pain.

There is no way to tell how much pain a person has. No test can measure pain intensity, no imaging device can show pain, and no instrument can locate pain precisely. Sometimes, as in the case of headaches, physicians find that the best aid to diagnosis is the patient's own description of the type, duration, and location of pain. Defining pain as sharp or dull, constant or intermittent, burning or aching may give the best clues to the cause of pain. These descriptions are part of the pain history taken by the physician during the preliminary examination of a patient with pain.

Physicians, however, do have a number of technologies they use to find the cause of pain. Primarily these include:

• Electrodiagnostic procedures, which include electromyography (EMG), nerve conduction studies, and evoked potential (EP) studies. Information from an EMG can help physicians tell precisely which muscles or nerves are affected by weakness or pain: thin needles are inserted in muscles, and a physician can see or listen to electrical signals displayed on an EMG machine. With nerve conduction studies, the doctor uses two sets of electrodes (similar to those used during an electrocardiogram) placed on the skin over the muscles. The first set gives the patient a mild shock that stimulates the nerve that runs to that muscle. The second set of electrodes is used to record the nerve's electrical signals, and from this information the doctor can determine if there is nerve damage. EP tests also involve two sets of electrodes: one set for stimulating a nerve (these electrodes are attached to a limb) and another set on the scalp for recording the speed of nerve signal transmission to the brain.

• Imaging, especially magnetic resonance imaging or MRI, which provides physicians with pictures of the body's structures and tissues. MRI uses magnetic fields and radio waves to differentiate between healthy and diseased tissue.

• A neurological examination, in which the physician tests movement, reflexes, sensation, balance, and coordination.

• X-rays, which produce pictures of the body's structures, such as bones and joints.

The most common treatments for pain, listed alphabetically, follow.

Acetaminophen is the basic ingredient found in Tylenol® and its many generic equivalents. It is sold over the counter, in a prescription-strength preparation, and in combination with codeine (also by prescription).

Acupuncture dates back 2,500 years and involves the application of needles to precise points on the body. It is part of a general category of healing called traditional Chinese or Oriental medicine. Acupuncture remains controversial but is quite popular and may one day prove useful for a variety of conditions as it continues to be explored by practitioners, patients, and investigators.

Analgesic refers to the class of drugs that includes most painkillers, such as aspirin, acetaminophen, and ibuprofen. The word analgesic is derived from ancient Greek and means to reduce or stop pain. Nonprescription or over-the-counter pain relievers are generally used for mild to moderate pain.
Prescription pain relievers, sold through a pharmacy under the direction of a physician, are used for moderate to severe pain.

Anticonvulsants are used to treat seizure disorders but are sometimes also prescribed for the treatment of pain. Carbamazepine in particular is used to treat a number of painful conditions, including trigeminal neuralgia. Another antiepileptic drug, gabapentin, is being studied for its pain-relieving properties, especially as a treatment for neuropathic pain.

Antidepressants are sometimes used to treat pain and, along with neuroleptics and lithium, belong to a category of drugs called psychotropic drugs. In addition, anti-anxiety drugs called benzodiazepines act as muscle relaxants and are sometimes used as pain relievers. Physicians usually try to treat the condition with analgesics before prescribing these drugs.

Antimigraine drugs include the triptans: sumatriptan (Imitrex®), naratriptan (Amerge®), and zolmitriptan (Zomig®). They are used specifically for migraine headaches and can have serious side effects in some people; therefore, as with all prescription medicines, they should be used only under a doctor's care.

Aspirin may be the most widely used pain-relief agent and has been sold over the counter since 1905 as a treatment for fever, headache, and muscle soreness.

Biofeedback is used for the treatment of many common pain problems, most notably headache and back pain. Using a special electronic machine, the patient is trained to become aware of, to follow, and to gain control over certain bodily functions, including muscle tension, heart rate, and skin temperature. The individual can then learn to effect a change in his or her responses to pain, for example, by using relaxation techniques. Biofeedback is often used in combination with other treatment methods, generally without side effects. Similarly, the use of relaxation techniques in the treatment of pain can increase the patient's feeling of well-being.

Capsaicin is a chemical found in chili peppers that is also a primary ingredient in pain-relieving creams (see Chili Peppers, Capsaicin, and Pain in the Appendix).

Chemonucleolysis is a treatment in which an enzyme, chymopapain, is injected directly into a herniated lumbar disc (see Spine Basics in the Appendix) in an effort to dissolve material around the disc, thus reducing pressure and pain. The procedure's use is limited, in part because some patients may have a life-threatening allergic reaction to chymopapain.

Chiropractic care may ease back pain, neck pain, headaches, and musculoskeletal conditions. It involves "hands-on" therapy designed to adjust the relationship between the body's structure (mainly the spine) and its functioning. Chiropractic spinal manipulation includes the adjustment and manipulation of the joints and adjacent tissues. Such care may also involve therapeutic and rehabilitative exercises.

Cognitive-behavioral therapy involves a wide variety of coping skills and relaxation methods to help prepare for and cope with pain. It is used for postoperative pain, cancer pain, and the pain of childbirth.

Counseling can give a patient suffering from pain much needed support, whether it is derived from family, group, or individual counseling. Support groups can provide an important adjunct to drug or surgical treatment. Psychological treatment can also help patients learn about the physiological changes produced by pain.

COX-2 inhibitors may be useful for individuals with arthritis.
For many years, scientists have wanted to develop a drug that works as well as morphine but without its adverse side effects. Nonsteroidal anti-inflammatory drugs (NSAIDs) work by blocking two enzymes, cyclooxygenase-1 and cyclooxygenase-2, which promote the production of hormones called prostaglandins that cause inflammation, fever, and pain. The newer COX-2 inhibitors primarily block cyclooxygenase-2 and are less likely to produce the gastrointestinal side effects sometimes caused by NSAIDs. In 1999, the Food and Drug Administration approved a COX-2 inhibitor, celecoxib, for use in chronic pain. The long-term effects of all COX-2 inhibitors are still being evaluated, especially in light of new information suggesting that these drugs may increase heart attack and stroke risk. Patients taking any of the COX-2 inhibitors should review their drug treatment with their doctors.

Electrical stimulation, including transcutaneous electrical nerve stimulation (TENS), implanted electric nerve stimulation, and deep brain or spinal cord stimulation, is the modern-day extension of age-old practices in which the nerves of muscles are subjected to a variety of stimuli, including heat or massage. Except for TENS, electrical stimulation involves a major surgical procedure; it is not for everyone, nor is it 100 percent effective. The following techniques each require specialized equipment and personnel trained in the specific procedure being used:
• TENS uses tiny electrical pulses, delivered through the skin to nerve fibers, to cause changes in muscles, such as numbness or contractions. This, in turn, produces temporary pain relief. There is also evidence that TENS can activate subsets of peripheral nerve fibers that block pain transmission at the spinal cord level, in much the same way that shaking your hand can reduce pain.
• Peripheral nerve stimulation uses electrodes placed surgically on a carefully selected area of the body. The patient can then deliver an electrical current as needed to the affected area, using an antenna and transmitter.
• Spinal cord stimulation uses electrodes surgically inserted within the epidural space of the spinal cord. The patient can deliver a pulse of electricity to the spinal cord using a small box-like receiver and an antenna taped to the skin.
• Deep brain or intracerebral stimulation is considered an extreme treatment and involves surgical stimulation of the brain, usually the thalamus. It is used for a limited number of conditions, including severe pain, central pain syndrome, cancer pain, phantom limb pain, and other neuropathic pains.

Exercise has come to be a prescribed part of some doctors' treatment regimens for patients with pain. Because there is a known link between many types of chronic pain and tense, weak muscles, exercise (even light to moderate exercise such as walking or swimming) can contribute to an overall sense of well-being by improving blood and oxygen flow to muscles. Just as we know that stress contributes to pain, we also know that exercise, sleep, and relaxation can help reduce stress, thereby alleviating pain. Exercise has been proven to help many people with low back pain. It is essential, however, that patients carefully follow the routine laid out by their physicians.

Hypnosis, first approved for medical use by the American Medical Association in 1958, continues to grow in popularity, especially as an adjunct to pain medication. In general, hypnosis is used to control physical function or response, that is, the amount of pain an individual can withstand.
How hypnosis works is not fully understood. Some believe that hypnosis delivers the patient into a trance-like state, while others feel that the individual is simply able to concentrate and relax, or is more responsive to suggestion. Hypnosis may bring relief of pain by acting on chemicals in the nervous system and slowing impulses. Determining whether and how hypnosis works will require greater insight, and more research, into the mechanisms underlying human consciousness.

Ibuprofen is a member of the aspirin family of analgesics, the so-called nonsteroidal anti-inflammatory drugs (see below). It is sold over the counter and also comes in prescription-strength preparations.

Low-power lasers have been used occasionally by some physical therapists as a treatment for pain. Still, like many other treatments, this method is not without controversy.

Magnets are increasingly popular with athletes who swear by their effectiveness in controlling sports-related pain and other painful conditions. Usually worn as a collar or wristwatch, magnets have been used as a treatment since the ancient Egyptians and Greeks. While their use is often dismissed as quackery and pseudoscience by skeptics, proponents offer the theory that magnets may effect changes in cells or body chemistry, thus producing pain relief.

Nerve blocks employ drugs, chemical agents, or surgical techniques to interrupt the relay of pain messages between specific areas of the body and the brain. There are many different names for the procedure, depending on the technique or agent used. Types of surgical nerve blocks include neurectomy; spinal dorsal, cranial, and trigeminal rhizotomy; and sympathectomy, also called sympathetic blockade (see Nerve Blocks in the Appendix).

Nonsteroidal anti-inflammatory drugs (NSAIDs), including aspirin and ibuprofen, are widely prescribed and sometimes called non-narcotic or non-opioid analgesics. They work by reducing inflammatory responses in tissues. Many of these drugs irritate the stomach and, for that reason, are usually taken with food. Although acetaminophen may have some anti-inflammatory effects, it is generally distinguished from the traditional NSAIDs.

Opioids are derived from the poppy plant and are among the oldest drugs known to humankind. They include codeine and perhaps the most well-known narcotic of all, morphine. Morphine can be administered in a variety of forms, including a pump for patient self-administration. Opioids have a narcotic effect; that is, they induce sedation as well as pain relief, and some patients may become physically dependent on them. For these reasons, patients given opioids should be monitored carefully; in some cases, stimulants may be prescribed to counteract the sedative side effects. In addition to drowsiness, other common side effects include constipation, nausea, and vomiting.

Physical therapy and rehabilitation date back to the ancient practice of using physical techniques and methods, such as heat, cold, exercise, massage, and manipulation, to treat certain conditions. These may be applied to increase function, control pain, and speed the patient toward full recovery.

Placebos offer some individuals pain relief, although whether and how they work is mysterious and somewhat controversial. Placebos are inactive substances, such as sugar pills, or harmless procedures, such as saline injections or sham surgeries, generally used in clinical studies as control factors to determine the efficacy of active treatments.
Although placebos have no direct effect on the underlying causes of pain, evidence from clinical studies suggests that many pain conditions, such as migraine headache, back pain, post-surgical pain, rheumatoid arthritis, angina, and depression, sometimes respond well to them. This positive response is known as the placebo effect, defined as the observable or measurable change in patients after administration of a placebo. Some experts believe the effect is psychological and that placebos work because the patients believe or expect them to work. Others say placebos relieve pain by stimulating the brain's own analgesics and setting the body's self-healing forces in motion. A third theory suggests that the act of taking placebos relieves stress and anxiety, which are known to aggravate some painful conditions, and thus causes the patients to feel better. Still, placebos are considered controversial because, by definition, they are inactive and have no actual curative value.

R.I.C.E. (Rest, Ice, Compression, and Elevation) is a four-part regimen prescribed by many orthopedists, coaches, trainers, nurses, and other professionals for temporary muscle or joint conditions, such as sprains or strains. While many common orthopedic problems can be controlled with these four simple steps, especially when combined with over-the-counter pain relievers, more severe conditions may require surgery or physical therapy, including exercise, joint movement or manipulation, and stimulation of muscles.

Although not always an option, surgery may be required to relieve pain, especially pain caused by back problems or serious musculoskeletal injuries. Surgery may take the form of a nerve block (see Nerve Blocks in the Appendix), or it may involve an operation to relieve pain from a ruptured disc. Surgical procedures for back problems include discectomy or, when microsurgical techniques are used, microdiscectomy, in which the entire disc is removed; laminectomy, a procedure in which a surgeon removes only a disc fragment, gaining access by entering through the arched portion of a vertebra; and spinal fusion, a procedure in which the entire disc is removed and replaced with a bone graft. In a spinal fusion, the two vertebrae are then fused together. Although the operation can cause the spine to stiffen, resulting in lost flexibility, the procedure serves one critical purpose: protecting the spinal cord.

Other procedures for pain include rhizotomy, in which a nerve close to the spinal cord is cut, and cordotomy, in which bundles of nerves within the spinal cord are severed. Cordotomy is generally used only for the pain of terminal cancer that does not respond to other therapies. Another operation for pain is the dorsal root entry zone operation, or DREZ, in which spinal neurons corresponding to the patient's pain are surgically destroyed. Because surgery can result in scar tissue formation that may cause additional problems, patients are well advised to seek a second opinion before proceeding. Occasionally, surgery is carried out with electrodes that selectively damage neurons in a targeted area of the brain. These procedures rarely result in long-term pain relief. Still, both physician and patient may decide that the chance of relief is sufficient to justify the expense and risk. In some cases, the results of an operation are remarkable.
For example, many individuals suffering from trigeminal neuralgia who are not responsive to drug treatment have had great success with a microvascular decompression procedure, in which tiny blood vessels are surgically separated from surrounding nerves.

By reading this website, you acknowledge that you are responsible for your own health decisions. The information throughout this medical website is not intended to be taken as medical advice; it is provided as general information regarding pain management symptoms and services. If you are interested in finding out more, and to avoid worrisome self-diagnosis, please contact our pain management specialist for a personal consultation. No information on this site should be used to diagnose, treat, prevent, or cure any disease or condition.
Date: Sat, 06 Nov 1999 10:34:25 -0600
Subject: the attack of the little ones

I once wrote here about the "Damocles" group. They are like the more aggressive centaurs of high eccentricity (Pholus, Nessus, Asbolus...), the main difference being that they come much closer to the earth, reaching the distances of Mars or the asteroid belt. The more aggressive centaurs are mild in comparison. Before Damocles was discovered in 1991, the only "outer planet crosser" known --apart from comets, of which I want to speak-- was 944 Hidalgo, discovered in 1920, which traveled between the Mars/asteroid-belt realm and Jupiter, crossing it briefly and coming back with a period of 13.8 years, very close to that of Jupiter. Then Chiron was found in 1977, as an augur of what was to come. Twelve years after its discovery (1989) it was found that Chiron had a giant comet's tail, or coma, making it the largest comet known in the solar system... except that it was "bounded" by Saturn and Uranus and never left their region. But then came Damocles in 1991 (lost soon after), one year before the discovery of the Kuiper belt and of Pholus. If the centaurs are "wild", think about Damocles: it "shoots" itself from Uranus to Mars, crossing everything in its path, including the insurmountable Jupiter realm... yes, but... this is exactly what comets do. What is the difference between objects like Damocles and a comet? Before trying to answer, let's see the list of Damocles-like objects discovered after this (perihelion and aphelion distances in AU):

Damocles   1.581   22.060   Mars--Uranus

Of all these only Damocles has been observed for a whole synodic cycle (2 oppositions), so all these orbits are more or less preliminary, and we must not forget that Damocles was lost in 1992, soon after its discovery, so the present ephemeris may show significant errors for positions before the 1980's. Consideration of this "family" is interesting to me for 2 reasons. First, it helps me understand the nature of the centaurs, as these are also crossers, more aggressive crossers indeed than centaurs, but they are linked to the inner solar system, like comets, while centaurs always remain within the sphere of the giant outer planets. The second reason why they are interesting to me is that they make me wonder about comets, since the only difference I perceive between comets and these bodies is that comets come within the earth region and they don't. Of course, there are the obvious reasons of not having a "tail", and of comets being always smaller in size. Now: we never interpret comets in classical astrology, because we don't know how to. Many comets have parabolic orbits that come from the absolute unknown of the Oort region, and their apparition is quite unpredictable, therefore falling outside the astrological corpus, which is based on the concept of cyclic "return". But there are short-period comets whose returns are predictable and well-known, like comet Halley, which never leaves the region of the main planets, "our region". The period of comet Halley, for example, is about 76 years, the same as Asbolus. When Halley reaches its aphelion around 19 Pisces, it is way "behind" Neptune, in the Neptune/Pluto region. But here is the difference: when Halley comes, it reaches the earth-realm, almost "touching" us with its coma. Asbolus will never do that, but it is easy to perceive how they are related: both are "cometary".
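Where a periodic comet "lives" at aphelion follows from Kepler's third law alone: a period P in years gives a semi-major axis a = P^(2/3) in AU, and an aphelion Q = 2a - q. A minimal sketch in Python checking the Halley figures quoted above; Halley's perihelion of roughly 0.59 AU is a standard published value I am supplying, not something stated in the post.

```python
# Sketch: Kepler's third law (P in years, a in AU) gives a = P**(2/3).
# Used here to check that comet Halley's aphelion really falls in the
# Neptune/Pluto region, as the post claims.
P = 76.0              # Halley's period in years, as quoted above
q = 0.59              # Halley's perihelion in AU (standard value, my addition)
a = P ** (2.0 / 3.0)  # semi-major axis, about 17.9 AU
Q = 2 * a - q         # aphelion = 2a - q, about 35.3 AU
print(round(a, 1), round(Q, 1))  # 17.9 35.3 -> between Neptune (30) and Pluto (39)
```

The same two lines of algebra also explain the "more than 600 AU" aphelion quoted in a later post for a 5800-year object: 5800^(2/3) is about 322 AU, so twice that, minus a small perihelion, lands above 640 AU.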
In other words, what we are learning about centaurs is teaching us about the astrological meaning of comets, and the age-old powerful symbolism and folklore about comets helps us understand the nature of the centaurs. The problem, astrologically speaking, is one of methodology, of learning how to deal with all these bodies without falling into triviality and meaninglessness. This is a real challenge, but it is not insurmountable. It simply means that the old paradigms of astrology are beginning to be substituted by new ones, as is happening in everything else in the world. No dogma will survive "the attack of the little ones", "the orbit crossers", in every facet of life, of human knowledge and of experience. They are destroying all the barriers. If we resist them, they will destroy us, too.

Date: Sun, 21 Nov 1999 07:59:29 -0600

Marsden stated that comets 39P/Oterma and 29P/Schwassmann-Wachmann could be classified as centaurs, so I looked at their orbits to see how much they could be centaurs. Comet Schwass... whatever (period = 14.7 yrs) has a semi-major axis of 5.999, and its perihelion distance is 5.7. This means that its eccentricity is small (e=0.045), and that it is always very close to Jupiter's orbit. Personally, I would discard it as a potential centaur. But Oterma has some interesting potential when compared to SG35 and to LE31:

Semi-major axis (mean solar distance): SG35 = 8.22
Period of revolution in years: SG35 = 24.5
Perihelion--aphelion range (AU): SG35 = 5-10

Apparently, Oterma approaches Jupiter and Saturn barely "touching" them, while SG35 "touches" Saturn all the way and barely approaches Jupiter. LE31 crosses both. The 3 orbits are very similar. I wonder how others feel or think about this...

Date: Thu, 13 Jan 2000 07:33:46 -0600

>- 944 Hidalgo and 1181 Lilith

Lilith is a regular main-belt asteroid, but Hidalgo is a crosser. Discovered in 1920 and considered by many to be a comet (an identity situation present in many crossers, including centaurs), it has a period of 13.8 years and crosses Jupiter and Saturn. At aphelion its distance is the same as Saturn's, whereas at perihelion it comes to the region between Mars and the asteroid belt. We could consider it a Mars-Saturn linker.

>- Heracles and Damocles

5143 Heracles, discovered in 1991, is an Apollo-type object. It has a period of 2.5 years and crosses Venus, Earth, Mars, and the asteroid belt. When farthest, it is a little beyond the main asteroid belt (like, for example, Hygeia), and when closest to the Sun it comes near Mercury. It is a Mercury-asteroids linker (from now on, by "asteroids" I will mean the main-belt asteroids).

5335 Damocles, also found in 1991, is placed by the Minor Planet Center in the "other unusual objects" (OUO) category, but it clearly defines a type, the "Damocles group", which was first brought to my attention by Russian astrologer Roman Brol. When closest to the Sun, it reaches the mean Mars distance, and when farthest, it is a little beyond Uranus. It is a Mars-Uranus linker. Its period is 40.6 years.

1999 LE31 moves retrograde, in a direction contrary to the rest, with a period of 23 years. The only reason it is not classified a centaur like SG35 is because it crosses Jupiter, while SG35 does not. They both cross Saturn and are Jupiter-Saturn linkers. More information on it can be found in my site in the "methodology" and "naming" collected posts. As mentioned in one of those posts from a note by Dr. Marsden, comet P/Oterma is in this same category and I will soon add it to the list.
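The "linker"/"cutter" labels used in these posts follow mechanically from two numbers, the perihelion q and aphelion Q. A small sketch of that bookkeeping; the planetary distances are standard mean values in AU, and the nearest-region rule is my own illustrative choice, not a rule stated by the author.

```python
# Sketch: label an orbit by the planetary regions of its perihelion and
# aphelion, reproducing tags like "Mars-Uranus linker".
REGIONS = [  # mean solar distances in AU (standard values)
    ("Mercury", 0.39), ("Venus", 0.72), ("Earth", 1.00), ("Mars", 1.52),
    ("asteroid belt", 2.80), ("Jupiter", 5.20), ("Saturn", 9.54),
    ("Uranus", 19.2), ("Neptune", 30.1), ("Pluto", 39.5),
]

def nearest_region(r_au):
    """Name of the planetary region closest to heliocentric distance r (AU)."""
    return min(REGIONS, key=lambda reg: abs(reg[1] - r_au))[0]

def linker_label(q_au, Q_au):
    """Label an orbit by its perihelion (q) and aphelion (Q) regions."""
    return f"{nearest_region(q_au)}-{nearest_region(Q_au)} linker"

# Damocles: q = 1.581 AU, Q = 22.060 AU
print(linker_label(1.581, 22.060))  # -> "Mars-Uranus linker"
```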
To my knowledge this is the object with the longest period that exists (5800 years). It was seen during 2 oppositions in 1996-97, when it was approaching its perihelion, at the distance between Mars and the asteroids. The computed orbit gives an aphelion distance of more than 600 AU (compare with the farthest distance reached by Pluto, 50 AU!). It is quite extraordinary, although with such an orbit 2 oppositions are not enough for an accurate orbit determination. It is classified as OUO by the Minor Planet Center.

1999 XS35 is classified as a potentially hazardous asteroid (PHA) and has been very closely watched since its discovery on December 2, 1999. Its minimum possible distance to the Earth is 0.000 (zero!), but this time it missed us: it was closest to the Earth around November 6, at 0.045 AU. With the trajectory calculated so far, it crosses the Earth and goes back to beyond Neptune, in the Neptune-Pluto realm. Its period is 77 years.

>- 1999LD31. period 121 yrs. asteroids-beyond Pluto linker

This is the Damocles group. I will leave them for another post. ...and here is the complete list of TL66 companions (all are from 1999):

(TL66. period 789 yrs.)
- CY118. period 1086 yrs.

Date: Sat, 15 Jan 2000 19:21:46 -0600

1997 MD10. - Discovered June 29, 1997, it was observed for 137 days until November 13th, and its orbit has an uncertainty of "1", better than Hylonome and Chariklo. When closest to the Sun, it is at the distance of Mars, but when farthest it is beyond even the aphelion of Pluto, so we could call it a Mars-super-Pluto crosser. It also has a very high inclination (60 degrees, like Damocles). Its period is 140 years. As with most of its class, it is probably very small (2-4 km in diameter), so these objects are really like "comets without a tail".

1998 QJ1. - Discovered August 17, 1998, it was observed for 2 months until October 18 and has an orbital uncertainty of "3" --better than SG35 and QM107--. When closest to the Sun, this object reaches the distance of the main-belt asteroids, and when farthest, it crosses Uranus and stays close behind, so we can call it an Asteroids-Uranus linker (or rather, "breaker" or "cutter"). Its inclination is 23 degrees, and its period 39 years. It is probably as small as MD10, the average size of a comet.

Do we discard them because they are so small? Would they be powerful because they come much closer to us than the centaurs? Is their power measured by the "sharpness" of their acute and abysmal orbit or by their size? Why are some centaurs so powerful when compared with other asteroids? Is it because of their slow "long-wave" motion?

1998 WU24. - Discovered November 25, 1998, it was observed for 3 months until February 20th, 1999, and has an uncertainty of "3" like QJ1 and TL66. When closest to the Sun it comes a little closer than the average distance of Mars, and when farthest it comes very close to Neptune, almost crossing it, so it is clearly a Mars-Neptune "cutter" with an orbital inclination of 43 degrees and a period of 59 years, in the same size range as the others (3-6 km diameter). These objects all have cometary eccentricities, larger than those of the centaurs, so they are probably different astrologically. They also come closer, crossing the "Jupiter border" and entering the neighborhood of Mars. Perhaps Mars is a clue, since the pointing arrow in its glyph describes their orbits well... they are more aggressive and invasive than centaurs. These distinctions are not trivial.
Considering them and how they are similar to and different from centaurs helps us to better define the limits of centaurean symbolical attributions, the same as when studying the differences between centaurs and main-belt asteroids or between centaurs and the earth-crossing near asteroids. It is my opinion that if these distinctions are not made, the astrology of asteroids falls into triviality and mere name-playing.

1999 LD31. - This one was found June 8th, 1999, and was recently observed again after 210 days, on January 4th, so its orbit is now quite accurate, with an uncertainty of "2", like Chariklo and Hylonome. When closest to the Sun, it reaches the main asteroid belt, and when farthest, it goes almost to the farthest possible distance of Pluto at aphelion, so we can call it an Asteroids-super-Pluto cutter or crosser. It was found simultaneously with LE31/Melanippe?, and both have the peculiarity of moving in a direction contrary to all the other objects and planets (i.e., they always move retrograde heliocentrically). Its inclination is 160 degrees (20 degrees in the opposite direction) and it has a period of 121 years.

And now the last of the group so far:

1999 RG33. - Discovered September 4, 1999, its orbit has just been recalculated after older photographs were found covering a period of almost 3 years, giving it an uncertainty of "1", like Nessus, Asbolus, and MD10. When closest to the Sun it passes the asteroid belt in the direction of Mars, and when farthest, it comes close to Uranus without crossing it, so we can call it a Mars-Uranus linker or cutter. It has an orbital inclination of 35 degrees and a period of 29 years. It also has the mildest eccentricity of the group, but even then, it is larger than the eccentricity of any of the centaurs.

Months ago, on June 6th, Zane forwarded a message from Arno Schlick suggesting the name "Eris" for 1992QB1, and more recently, on November 20th, Grug daViking suggested "Eris" for TL66. After these suggestions, I checked my (small and very humble --I'm no expert in mythology) dictionary and found "Eris" to be a synonym of "Discordia", the twin sister of Ares and daughter of Zeus and Hera. My dictionary says that "Eris" is variously described as sister, mother, wife, and daughter of Ares, therefore sister of Fear, Panic, Terror, and Trembling, the four sons of Ares. Based on this, I commented that "Eris" could not be used as a name for a trans-Neptunian planet, since the primordial deities have been given that prerogative by astronomers, but "Eris" seems a perfect name for one of these bodies to me...

Date: Sat, 21 Apr 2001 23:54:35 -0600

I found the coordinates of Colleville Sur Mer: 49n21 / 00w51. This apparently is a few hundred meters from Omaha beach and is where the cemetery of American soldiers is found. Using these coordinates, the exact time of sunrise is 6:05. Since the operation was scheduled at 6:30, this means 25 minutes after sunrise. According to the International Atlas, Bayeux was 2h ahead of Greenwich, the same as England. This was the biggest full-force invasion of all time, a very literal description of the vertical orbit-crossing of very eccentric orbits, especially the damocloids. Damocloids are particularly war-like in their motion, linking Mars with Uranus, Neptune, or Pluto. Pallas and Asbolus on the Ascendant are self-descriptive, e.g., the military strategy of Pallas and the mist of Asbolus.
Recall some of my keywords for Asbolus: <<mist, fog, hiding, secrets, mystery, conspiracy, torment, punishment, anguish, bewilderment, bleeding hearts, oven, igneous, ash and smoke...>> I think this describes well the scenario at Omaha beach that morning. But let's check the Ascendant at the scheduled time:

Pallas = 20Ge13

There is a 1-degree configuration that is also very descriptive:

Mars = 8Le29

Date: Sun, 22 Apr 2001 11:32:21 -0600

<< ...This was the biggest full-force invasion of all time, a very literal description of the vertical orbit-crossing of very eccentric orbits, especially the damocloids. Damocloids are particularly war-like in their motion, linking Mars with Uranus, Neptune, or Pluto.>>

We saw RG33 (Mars-Uranus linker, inclination 35 degrees) conjunct Pallas and the Ascendant. I personally feel that although Asbolus describes well the "atmosphere" of the invasion (all the nightmare), the extreme concentration of violence and the direct full-force invasion is more related to the damocloids. So let's see the other foci of the chart:

Damocles = 6Pi42
DG8 = 14Sa05
QJ1 = 18Le19

High-inclination objects are "divers"; they "criss-cross" the sky like fighting planes and artillery shells. The damocloids hold the record for high inclinations, and they are the closest you can get to periodic comets. They are "exaggerated" centaurs, clearly more invasive, poignant, and martial than all of them in orbital terms. They extend the centaurean paradigm and "invade" the inner Solar System. This crossing of the Jupiter gravitational barrier makes them catalysts and transformers in a very literal way, because they do something none of the centaurs can do: they transform themselves from slow-moving objects (Jupiter and Saturn onwards) to fast-moving objects (main asteroid belt and Mars), as if completely changing their nature. They tend to move retrograde and reach extreme latitudes. They are more "comets" than anything else in the asteroid world, the ultimate guerrilla fighters and revolutionaries. They are agents of reform with a very wide "reservoir" of possibilities and transformations. The centaurs are transitional in this respect. The above is speculative, but I hope there will be a time when the different categories --transneptunians, centaurs, damocloids, apollos...-- are all given their proper place in the structure and their characteristics as a group can be visualized in terms of the whole. These distinctions between the levels or domains in which the different groups operate are the basis of work with asteroids.

Date: Sun, 22 Apr 2001 17:01:40 -0600

The asteroid belt between Mars and Jupiter, considered collectively, establishes a clear and obvious separation between the world of the gas giants, the transcendences of existence (slow-moving), where by definition all the centaurs have their home, and the much faster world of the inner or terrestrial planets. The physical and dynamical differences between the two worlds are fundamental. We have a set of small planets whose motion is almost totally controlled by the Sun, i.e., their mutual perturbations or gravitational interactions are small; then the asteroid belt, like a colony of the world of the fixed stars inside the Solar System; and then, beyond the belt, we are in a world of giants that are able to gravitationally push and pull the Sun enough to make it swing around the barycenter of the Solar System.
If we are to make the analogy between a human being and the solar system, we could imagine the asteroid belt as a sort of navel dividing the higher from the lower part of the body, or the higher brain and consciousness functions (the transcendences) from the bodily or vital functions (the terrestrial planets). Of course, we can reverse this and say the slow planets represent our "unconscious" lower functions and the faster planets (Sun, Mercury, etc.) our "conscious" part, which is the more superficial or obvious way one would think. But this doesn't matter. My point is simply that structurally the Solar System is clearly divided between inner and outer, and that this notion is important to understand the nature of the damocloids. The damocloids are the only dynamical group among minor planets that traverses or "joins" the two worlds. At perihelion they are inner planets, at aphelion they are outer planets. They are actually dead or inactive comets, very small in size (5 or 10 km, like comets), with a tendency to move in retrograde and highly inclined orbits, also like comets. When I wrote "The Centaurs and Passion", I was unaware of the existence of the damocloids or damoclians. Today, I consider my thoughts there to be particularly descriptive of damocloids. I believe that they apply to centaurs, of course, but the line of reasoning contains the essence of how to understand the damocloids and comets. As I mentioned, I feel that centaurs are transition objects that have cometary characteristics, but shown in a milder, less extreme way.

Date: Mon, 23 Apr 2001 00:30:46 -0600

When examining charts like --for example-- that of D-Day (6 Jun 1944), and if one considers the many asteroid contacts that can be descriptive of the event in the chart, the need arises to distinguish between levels or domains, to articulate the structure of those contacts, i.e., to analyze in terms of layers of meaning. All contacts, assuming that they are tight enough (orbs around 1 degree, critical aspects to the chart's main foci: angles, Sun, Moon, Node, etc.), are significant, but if one does not have a clear picture of the whole, it is easy to get lost in triviality and the flat interpretations that are so common in astrology when focal determination is not taken into account. Let's take, for example, 2 sets of contacts at the landing in Omaha Beach:

Damocles = 6Pi42

The Moon is here being crossed by the 2 damoclians. I feel that this represents the waves of young soldiers clashing with the German defenses at the beach, in accordance with the almost perpendicular orbital crossing of the 2 damoclian pseudo-comets. This could mean the crucifixion of many of them, dying like insects... There is something swift and quick that I associate not with orbital velocity in this case but with 1-) the diving represented by the very high latitudes, and 2-) their extreme eccentricity, which, considered together, give me the feeling of something that gets loose but at the same time is very concentrated and intense. The intense concentration comes from the small size of the damoclians united with their very acute, truly "aerodynamic" motion allowed by very high latitudes, their extreme eccentricity, and their Martial nature. On the other hand, we have in the same chart something like:

Mars = 8Le29

Here we also have something that can be associated with the invasion, but here we are at another level, with BU48 and TD10.
The Mars opposition is a clear indication of the landing at Normandy, the full-blown confrontation, but this is being described in Martial terms (the military operation) instead of Lunar terms (the people, the sacrifice of thousands of human lives). Another difference, which is the point I want to make about the damoclians, is that BU48 and TD10 are also "cometary" but on a different scale, a "macro-scale". BU48 is very Plutonian, almost trans-Neptunian, like an extension of Pluto that reaches into the world of the centaurs (hence the image of Alastor, the black stallion of Pluto's chariot), while TD10 is a cosmic incrustation into the centaur world (hence the image of Ixion, the father of the centaurs, crucified on a rotating wheel. RZ215 is a scattered-disk object.) The inner solar system, from the asteroid belt inwards (the terrestrial planets, the world of Apollos, etc.), is the "micro" scale, while the centaurean world is "macro" (and the transneptunian is a sort of "cosmic" or *historical* scale). Then, the "Jupiter group" (SG35, LE31, GM137) is transitional, and the asteroid belt is the "meso" scale, like a sort of very open market of possibilities and multitudinous variations of centering, stabilizing, adaptive, assimilative activity, like a ring that holds together and at the same time separates the 2 worlds, like a barrier, a membrane, a protective ring which in society is represented by social institutions and the divisions of labor. The damocloids, then, must be seen in the light of these structural "spaces" of the solar system. I do not want to speculate more on their nature, and I do not expect you to follow the interpretations I have given of the meaning of the different levels or structural spaces. You can develop your own meanings. You can consider this "picture" of the Solar System purely speculative, but the fact is that it is based on very real and obvious facts that must be taken into account when working with asteroids, and it represents something that is not yet formed, a picture that is only beginning to emerge from the slow but steady research into the astrological nature of the slow-moving, "macro-cosmic" asteroids.

Date: Mon, 23 Apr 2001 09:43:36 -0600

I have given the list of known damocloids several times in the past. These messages represent my effort to become familiar with them and to see them in a coherent and ordered --layered-- perspective of the whole, i.e., the place they hold in the solar system. Let's see first the complete list of damoclians or pseudo-damoclians known so far; they are ordered from the "most ancient" to the "youngest":

1996PW, period 3600 years

Of these, only HO121 and TT12 have badly-determined orbits. The orbit of TT12 is so badly determined that it is not a damoclian according to the MPC elements, while it is according to the Astorb elements. It may disappear from the list in the near future. Now let's see the main characteristics of damoclians as a group:

1- They are dynamically like comets:

a- Very high orbital inclination. Examples of maximum geocentric latitudes, in degrees: AB229=78, MD10=78, WU24=73, Damocles=79 (it actually can reach more than 85 degrees!), DG8=65, RG33=53, Hidalgo=62. The inclination acquires a dramatic dimension in damocloids; it becomes "very eloquent", essential to their nature.

b- Frequent retrograde orbits. The following in the list are retrograde: 2000DG8, 1999LE31 (Melanippe), 2000HE46, #20461=1999LD31.
Retrograde heliocentric motion, found only in comets before the damoclians were discovered, is a unique characteristic of theirs among the whole population of asteroids.

c- Very small size. Typical assumed sizes, based on a theoretical albedo, are 7, 3, 8, 6, 4, 5, 10 (Damocles), 2, 13, 18, 16, 12 (kilometers in diameter)... This is another clear characteristic of comets, and it dramatizes the difference in nature between damoclians and centaurs, which are typically 10 times larger.

d- They come from the outer part of the solar system, entering the inner part. With the exception of the "Jupiter group" (like Melanippe/LE31), almost all of them come from the realms of Uranus, Neptune, Pluto, and trans-Plutonian space. Check out their aphelion distances (for reference, Saturn = 9 AU).

But unlike many comets, besides the obvious lack of a coma:

2- They do not cross the Earth's path, but remain near the aphelion-perihelion range of Mars and the asteroid belt. I already commented that this characteristic ties their meaning to that of Mars, since Mars is their "lower" limit. The only exception is 1999XS35, which was identified at the MPC as a "potentially hazardous asteroid" and can now be found among the Apollo (Earth-crossing) asteroids; considering the orbit of XS35, this classification is somewhat meaningless (remember that for the MPC/IAU the "damocloids" dynamical group does not officially exist). Let's see the perihelion distance of the group. Consider these thoughts as successive approximations, where it is necessary for me to repeat the same thing many times.

Date: Wed, 25 Apr 2001 09:39:50 -0600

In the previous messages we saw that the damoclians are the asteroid group that most resembles comets. It is evident that an exploration of the meaning of damocloids can clarify the possible or probable meaning of comets, and vice versa, and that they are unique in several respects:

1- In the official full list of more than 120,000 asteroids (121,432), there are only 4 that are retrograde: 1999LD31, 1999LE31, 2000DG8, and 2000HE46. All 4 are damocloids.

2- In the list of the 20 (out of 121,000) asteroids with the highest orbital inclination, 7 are damocloids.

3- They are the only group that traverses from the outer solar system to the inner solar system.

I would like to explore further here characteristic "3", which I have suggested could be seen as objects that can completely transform their nature from "outer" planet to "inner" planet. To avoid misunderstandings, I reiterate that a-) this idea is based exclusively on dynamics, on their motion and frequency, their "gesture", their "music", and b-) "outer" and "inner" are technical terms that refer to the structure of the solar system, and have nothing to do with "outer world" or "inner world"; the only implications are "slow", long-wave, "very open", and "fast", short-wave. It is up to each astrologer's personal symbolic codes to interpret the meaning of this astronomical fact and distinction. These changes from "macro" (the Saturn-Pluto world of the centaurs) to "meso" and "micro" (the asteroid belt and Mars), characteristic of the damocloids and unique to them, are expressed in extreme differences in velocity at aphelion and at perihelion. On a mammoth, superlative scale, we have that unique object, 1996PW, which at perihelion at the asteroid belt moves 3,700 times faster than when at aphelion, far beyond virtually every other asteroid orbit (sometimes it is called an "oortoid" because of this).
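For a two-body orbit, how much faster an object moves at perihelion than at aphelion depends only on the eccentricity: conservation of angular momentum gives v_p/v_a = (1+e)/(1-e) for the linear speed, and the angular rate around the Sun scales as 1/r^2, i.e., as the square of that ratio. The posts never say which measure the quoted factors (8.8 for Asbolus, 3,700 for 1996 PW) refer to, so the sketch below illustrates the principle rather than reproducing those exact figures; the eccentricities are approximate published values I am supplying.

```python
# Sketch: perihelion-vs-aphelion speed change as a function of eccentricity.
def speed_ratio(e):
    """Perihelion/aphelion linear speed ratio, v_p/v_a = (1 + e)/(1 - e)."""
    return (1 + e) / (1 - e)

def angular_ratio(e):
    """Perihelion/aphelion angular-rate ratio; d(theta)/dt scales as 1/r**2."""
    return speed_ratio(e) ** 2

# Approximate published eccentricities (my addition, rounded).
for name, e in [("Asbolus (centaur)", 0.62), ("Damocles", 0.87),
                ("1996 PW ('oortoid')", 0.99)]:
    print(f"{name}: speed x{speed_ratio(e):.1f}, angular rate x{angular_ratio(e):.0f}")
```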
When working with centaurs, one becomes accustomed to these changes. But in the centaur world they are quite moderate when compared with the damocloids, which present these changes in a higher order of magnitude. The largest difference between aphelion and perihelion velocity among the centaurs (excluding the badly-determined orbits) is found in the motion of Asbolus, which is 8.8 times faster at perihelion than at aphelion, followed by Pholus (6.9) and Nessus (5.6). But check out the largest differences among the damocloids:

AB229 = 297.5

This seems to confirm the idea that their properties are different; seeing the numbers, one can "feel" that "something different is going on"... One analogy that always pops into my mind is that of homeopathy. Is any of you an expert in homeopathy? (I am not.) What is the difference between a "potency" of 10 and a "potency" of 100 and one of 1000? What happens when one "jumps" from 10 to 100? Homeopathy is also a good analogy in terms of the minuscule size of asteroids; they come in "homeopathic doses". And in the comparison of centaurs with damocloids, we have seen that this "factor of ten" is also evident in their size (damoclians are 10 times smaller). In other words, the nature of damoclians is to move from "high" to "low", in a vertical way. This verticality is seen in their orbits of extreme eccentricity (the perpendicularity, crossing angles close to 90 degrees) and in their very high inclinations, which also tend to approach 90 degrees. They evidently are "super-dynamical", almost "aerodynamical". Other, much faster asteroids, such as Icarus, Talos, or Phaethon, are also "aerodynamical", but in their case the symbolism is tied to the Sun, while the symbolism of damoclians is tied to Mars, besides the fact that the Apollos --or any other dynamical group-- never cross the navel of the solar system (the asteroid belt) and go to the other half of the body. I have always associated the very open and eccentric centaurean orbits with very large wings, but the wings metaphor doesn't come to my imagination when the eccentricity reaches these extreme levels. This is why I now think that *some* (not all) of the images in "The Centaurs and Passion" may apply more to the damoclian group and to comets. I have already suggested several images that come to me, such as the diving, the criss-crossing, the guerrilla fighting (see my brief comments on Omaha Beach), guided by the Martian overtones of the orbits (they even resemble the arrow in the symbol of Mars). I have also suggested that the damoclians may be true revolutionaries and agents of transformation... Another image that comes is that of digestion. The digestive process exemplifies the transformation of one thing into another, the "potentiation" mentioned regarding homeopathy, the "travel" of food from upper to lower as it is transformed into energy and substance, etc.

Date: Wed, 30 Oct 2002 11:24:18 -0600

Any orbit-crosser that goes up to Saturn or beyond at aphelion, and up to at least this side of Jupiter (or, more typically, the asteroid belt) at perihelion, is a damoclian to me. With the exception of LE31/Melanippe, they all cross "the navel" (the asteroid belt) and travel from the inner to the outer solar system. None of the other dynamical groups does that. I will soon add RP120 to Riyal in place of the Apollo 2001OG108, which was reclassified as a comet.

NOTE: some recent thoughts on the damoclian WU24 can be found in my examination of Gorecki.
Received Date: May 07, 2016; Accepted Date: May 28, 2016; Published Date: May 31, 2016

Citation: El-Araby DA, El-Didamony G, Megahed MTH (2016) New Approach to Use Phage Therapy against Aeromonas hydrophila Induced Motile Aeromonas Septicemia in Nile Tilapia. J Marine Sci Res Dev 6:194. doi:10.4172/2155-9910.1000194

Copyright: © 2016 El-Araby DA, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Infections with Aeromonas hydrophila are a growing problem in aquaculture. The use of antibiotics such as ciprofloxacin has contributed to the rapid and effective treatment of disease caused by this organism. However, the fast-paced increase of resistance to these antibiotics has posed problems, and there is now a need to look for alternative methods to control this bacterial pathogen. Phage therapy comes in as a new approach to respond to these growing problems. This study demonstrates the promising action of the isolated bacteriophages ΦZH1 and ΦZH2 as therapy against motile Aeromonas septicemia in Nile tilapia caused by Aeromonas hydrophila.

Keywords: Phage therapy; Aeromonas hydrophila; Motile Aeromonas Septicemia; Fish; Nile tilapia

Egyptian aquaculture has developed rapidly in recent years, but many problems face fish farming; one of them is bacterial infection, which constitutes a huge menace for aquaculture, leading to disastrous economic losses and health risks for the consumer. Aeromonas hydrophila is a Gram-negative, rod-shaped enterobacterium distributed widely in aquatic environments. It is one of the most important agents of outbreaks in freshwater fish. The main problem involving the use of antibiotics against Aeromonas infections is the development of resistance by these bacteria. A bacteriophage is a virus that infects bacteria and can either instantly kill a bacterial cell or integrate its DNA into the host bacterial chromosome. If the phage DNA is integrated into the host, the phage can then stay within the bacteria causing no harm; this pathway is called the lysogenic cycle. On the other hand, the phage can also cause eventual lysis and death of the host after it reproduces inside the host and escapes with numerous progeny through the lytic cycle. Phages are effective against multidrug-resistant pathogenic bacteria because the mechanisms by which they induce bacteriolysis differ completely from those of antibiotics. Moreover, phages are self-limiting, meaning that the number of phages remains at a very low level after killing the target bacteria. The role of bacteriophages in the environment has been the subject of intense investigation over the past several years. The development of techniques to study natural viral populations in situ has progressed tremendously. Various aspects of bacteriophage ecology in nature, including abundance, role in microbial mortality and water column trophodynamics, viral decay rates, repair mechanisms, and lysogeny, are gradually being understood. Much research has focused on using phages to control diseases caused by a variety of human pathogenic bacteria, including Salmonella, Listeria, and Campylobacter species.
In addition to current attempts to apply phages in the control of human pathogens, aquatic animal pathogens have also been investigated as targets for phage therapy. A number of phages have been isolated for potential use in phage therapy against important aquatic animal pathogens such as Aeromonas salmonicida in brook trout (Oncorhynchus fontinalis), Vibrio harveyi in shrimp (Penaeus monodon), Pseudomonas plecoglossicida in ayu (Plecoglossus altivelis), and Lactococcus garvieae in yellowtail (Seriola quinqueradiata). The efficiency of an Aeromonas hydrophila bacteriophage isolated from the ponds of Abbassa was compared to that of the antibiotic ciprofloxacin for the treatment of "Motile Aeromonas Septicemia" (MAS) in Oreochromis niloticus. Hence, this study aims to isolate and identify lytic phages of A. hydrophila and to assess the efficiency of these phages in controlling A. hydrophila in aquaria.

Isolation of Aeromonas hydrophila bacteriophages

Bacteriophages were isolated from sewage samples by the specific enrichment method of Adams. The supernatants were filtered through a 0.45 μm pore size syringe filter and assayed for phage activity by the double-layer agar technique. The presence of phage in the filtrate was detected by the spot test and plaque assay methods as described by Eisenstark. Phages were propagated and purified from single-plaque isolates according to Adams. Plaques were distinguished by differences in plaque morphology, size, and turbidity, and were purified by successive single-plaque isolation using the propagating host strain. Afterward, phage suspensions of high-titer lysates were prepared in two ways; first, confluent plate lysates were prepared according to the method of Eisenstark.

Physical characterization of isolated bacteriophages

Effect of temperature on isolated phages: The effect of temperature on the viability of the phages was studied by the method described by Clokie. Phage suspensions were incubated at 30, 40, 50, 60, 70, 80 and 90°C in a water bath for 10 min, and phage survival was determined.

Effect of irradiation by ultraviolet light on the isolated bacteriophages: The effect of UV light on the viability of the phages was studied by the method described by Clokie. UV sensitivity was determined by exposing 5 ml of phage lysate (4.8×10^11 pfu/ml and 5.0×10^11 pfu/ml for phages ΦZH1 and ΦZH2 respectively, diluted 0.1 in saline solution) in an uncovered small Petri dish to UV light at a distance of 20 cm. A Cosmolux UVA lamp (A1-11-40 W, PREHEAT-BIPIN, made in W. Germany) was used as the UV source, for the following exposure times: 20, 40, 60, 80, 100 and 120 minutes. Phage survival was determined by the plaque assay technique.

Effect of different MOI on bacterial growth

Multiplicity of infection (MOI) was defined as the ratio of virus particles to potential host cells and was prepared according to Birge.

Morphological characteristics (electron microscopy)

High-titer phage stocks (4.8×10^11 pfu/ml and 5.0×10^11 pfu/ml) were negatively stained with 2% (w/v) aqueous uranyl acetate (pH 4.0) on a carbon-coated grid and examined by transmission electron microscopy (JEOL JEM-1400CX) at an accelerating voltage of 80 kV.

Effect of bacteriophages on mortality of Nile tilapia caused by Aeromonas hydrophila infections

Nile tilapia (O. niloticus) (weight range: 25-40 g) were obtained from the ponds of the Fish Research Center of Abbassa, Abo-Hammad, Sharkia, and were transferred alive to the Microbiological Laboratory, Faculty of Veterinary Medicine, Zagazig University.
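The titers and MOI values used throughout these methods (e.g., 4.8×10^11 pfu/ml, MOI = 10) rest on routine plaque-assay arithmetic: titer = plaques / (plated volume × dilution), and MOI = pfu added per cfu of host. A minimal sketch of that arithmetic follows; the plate count, volume, and dilution in the example are hypothetical illustrations, not data from this study.

```python
# Sketch of the routine arithmetic behind the quoted titers and MOIs.
def titer_pfu_per_ml(plaques, volume_ml, dilution):
    """Phage titer from a countable double-layer agar plate."""
    return plaques / (volume_ml * dilution)

def moi(phage_pfu_per_ml, bacteria_cfu_per_ml):
    """Multiplicity of infection: phage particles per potential host cell."""
    return phage_pfu_per_ml / bacteria_cfu_per_ml

# Hypothetical example: 48 plaques from 0.1 ml of a 10^-9 dilution.
stock = titer_pfu_per_ml(48, 0.1, 1e-9)       # 4.8e11 pfu/ml
print(f"{stock:.1e} pfu/ml, MOI = {moi(stock, 4.8e10):.0f}")  # MOI = 10
```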
All fishes were kept in tanks (40 cm×70 cm×60 cm) with approximately 45 L of de-chlorinated tap water, acclimatized for 1 week prior to the experiment, and fed with organic feeds. Four aquaria were used for the experiment, and each aquarium contained 5 fishes. The aquaria were maintained at 28 ± 1°C with a pH of 7. A bacterial inoculum of A. hydrophila was prepared using a 24-h-old culture of A. hydrophila inoculated in TSB. The inoculum was subjected to 10^-7, 10^-8 and 10^-9 dilutions. These were transferred to Falcon tubes and centrifuged for 30 min at 3,000×g. After centrifugation, the supernatant was removed and 5 mL of normal saline solution (0.9%) was added. The lethal dose (LD) of A. hydrophila was determined by intraperitoneally injecting the fishes with 0.5 mL doses. The dose sufficient to cause death among the fishes within 72-96 h was taken as the optimum LD100 (lethal dose causing 100% mortality). Clinical signs of MAS such as skin lesions, hyperemia, rotting of the caudal and dorsal fins, and hyperemia at the fin bases were observed prior to the experimental treatment. Mortality in the tanks was monitored for 15 days for each challenge. The concentrations of bacteria and phage in the water tanks were monitored by inoculating the corresponding dilutions on TSA plates to detect the bacteria, and by using the double-layer agar plaque assay to determine the phage concentration. Only two groups were injected with A. hydrophila. Group 1 contained water only. Group 2, which contained water and fishes, served as the negative control. Group 3, used as the positive control, contained water and fishes that were injected with A. hydrophila but not treated with bacteriophages. Group 4 was also injected with A. hydrophila and was treated with bacteriophage. Administration of bacteriophage (Group 4) was done 24 h after injection of A. hydrophila.

Morphology of plaques and phages

Results in Table 1 and Figures 1 and 2 showed that two plaque phenotypes appeared. One, designated ΦZH1, measured 3.0 mm in diameter with a turbid center (LTC), while the other, designated ΦZH2, measured 4.0 mm in diameter with a clear center (LCC).

| Phage No. | Appearance of plaques | Diameter of isolated phages (nm) |

Table 1: Morphology of plaques and isolated phages under the electron microscope after negative staining.

Morphology of isolated phages under the electron microscope

Five successive-transfer plaques with a clear area and center were selected to prepare high-titer phage stocks (4.8×10^11 pfu/ml and 5.0×10^11 pfu/ml). Each phage stock was viewed under the electron microscope after staining with 1% potassium phosphotungstate at pH 6.4. Results in Table 1 and Figures 3-5 showed two phages. The icosahedral heads of these phages (ΦZH1 and ΦZH2) measured 100 and 50 nm respectively, and the phages had very short non-contractile tails measuring 30 and 7 nm respectively. Phages ΦZH1 and ΦZH2 adsorbed to the cell wall and do not have receptors on the flagella (Figure 6). On the basis of phage morphology, the phages ΦZH1 and ΦZH2 belong to the family Podoviridae. The host range of the isolated phages (ΦZH1 and ΦZH2) was determined against isolates of Aeromonas bacteria and 4 strains of non-Aeromonas bacteria. Results in Table 2 revealed that the isolated phages ΦZH1 and ΦZH2 were very specific, infecting Aeromonas, and were not able to infect any isolates of non-Aeromonas bacteria.
Table 2: Host range of A. hydrophila phages ΦZH1 and ΦZH2. (Columns: Hosts; Sources; Formation of lytic area by spot test. The hosts tested included A. hydrophila isolates from the ponds of the Aquaculture Research Center of Abbassa, Abo-Hammad, Sharkia, and non-A. hydrophila bacteria, Escherichia coli and P. aeruginosa 62, from the Central Laboratory of the Aquaculture Research Center of Abbassa, the Faculty of Science, Zagazig University (accession number KR270348), and the Faculty of Pharmacy, Zagazig University (GenBank BioProject 219845).)

Effect of thermal inactivation

The infectivity of both phages was highly sensitive to temperatures above 40°C, where the phages lost infectivity by 88% and 50% for ΦZH1 and ΦZH2 respectively (Figure 7).

Effect of irradiation by ultraviolet light on the isolated phages

The exposure of purified phage suspensions (10^11 pfu/ml) to UV irradiation at a height of 20 cm for different periods of time (0-120 min) is illustrated in Figure 8. From these results, the isolated phages (ΦZH1 and ΦZH2) are resistant to UV irradiation: their infectivity remained active after exposure to UV (40 W) for 120 min. ΦZH1 lost 50% of its infectivity after exposure to UV irradiation for 100 min, while phage ΦZH2 reached this percentage after exposure to UV for 80 min.

The two phages in the current study, ΦZH1 and ΦZH2, adsorbed at different rates. The maximum adsorption and percentage of adsorption are presented in Table 3. The adsorption rates were fast, since the maximum adsorption reached 51% and 66.8% for phages ΦZH1 and ΦZH2 after 20 and 30 min respectively. The adsorption constants (K) were 2.7×10^-13 ml/min and 2.2×10^-13 ml/min for ΦZH1 and ΦZH2 respectively, as determined by the formula K = 2.3/((B)·t) × log(P0/P), where P0 = phage assay at zero time, P = phage not adsorbed at time t min, (B) = concentration of bacteria as number of cells/ml, and K = velocity constant expressed as ml/min. The adsorption rate constants for both phages were similar.

| Incubation time (min) | ΦZH1 | ΦZH2 |

Table 3: Adsorption rates of A. hydrophila phages ΦZH1 and ΦZH2 in TS broth at 37°C, MOI > 1.0.

One-step growth experiment

The one-step growth curve (Figure 9) shows that the latent period was about 20 min, the rise period was 60 min, and the mean burst size was about 113 and 114 pfu per infected cell for ΦZH1 and ΦZH2 respectively.

Effect of different concentrations of phages on growth of A. hydrophila

Each phage was used at MOI = 10, 1 and 0.1 over a time course of 0-24 h. Data in Table 4 showed that the highest reduction in bacterial count was observed when phage ΦZH1 or ΦZH2 was added separately to A. hydrophila at MOI = 10 and incubated at 37°C for 12 h. On the other hand, addition of phage ΦZH1 or ΦZH2 to A. hydrophila at MOI less than 10 (1.0 or 0.1) did not give an efficient reduction of bacterial growth.

| MOI = 0.1 | MOI = 1 | MOI = 10 | MOI = 0.1 | MOI = 1 | MOI = 10 |

Table 4: Effect of incubation at several different multiplicities of infection (MOI) on the growth curve of the host A. hydrophila (one set of MOI columns for each phage).

The challenges to motile Aeromonas septicemia (MAS)-causing bacteria by both ΦZH1 and ΦZH2

Results in Table 5 showed that the addition of A. hydrophila to fishes in aquaria containing Nile water increased mortality compared to its control; in the second aquarium, mortality of this treatment reached 68%. Addition of phage to this treatment reduced mortality to 18%, with a reduction efficiency above 50%.
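The adsorption-constant formula quoted above translates directly into code. A sketch, taking the formula as written with a base-10 logarithm; the example uses the 51% adsorption of ΦZH1 at 20 min from Table 3, but the bacterial concentration B during the assay is not reported in the paper, so the 10^8 cells/ml below is an assumption and the printed K simply scales as 1/B.

```python
import math

# Sketch: K = 2.3 / (B * t) * log10(P0 / P), as given in the text, where
# B = bacterial concentration (cells/ml), t = time (min), P0 = free phage
# at time zero, P = free (unadsorbed) phage remaining at time t.
def adsorption_constant(P0, P, B, t_min):
    return 2.3 / (B * t_min) * math.log10(P0 / P)

# 51% of phage ZH1 adsorbed by 20 min (Table 3); B = 1e8 cells/ml is an
# assumed value, not reported in the paper, and K scales as 1/B.
K = adsorption_constant(P0=1.0, P=1.0 - 0.51, B=1e8, t_min=20)
print(f"K = {K:.2e} ml/min")
```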
The phages also reduced the total count of bacteria from 4.8×10^13 cfu/ml to 7.3×10^7 cfu/ml. On the other hand, the phage titer in the fourth treatment increased above the initial titer, changing from 8.1×10^9 to 8.1×10^13 pfu/ml.

| Treatment | Mortality % | Total bacterial count after 72 h | Phage titer after 72 h |
| Nile water + fishes | 5.00 | 3.9×10^11 | 0.00 |
| Nile water + fishes + A. hydrophila (3.73×10^9 cfu/ml) | 68.00 | 4.8×10^13 | 0.00 |
| Nile water + fishes + A. hydrophila + phages (8.1×10^9 pfu/ml) (MOI=2.1) | 18.00 | 7.3×10^7 | 8.1×10^13 |

Table 5: The challenges to motile Aeromonas septicemia (MAS)-causing bacteria by both ΦZH1 and ΦZH2.

Fish diseases are a major problem for the fish farming industry, and among them bacterial infections are considered to be a major cause of mortality in fish. These drawbacks have forced fish pathologists to seek other alternatives; the use of natural immunostimulants in fish culture for the prevention of diseases is a promising new development and could solve the problems of massive antibiotic use. Natural immunostimulants are biocompatible, biodegradable, and safe for both the environment and human health. Moreover, they possess an added nutritional value. Aeromonas hydrophila has been described as the dominant infectious agent of "fish bacterial septicemia" in freshwater cultured finfish all over the world. A. hydrophila has also been associated with EUS, which is a major problem in different countries. The clinical signs observed in the examined fish suffering from motile Aeromonas septicemia (MAS) were previously reported by Samal, who reported that septicemia, ascites, erosion, ulceration, detachment of scales, exophthalmia, and muscular necrosis were the most predominant clinical signs of MAS in Nile tilapia. In this study, the results showed that the two bacteriophages ΦZH1 and ΦZH2 infect their specific host, A. hydrophila, isolated from Nile water. According to the electron micrographs, the two phages were characterized as podoviruses. The dimensions of the isolated podoviruses were similar or nearly similar to each other and also resembled those of phages previously isolated for A. hydrophila. The A. hydrophila phages ΦZH1 and ΦZH2 infected some A. hydrophila strains, but none of the other genera or species tested. These results are in accordance with those obtained by Mitchell. Temperature is a crucial factor for bacteriophage survivability. The results showed that the ΦZH1 and ΦZH2 phages were thermostable over a temperature range of 30-60°C, remaining active after 10 min of exposure at 60°C. Interestingly, ΦZH1 and ΦZH2 survived at 37°C with no significant loss in phage particle number, which is a very important parameter for phages considered for therapeutic application. The phages examined here were tolerant to UV irradiation, with distinct rates of inactivation: ΦZH1 and ΦZH2 lost 50% of their infectivity after exposure times of 100 and 80 min respectively. These results are in accordance with those obtained by Ramanandan. This finding indicates that the phages are suitable for use in field experiments, where their infectivity would not be affected by the UV in the sunlight reaching the water of an aquarium. The data obtained in the one-step growth experiment were comparable with data presented by Cheng, who conducted a similar growth experiment with Aeromonas species.
Those authors reported that the latent period of phage DH1 was 90 min, much longer than that of the Aeromonas hydrophila phages Aeh1 and Aeh2; in our study, by contrast, the latent period of phages ΦZH1 and ΦZH2 was 20 min. The average burst size of phage DH1 was about 125 pfu/cell, which was also larger than those of Aeh1 and Aeh2, while the burst sizes of phages ΦZH1 and ΦZH2 were 113 and 114 pfu per infected cell. The isolated phages (ΦZH1 and ΦZH2), administered via injection, were found to be effective in treating fish infected with Aeromonas hydrophila, as shown by the significant decrease in the number of A. hydrophila found in the water of treated fish. Our results showed that the addition of phages ΦZH1 and ΦZH2 (MOI = 2.1) to Nile water in aquaria inoculated with A. hydrophila (3.37×10⁹ cfu/ml) reduced the percentage of mortality from 68% to 18% after treatment for 15 days. The total number of bacteria in the polluted aquaria also fell from 4.18×10¹³ cfu/ml to 7.5×10⁷ cfu/ml after three days of treatment. The efficiency of the isolated phages in reducing A. hydrophila in Nile water was greater than that found by Donn Cruz-Papa.
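As promised in the Results, here is a worked sketch of the adsorption-constant formula K = 2.3/(B·t) × log(P₀/P), applied to the ΦZH1 figures quoted above. The bacterial concentration B is not stated in this excerpt, so the value used below is hypothetical, chosen only to make the example run; with it, the computed K lands in the same order of magnitude as the reported 2.7×10⁻¹³ ml/min.

```python
import math

def adsorption_constant(p0: float, p: float, cells_per_ml: float, minutes: float) -> float:
    """Adsorption rate constant K (ml/min) per the formula in the text:
    K = 2.3 / (B * t) * log10(P0 / P)."""
    return 2.3 / (cells_per_ml * minutes) * math.log10(p0 / p)

# Phage ZH1: 51% adsorbed after 20 min, so the unadsorbed fraction P/P0 = 0.49.
# B = 1e11 cells/ml is a hypothetical host density (not given in this excerpt).
k_zh1 = adsorption_constant(p0=1.0, p=0.49, cells_per_ml=1e11, minutes=20)
print(f"K(phi ZH1) ~ {k_zh1:.1e} ml/min")  # ~3.6e-13, same order as the reported 2.7e-13
```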
A hiatal hernia (HH) is a partial or total migration of the stomach across the diaphragmatic hiatus up to the mediastinum, alone or together with other abdominal organs. The estimated prevalence in the United States is between 10% and 80% (1). It is usually associated with high body mass index (BMI) and older age. However, the real prevalence of hiatal hernias is not easy to establish because many patients are asymptomatic and, hence, are not diagnosed. Hiatal hernias are classified into four types (I to IV) (Figure 1); the list is also summarized in a small lookup sketch below:
- Type I, or "sliding hernia", is one where the gastroesophageal junction migrates proximal to the esophageal hiatus. This type occurs with enlargement of the esophageal hiatus and relaxation of the phrenoesophageal ligament; it represents about 95% of hiatal hernias (2).
- Type II hernia is paraesophageal (PEH), due to the enlargement of the esophageal hiatus in the anterior and lateral part of the phrenoesophageal membrane. The gastric fundus or body herniates through this defect whereas the gastroesophageal junction remains in the abdomen (3). This type of hernia is rare, accounting for less than 1% of all hiatal hernias.
- Type III is the most common paraesophageal hiatal hernia. It combines the characteristics of both type I and type II hernias. The phrenoesophageal membrane is loose and elongated, the esophagogastric junction is displaced into the thorax, and a defect in the antero-lateral portion of the membrane allows the stomach to migrate into the mediastinum (2). It represents about 5% of all hiatal hernias (4).
- Type IV hiatal hernias are characterized by a large diaphragmatic hiatal defect. The stomach and other intra-abdominal organs can herniate into the mediastinum (4). The most commonly herniated organs are the small and large intestine, with or without associated omentum; however, the spleen, pancreas and liver can also migrate into the mediastinum. This type is the least common and accounts for about 0.1% of hiatal hernias (4).
- Finally, a type V hernia has been described by some authors and is a herniated fundoplication or wrap migration.
In types III and IV, the hernia carries a risk of gastric volvulus. An organoaxial rotation occurs in about 60% of cases and consists of rotation around the axis that connects the esophago-gastric junction (EGJ) and the pylorus. In a mesenteroaxial rotation, by contrast, the motion is around the short axis of the stomach: the antrum rotates anteriorly and superiorly so that the posterior surface of the stomach flips anteriorly. This is sometimes called the "upside down" stomach (5). Hiatal hernias are common and, again, often asymptomatic. There is no indication to pursue a diagnosis of hiatal hernia in asymptomatic patients, but symptomatic patients need evaluation and should be considered for elective surgical repair, especially if the hernia is large and associated with obstructive symptoms or volvulus. Obstructed or gangrenous PEH require emergent repair and can carry significant morbidity and mortality. The optimal workup changes depending on the patient's history and clinical presentation. Patients evaluated electively for a suspected hiatal hernia are first examined for a history of previous surgery, especially upper GI surgery, and comorbidities. Then a thorough analysis of signs and symptoms is mandatory.
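As flagged above, the classification reduces to a small lookup table. The sketch below simply encodes the descriptions and approximate prevalences stated in the preceding list; no figures beyond those in the text are used.

```python
# Hiatal hernia classification as described above; prevalences are the
# approximate shares of all hiatal hernias quoted in the text.
HIATAL_HERNIA_TYPES = {
    "I": ("Sliding: GEJ migrates proximal to the esophageal hiatus", "about 95%"),
    "II": ("Paraesophageal (PEH): fundus/body herniates, GEJ stays abdominal", "under 1%"),
    "III": ("Mixed: combines features of types I and II", "about 5%"),
    "IV": ("Large hiatal defect: stomach plus other abdominal organs herniate", "about 0.1%"),
    "V": ("Herniated fundoplication / wrap migration (described by some authors)", "not stated"),
}

for hh_type, (description, prevalence) in HIATAL_HERNIA_TYPES.items():
    print(f"Type {hh_type:>3} ({prevalence}): {description}")
```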
Although small hernias may be associated with typical gastro-esophageal reflux disease (GERD) symptoms such as heartburn or regurgitation, patients presenting with a large HH more often complain of obstructive symptoms such as chest or epigastric pain, dysphagia, postprandial pain or vomiting, due to compression of adjacent mediastinal and thoracic structures and organs. Extra-digestive symptoms are also frequently reported, including respiratory symptoms such as recurrent aspiration, pneumonia, cough, shortness of breath and dyspnea on exertion. Furthermore, fatigue and chronic iron-deficiency anemia can be an indirect sign of hiatal hernia, as large hernias are sometimes associated with mechanical (Cameron's) ulcers where the stomach drapes over the hiatus (6). The first part of the work-up is common for all patients. Chest X-ray and upper gastrointestinal series are the initial tests (Figure 2). Chest X-ray may identify an opacity of the soft tissue with or without air-fluid levels. A retrocardiac air-fluid level is characteristic of a paraesophageal hiatal hernia, and endoluminal gas may be seen in cases of intestinal herniation. Intestinal loops may be seen in an unusual vertical pattern toward the chest, with a typical displacement or ascending deformity of the transverse colon in cases of colon herniation (7). Contrast-swallow studies help to identify the size and the position of the EGJ in relation to the hiatus and the stomach (8). They also display the axis of a volvulus, if present. In addition, contrast studies provide information about gastric outlet or esophageal obstruction and may suggest abnormal esophageal motility and associated alterations such as esophageal lesions, strictures or diverticula. Although not part of a standard work-up, a large number of patients are referred to the surgeon with a CT scan performed for dysphagia and dyspnea. CT images allow assessment of the dimensions of the hernia, the width of the hiatus, and the migration of other abdominal organs into the mediastinum, and can reveal complications such as gastric volvulus (9). However, a CT scan does not replace the radiological examinations necessary for a correct preoperative workup, because the incidental diagnosis of hiatal hernia, especially if small, must always be supported by further investigations (2). In fact, some studies have reported an increased incidence of hiatal hernia during CT colonography, not confirmed on abdominal CT without colonic distention (10,11) (Figure 3). All patients with suspected or confirmed symptomatic hiatal hernia should undergo an esophagogastroduodenoscopy (EGD); however, given the diffusion of endoscopy, hiatal hernias are frequently diagnosed when endoscopy has already been performed for other symptoms and/or reasons. Endoscopy helps define the anatomy, the size and type of the hernia, and any associated esophageal and gastric mucosal disease such as esophagitis, Barrett's esophagus and cancer. It can also suggest delayed gastric emptying when retained food is found in the stomach. A hiatal hernia is diagnosed by EGD by evaluating the distance between the EGJ and the diaphragmatic incisura, which is the impression of the diaphragmatic hiatus on the gastric wall. The endoscopic diagnosis of hiatal hernia is defined as a distance greater than 2 cm (3).
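The endoscopic criterion just given reduces to a one-line decision rule. A minimal sketch follows; the function name is ours, and the 2 cm cutoff is the one stated in the text.

```python
def egd_hiatal_hernia(gej_to_incisura_cm: float) -> bool:
    """Endoscopic rule from the text: a hiatal hernia is diagnosed when the
    distance between the EGJ and the diaphragmatic incisura exceeds 2 cm."""
    return gej_to_incisura_cm > 2.0

print(egd_hiatal_hernia(1.5))  # False: within normal limits
print(egd_hiatal_hernia(3.0))  # True: meets the endoscopic definition
```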
However, some pathological conditions can complicate the endoscopic diagnosis: in patients with Barrett's esophagus the identification of the EGJ can be difficult (6), while in the presence of a wide separation of the crura the diaphragmatic impression can be hard to recognize (3). The EGJ position can also be assessed in retroflexion using the Hill classification (12). This classification evaluates the EGJ and hiatal integrity in terms of a "flap-valve" mechanism and can also be used to predict reflux. According to this classification, a grade I flap-valve is the normal configuration, defined by the presence of a prominent fold of tissue closely approximated to the shaft of the endoscope and extending 3–4 cm along the lesser curve at the entrance of the esophagus into the stomach; there is no hiatal hernia. In grade II, the fold of tissue is flattened and there are occasional periods of opening and rapid closing around the endoscope with respiration. In a Hill grade III flap-valve there is no fold at the entrance of the esophagus into the stomach and the endoscope is not tightly gripped by the tissues; this condition is frequently associated with sliding hiatal hernias. Lastly, a Hill grade IV valve is defined by extrinsic compression of the gastric mucosa by the diaphragmatic hiatus; there is essentially no fold, the lumen of the esophagus gapes open, and the squamous epithelium can be viewed from below. This grade is always associated with a hiatal hernia.

Functional and motility studies

In our institution, whenever possible, we study all patients scheduled for a hiatal hernia repair with high-resolution manometry (HRM); this exam provides important details about the motility of the esophagus and the EGJ. HRM can also identify and calculate the size of the sliding part of the hernia by assessing the spatial dissociation between the lower esophageal sphincter (LES) and the diaphragmatic sphincter, visualized as a double-peak pressure profile at the EGJ (13); a toy numerical illustration of this double-peak readout follows at the end of this section. The Chicago Classification of hiatal hernia by HRM is based on this spatial separation of the two "high pressure zones" (14,15). However, accurate positioning of the HRM probe can be challenging, especially in patients with a large hernia (16,17). LES relaxation in patients with paraesophageal HH can be impaired, resulting in an increased intrabolus pressure and ultimately, according to the Chicago classification, EGJ outflow obstruction. HRM can help tune the operative strategy, since findings of severe dysmotility or pseudoachalasia may indicate a simple hiatal repair without fundoplication. In the case of sliding hiatal hernias, a pH test is useful to identify the presence of reflux and thus the patients who might benefit from antireflux surgery in addition to hernia repair. In symptomatic patients presenting with a large hiatal hernia, the benefit of performing a pH study is controversial, because a negative result would not change the need for operative repair (18). In patients complaining of respiratory symptoms, particularly shortness of breath and dyspnea on exertion, pulmonary function testing (PFT) may offer important information and also provide risk assessment. PFTs may be useful for assessing the degree of pulmonary impairment and for ruling out underlying pulmonary disease; however, in the case of coexisting pulmonary disease, it may be difficult to determine whether the hernia or the lung disease is responsible for the patient's symptoms.
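The double-peak readout mentioned above can be illustrated with a deliberately simplified sketch. The pressure values and the 15 mmHg peak threshold below are invented for illustration only; real HRM analysis is far more involved. The sliding component of the hernia is read off as the axial separation between the two high-pressure zones.

```python
# Hypothetical 1-cm-resolution axial pressure profile (mmHg) across the EGJ,
# showing two high-pressure zones: one for the LES, one for the crural diaphragm.
positions_cm = list(range(10))  # axial position along the catheter
pressures = [5, 8, 30, 12, 6, 7, 25, 9, 5, 4]

# Local maxima above a (hypothetical) 15 mmHg threshold.
peaks = [i for i in range(1, len(pressures) - 1)
         if pressures[i] > 15 and pressures[i - 1] < pressures[i] > pressures[i + 1]]

if len(peaks) == 2:
    les_cm, crura_cm = positions_cm[peaks[0]], positions_cm[peaks[1]]
    print(f"Double peak: sliding hernia size ~ {crura_cm - les_cm} cm")
else:
    print("Single high-pressure zone: no manometric hernia detected")
```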
An echocardiogram could also be useful to rule out cardiac dysfunction as the culprit of symptoms (19). In patients complaining of chest pain, a cardiac stress test could be performed to exclude myocardial ischemia. After assessing the anatomy of the hernia and the function of the esophagus, patients are evaluated by the anesthesiologists for a general risk assessment, especially since this condition affects primarily the elderly. Recurrence rates range from 5% to 42% (20,21). The mechanism of recurrence is still not well understood, but technical aspects of the primary repair, age, perhaps elevated body mass index (BMI), and pulmonary disease have been considered possible risk factors for recurrence (22). Recurrences can present with an intact hiatus, a lateral defect, an anterior defect, a posterior defect or an anteroposterior defect, listed in increasing order of frequency (20). In addition, apparently well-done fundoplications and cruroplasties in symptomatic patients on occasion require revision because of over-tightness. In patients with a suspected hiatal hernia recurrence, a careful workup is mandatory. Although recurrence is frequent, radiological recurrence alone is not an indication for redo surgery, since quality of life is impacted by symptoms and not by the radiological recurrence (23). Only symptomatic patients are surgical candidates. Obstructive symptoms are the most commonly reported in cases of recurrence and include dysphagia, early satiety, anorexia, regurgitation, vomiting, weight loss and postprandial bloating. In the elderly population, alterations in eating habits, postprandial dyspnea and early satiety may be related to aging, in the presence of a negative workup (24). If redo surgery is necessary, a thorough reading of the previous operative report is mandatory to understand the mechanism of failure and help plan the surgical strategy in the clinical context of the patient. The operative strategy may change depending on the size of the hernia, the presence of a residual sac, the length of the intra-abdominal esophagus, the presence and type of fundoplication, and the presence of a mesh. The work-up is similar to that for non-recurrent patients. Contrast swallow and EGD are useful to assess the size of the recurrent hernia and the position of the EGJ, to detect mechanical and functional obstruction related to the hiatal repair and fundoplication (if present), and to identify esophageal and gastric mucosal disease (Figure 4). A CT scan with sagittal, coronal, and 3D reformatted images is very useful in patients with altered anatomy because of previous upper-GI or thoracic surgery (9). Hiatal hernia may present as a rare acute complication requiring urgent surgical management, mainly to correct acute gastric volvulus or ischemia with perforation. Early recognition and intervention are key. In the case of an emergency presentation, excessive investigation may lead to delay in treatment and suboptimal outcomes (25). Patients with emergency presentations of hiatal hernia may present with Borchardt's triad: severe epigastric pain, retching with inability to vomit, and inability to pass a nasogastric tube into the stomach. A CT scan may be useful for patients with suspected complications from a gastric volvulus; it allows visualization of the herniated organs within the chest cavity. Furthermore, in the case of intestinal obstruction and strangulation, dilated intestinal segments with air-fluid levels can be visualized within the chest cavity and abdomen.
The CT scan can also point out the presence of gastric necrosis through suggestive findings such as pneumatosis of the gastric wall, free gas and fluid outside the gastric wall within the hernia sac, and lack of contrast enhancement of the gastric wall (26). An EGD can also have a therapeutic role in the case of gastric volvulus, helping to decompress the stomach and position a nasogastric tube. Insufflation of the stomach can sometimes unfold the volvulus, changing the operative strategy to a semi-urgent operation in daylight hours. In the meantime, a nasogastric tube is placed to maintain the decompression (26). Patients' symptoms, clinical presentations and hiatal hernia type drive the selection of the most appropriate workup for hiatal hernia. For elective HH repair we advocate the use of UGI series, EGD and HRM as first-line pre-operative tests. More specific functional and morphological studies such as pH testing, PFT, and CT scan should be used case by case depending on the hernia size, the patient's symptoms and the setting. We would like to thank Catherine Cers for the graphic support. Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/ales.2020.03.02). The authors have no conflicts of interest to declare. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
- Wu AH, Tseng CC, Bernstein L. Hiatal hernia, reflux symptoms, body size, and risk of esophageal and gastric adenocarcinoma. Cancer 2003;98:940-8. [Crossref] [PubMed]
- Abbara S, Kalan MMH, Lewicki AM. Intrathoracic stomach revisited. AJR Am J Roentgenol 2003;181:403-14. [Crossref] [PubMed]
- Kahrilas PJ, Kim HC, Pandolfino JE. Approaches to the diagnosis and grading of hiatal hernia. Best Pract Res Clin Gastroenterol 2008;22:601-16. [Crossref] [PubMed]
- Krause W, Roberts J, Garcia-Montilla RJ. Bowel in Chest: Type IV Hiatal Hernia. Clin Med Res 2016;14:93-6. [Crossref] [PubMed]
- Rashid F, Thangarajah T, Mulvey D, et al. A review article on gastric volvulus: A challenge to diagnosis and management. Int J Surg 2010;8:18-24. [Crossref] [PubMed]
- Wallner B, Sylvan A, Janunger KG. Endoscopic assessment of the "Z-line" (squamocolumnar junction) appearance: reproducibility of the ZAP classification among endoscopists. Gastrointest Endosc 2002;55:65-9. [Crossref] [PubMed]
- Eren S, Gümüş H, Okur A. A rare cause of intestinal obstruction in the adult: Morgagni's hernia. Hernia 2003;7:97-9. [Crossref] [PubMed]
- Kohn GP, Price RR, DeMeester SR, et al. Guidelines for the management of hiatal hernia. Surg Endosc 2013;27:4409-28. [Crossref] [PubMed]
- Eren S, Ciriş F. Diaphragmatic hernia: diagnostic approaches with review of the literature. Eur J Radiol 2005;54:448-59. [Crossref] [PubMed]
- Pickhardt PJ, Boyce CJ, Kim DH, et al. Should small sliding hiatal hernias be reported at CT colonography?
AJR Am J Roentgenol 2011;196:W400-4 [Crossref] [PubMed] - Revelli M, Furnari M, Bacigalupo L, et al. Incidental physiological sliding hiatal hernia: a single center comparison study between CT with water enema and CT colonography. Radiol Med 2015;120:683-9. [Crossref] [PubMed] - Hill LD, Kozarek RA, Kraemer SJ, et al. The gastroesophageal flap valve: in vitro and in vivo observations. Gastrointest Endosc 1996;44:541-7. [Crossref] [PubMed] - Bredenoord AJ, Weusten BLAM, Carmagnola S, et al. Double-peaked high-pressure zone at the esophagogastric junction in controls and in patients with a hiatal hernia: a study using high-resolution manometry. Dig Dis Sci 2004;49:1128-35. [Crossref] [PubMed] - Weijenborg PW, van Hoeij FB, Smout AJ, et al. Accuracy of hiatal hernia detection with esophageal high-resolution manometry. Neurogastroenterol Motil 2015;27:293-9. [Crossref] [PubMed] - Kahrilas PJ, Bredenoord AJ, Fox M, et al. The Chicago Classification of esophageal motility disorders, v3.0. Neurogastroenterol Motil 2015;27:160-74. [Crossref] [PubMed] - Swanstrom LL, Jobe BA, Kinzie LR, et al. Esophageal motility and outcomes following laparoscopic paraesophageal hernia repair and fundoplication. Am J Surg 1999;177:359-63. [Crossref] [PubMed] - Boushey RP, Moloo H, Burpee S, et al. Laparoscopic repair of paraesophageal hernias: a Canadian experience. Can J Surg 2008;51:355-60. [PubMed] - Broeders JA, Draaisma WA, Bredenoord AJ, et al. Oesophageal acid hypersensitivity is not a contraindication to Nissen fundoplication. Br J Surg 2009;96:1023-30. [Crossref] [PubMed] - Naoum C, Falk GL, Ng ACC, et al. Left atrial compression and the mechanism of exercise impairment in patients with a large hiatal hernia. J Am Coll Cardiol 2011;58:1624-34. [Crossref] [PubMed] - Suppiah A, Sirimanna P, Vivian SJ, et al. Temporal patterns of hiatus hernia recurrence and hiatal failure: quality of life and recurrence after revision surgery. Dis Esophagus 2017;30:1-8. [Crossref] [PubMed] - Rathore MA, Andrabi SIH, Bhatti MI, et al. Metaanalysis of recurrence after laparoscopic repair of paraesophageal hernia. JSLS 2007;11:456-60. [PubMed] - Nason KS, Luketich JD, Qureshi I, et al. Laparoscopic Repair of Giant Paraesophageal Hernia Results in Long-Term Patient Satisfaction and a Durable Repair. J Gastrointest Surg 2008;12:2066-75; discussion 2075-7. [Crossref] [PubMed] - Dallemagne B, Kohnen L, Perretta S, et al. Laparoscopic repair of paraesophageal hernia. Long-term follow-up reveals good clinical outcome despite high radiological recurrence rate. Ann Surg 2011;253:291-6. [Crossref] [PubMed] - Carrott PW, Hong J, Kuppusamy M, et al. Clinical ramifications of giant paraesophageal hernias are underappreciated: making the case for routine surgical repair. Ann Thorac Surg 2012;94:421-6; discussion 426-8. [Crossref] [PubMed] - Shafii AE, Agle SC, Zervos EE. Perforated gastric corpus in a strangulated paraesophageal hernia: a case report. J Med Case Rep 2009;3:6507. [Crossref] [PubMed] - Lidor AO, Kawaji Q, Stem M, et al. Defining recurrence after paraesophageal hernia repair: correlating symptoms and radiographic findings. Surgery 2013;154:171-8. [Crossref] [PubMed] Cite this article as: Laracca GG, Spota A, Perretta S. Optimal workup for a hiatal hernia. Ann Laparosc Endosc Surg 2021;6:20.
A pagan religion consisting mainly of the cult of the ancient Indo-Iranian Sun-god Mithra. It entered Europe from Asia Minor after Alexander's conquest, spread rapidly over the whole Roman Empire at the beginning of our era, reached its zenith during the third century, and vanished under the repressive regulations of Theodosius at the end of the fourth century. Of late the researches of Cumont have brought it into prominence mainly because of its supposed similarity to Christianity. The origin of the cult of Mithra dates from the time that the Hindus and Persians still formed one people, for the god Mithra occurs in the religion and the sacred books of both races, i.e. in the Vedas and in the Avesta. In Vedic hymns he is frequently mentioned and is nearly always coupled with Varuna, but beyond the bare occurrence of his name, little is known of him (Rigveda, III, 59). It is conjectured (Oldenberg, "Die Religion des Veda", Berlin, 1894) that Mithra was the rising sun, Varuna the setting sun; or, Mithra, the sky at daytime, Varuna, the sky at night; or, the one the sun, the other the moon. In any case Mithra is a light or solar deity of some sort; but in Vedic times the vague and general mention of him seems to indicate that his name was little more than a memory. In the Avesta he is much more of a living and ruling deity than in Indian piety; nevertheless, he is not only secondary to Ahura Mazda, but he does not belong to the seven Amshaspands or personified virtues which immediately surround Ahura; he is but a Yazad, a popular demigod or genius. The Avesta however gives us his position only after the Zoroastrian reformation; the inscriptions of the Achaemenidae (seventh to fourth century B.C.) assign him a much higher place, naming him immediately after Ahura Mazda and associating him with the goddess Anaitis (Anahata), whose name sometimes precedes his own. Mithra is the god of light, Anaitis the goddess of water. Independently of the Zoroastrian reform, Mithra retained his place as foremost deity in the northwest of the Iranian highlands. After the conquest of Babylon this Persian cult came into contact with Chaldean astrology and with the national worship of Marduk. For a time the two priesthoods of Mithra and Marduk (magi and chaldaei respectively) coexisted in the capital and Mithraism borrowed much from this intercourse. This modified Mithraism traveled farther northwestward and became the State cult of Armenia. Its rulers, anxious to claim descent from the glorious kings of the past, adopted Mithradates as their royal name (so five kings of Georgia, and Eupator of the Bosporus). Mithraism then entered Asia Minor, especially Pontus and Cappadocia. Here it came into contact with the Phrygian cult of Attis and Cybele from which it adopted a number of ideas and practices, though apparently not the gross obscenities of the Phrygian worship. This Phrygian-Chaldean-Indo-Iranian religion, in which the Iranian element remained predominant, came, after Alexander's conquest, in touch with the Western World. Hellenism, however, and especially Greece itself, remained remarkably free from its influence.
When finally the Romans took possession of the Kingdom of Pergamum, occupied Asia Minor and stationed two legions of soldiers on the Euphrates, the success of Mithraism in the West was secured. It spread rapidly from the Bosporus to the Atlantic, from Illyria to Britain. Its foremost apostles were the legionaries; hence it spread first to the frontier stations of the Roman army. Mithraism was emphatically a soldier religion: Mithra, its hero, was especially a divinity of fidelity, manliness, and bravery; the stress it laid on good fellowship and brotherliness, its exclusion of women, and the secret bond amongst its members have suggested the idea that Mithraism was Masonry amongst the Roman soldiery. At the same time Eastern slaves and foreign tradesmen maintained its propaganda in the cities. When magi, coming from King Tiridates of Armenia, had worshipped in Nero an emanation of Mithra, the emperor wished to be initiated in their mysteries. As Mithraism passed as a Phrygian cult it began to share in the official recognition which Phrygian worship had long enjoyed in Rome. The Emperor Commodus was publicly initiated. Its greatest devotee however was the imperial son of a priestess of the sun-god at Sirmium in Pannonia, Aurelian, who according to the testimony of Flavius Vopiscus never forgot the cave where his mother initiated him. In Rome, he established a college of sun priests and his coins bear the legend "Sol, Dominus Imperii Romani". Diocletian, Galerius, and Licinius built at Carnuntum on the Danube a temple to Mithra with the dedication: "Fautori Imperii Sui". But with the triumph of Christianity Mithraism came to a sudden end. Under Julian it had with other pagan cults a short revival. The pagans of Alexandria lynched George the Arian, bishop of the city, for attempting to build a church over a Mithras cave near the town. The laws of Theodosius I signed its death warrant. The magi walled up their sacred caves; and Mithra has no martyrs to rival the martyrs who died for Christ. The first principle or highest God was, according to Mithraism, "Infinite Time"; this was called Aion or Saeculum, Kronos or Saturnus. This Kronos is none other than Zervan, an ancient Iranian conception, which survived the sharp dualism of Zoroaster; for Zervan was father of both Ormuzd and Ahriman and connected the two opposites in a higher unity and was still worshipped a thousand years later by the Manichees. This personified Time, ineffable, sexless, passionless, was represented by a human monster, with the head of a lion and a serpent coiled about his body. He carried a sceptre and lightning as sovereign god and held in each hand a key as master of the heavens. He had two pairs of wings to symbolize the swiftness of time. His body was covered with zodiacal signs and the emblems of the seasons (i.e. Chaldean astrology combined with Zervanism). This first principle begat Heaven and Earth, which in turn begat their son and equal, Ocean. As in the European legend, Heaven or Jupiter (Oromasdes) succeeds Kronos. Earth is the Spenta Armaiti of the Persians or the Juno of the Westerns, Ocean is Apam-Napat or Neptune. The Persian names were not forgotten, though the Greek and Roman ones were habitually used. Ahura Mazda and Spenta Armaiti gave birth to a great number of lesser deities and heroes: Artagnes (Hercules), Sharevar (Mars), Atar (Vulcan), Anaitis (Cybele), and so on. On the other hand there was Pluto, or Ahriman, also begotten of Infinite Time.
The Incarnate Evil rose with the army of darkness to attack and dethrone Oromasdes. They were however thrown back into hell, whence they escape, wander over the face of the earth and afflict man. It is man's duty to worship the four simple elements, water and fire, air and earth, which in the main are man's friends. The seven planets likewise were beneficent deities. The souls of men, which were all created together from the beginning and which at birth had but to descend from the empyrean heaven to the bodies prepared for them, received from the seven planets their passions and characteristics. Hence the seven days of the week were dedicated to the planets, seven metals were sacred to them, seven rites of initiation were made to perfect the Mithraist, and so on. As evil spirits ever lie in wait for hapless man, he needs a friend and saviour who is Mithra. Mithra was born of a mother-rock by a river under a tree. He came into the world with the Phrygian cap on his head (hence his designation as Pileatus, the Capped One), and a knife in his hand. It is said that shepherds watched his birth, but how this could be, considering there were no men on earth, is not explained. The hero-god first gives battle to the sun, conquers him, crowns him with rays and makes him his eternal friend and fellow; nay, the sun becomes in a sense Mithra's double, or again his father, but Helios Mithras is one god. Then follows the struggle between Mithra and the bull, the central dogma of Mithraism. Ahura Mazda had created a wild bull which Mithra pursued, overcame, and dragged into his cave. This wearisome journey with the struggling bull towards the cave is the symbol of man's troubles on earth. Unfortunately, the bull escapes from the cave, whereupon Ahura Mazda sends a crow with a message to Mithra to find and slay it. Mithra reluctantly obeys, and plunges his dagger into the bull as it returns to the cave. Strange to say, from the body of the dying bull proceed all wholesome plants and herbs that cover the earth, from his spinal marrow the corn, from his blood the vine, etc. The power of evil sends his unclean creatures to prevent or poison these productions, but in vain. From the bull proceed all useful animals, and the bull, resigning itself to death, is transported to the heavenly spheres. Man is now created and subjected to the malign influence of Ahriman in the form of droughts, deluges, and conflagrations, but is saved by Mithra. Finally man is well established on earth and Mithra returns to heaven. He celebrates a last supper with Helios and his other companions, is taken in his fiery chariot across the ocean, and now in heaven protects his followers. For the struggle between good and evil continues in heaven between the planets and stars, and on earth in the heart of man. Mithra is the Mediator (Mesites) between God and man. This function first arose from the fact that as the light-god he is supposed to float midway between the upper heaven and the earth. Likewise a sun-god, his planet was supposed to hold the central place amongst the seven planets. The moral aspect of his mediation between god and man cannot be proven to be ancient. As Mazdean dualists the Mithraists were strongly inclined towards asceticism; abstention from food and absolute continence seemed to them noble and praiseworthy, though not obligatory. They battled on Mithra's side against all impurity, against all evil within and without.
They believed in the immortality of the soul; sinners after death were dragged off to hell, while the just passed through the seven spheres of the planets, through seven gates opening at a mystical word to Ahura Mazda, leaving at each planet a part of their lower humanity until, as pure spirits, they stood before God. At the end of the world Mithra will descend to earth on another bull, which he will sacrifice, and mixing its fat with sacred wine he will make all drink the beverage of immortality. He will thus have proved himself Nabarses, i.e. "never conquered". There were seven degrees of initiation into the Mithraic mysteries. The consecrated one (mystes) became in succession crow (corax), occult (cryphius), soldier (miles), lion (leo), Persian (Perses), solar messenger (heliodromos), and father (pater). On solemn occasions they wore a garb appropriate to their name, and uttered sounds or performed gestures in keeping with what they personified. "Some flap their wings as birds imitating the sound of a crow, others roar as lions", says Pseudo-Augustine (Quaest. Vet. N. Test. in P.L., XXXIV, 2214). Crows, occults and soldiers formed the lower orders, a sort of catechumens; lions and those admitted to the other degrees were participants of the mysteries. The fathers conducted the worship. The chief of the fathers, a sort of pope, who always lived at Rome, was called "Pater Patrum" or "Pater Patratus". The members below the degree of pater called one another "brother", and social distinctions were forgotten in Mithraic unity. The ceremonies of initiation for each degree must have been elaborate, but they are only vaguely known: lustrations and bathings, branding with red-hot metal, anointing with honey, and others. A sacred meal of bread and haoma juice was celebrated, for which in the West wine was substituted. This meal was supposed to give the participants supernatural virtue. The Mithraists worshipped in caves, of which a large number have been found. There were five at Ostia alone, but they were small and could perhaps hold at most 200 persons. In the apse of the cave stood the stone representation of Mithra slaying the bull, a piece of sculpture usually of mediocre artistic merit and always made after the same Pergamean model. The light usually fell through openings in the top, as the caves were near the surface of the ground. A hideous monstrosity representing Kronos was also shown. A fire was kept perpetually burning in the sanctuary. Three times a day prayer was offered to the sun toward the east, south, or west according to the hour. Sunday was kept holy in honour of Mithra, and the sixteenth of each month was sacred to him as mediator. The 25th of December was observed as his birthday, the natalis invicti, the rebirth of the winter-sun, unconquered by the rigours of the season. A Mithraic community was not merely a religious congregation; it was a social and legal body with its decemprimi, magistri, curatores, defensores, and patroni. These communities allowed no women as members. Women might console themselves by forming associations to worship Anaitis-Cybele; but whether these were associated with Mithraism seems doubtful. No proof of immorality or obscene practices, so often connected with esoteric pagan cults, has ever been established against Mithraism; and as far as can be ascertained, or rather conjectured, it had an elevating and invigorating effect on its followers.
From a chance remark of Tertullian (De Praescriptione, xl) we gather that their "Pater Patrum" was only allowed to be married once, and that Mithraism had its virgines and continentes; such at least seems the best interpretation of the passage. If, however, Dieterich's Mithras liturgy be really a liturgy of this sect, as he ably maintains, its liturgy can only strike us as a mixture of bombast and charlatanism in which the mystes has to hold his sides, and roar to the utmost of his power till he is exhausted, to whistle, smack his lips, and pronounce barbaric agglomerations of syllables as the different mystic signs for the heavens and the constellations are unveiled to him. A similarity between Mithra and Christ struck even early observers, such as Justin, Tertullian, and other Fathers, and in recent times has been urged to prove that Christianity is but an adaptation of Mithraism, or at most the outcome of the same religious ideas and aspirations (e.g. Robertson, "Pagan Christs", 1903). Against this erroneous and unscientific procedure, which is not endorsed by the greatest living authority on Mithraism, the following considerations must be brought forward. (1) Our knowledge regarding Mithraism is very imperfect; some 600 brief inscriptions, mostly dedicatory, some 300 often fragmentary, exiguous, almost identical monuments, a few casual references in the Fathers or Acts of the Martyrs, and a brief polemic against Mithraism which the Armenian Eznig about 450 probably copied from Theodore of Mopsuestia (d. 428), who lived when Mithraism was almost a thing of the past: these are our only sources, unless we include the Avesta, in which Mithra is indeed mentioned, but which cannot be an authority for Roman Mithraism, with which Christianity is compared. Our knowledge is mostly ingenious guess-work; of the real inner working of Mithraism and the sense in which it was understood by those who professed it at the advent of Christianity, we know nothing. (2) Some apparent similarities exist; but in a number of details it is quite probable that Mithraism was the borrower from Christianity. Tertullian about 200 could say: "hesterni sumus et omnia vestra implevimus" ("we are but of yesterday, yet your whole world is full of us"). It is not unnatural to suppose that a religion which filled the whole world should have been copied at least in some details by another religion which was quite popular during the third century. Moreover the resemblances pointed out are superficial and external. Similarity in words and names is nothing; it is the sense that matters. During these centuries Christianity was coining its own technical terms, and naturally took names, terms, and expressions current in that day; and so did Mithraism. But under identical terms each system thought its own thoughts. Mithra is called a mediator; and so is Christ; but Mithra originally only in a cosmogonic or astronomical sense; Christ, being God and man, is by nature the Mediator between God and man. And so in similar instances. Mithraism had a Eucharist, but the idea of a sacred banquet is as old as the human race and existed at all ages and amongst all peoples. Mithra saved the world by sacrificing a bull; Christ by sacrificing Himself. It is hardly possible to conceive a more radical difference than that between Mithra taurochtonos and Christ crucified. Christ was born of a Virgin; there is nothing to prove that the same was believed of Mithra born from the rock.
Christ was born in a cave; and Mithraists worshipped in a cave, but Mithra was born under a tree near a river. Much has been made of the presence of adoring shepherds; but their existence on sculptures has not been proven, and considering that man had not yet appeared, it is an anachronism to suppose their presence. (3) Christ was an historical personage, recently born in a well-known town of Judea, and crucified under a Roman governor, whose name figured in the ordinary official lists. Mithra was an abstraction, a personification not even of the sun but of the diffused daylight; his incarnation, if such it may be called, was supposed to have happened before the creation of the human race, before all history. The small Mithraic congregations were like masonic lodges, for a few and for men only, and even those mostly of one class, the military; a religion that excludes half of the human race bears no comparison to the religion of Christ. Mithraism was all-comprehensive and tolerant of every other cult; the Pater Patrum himself was an adept in a number of other religions. Christianity was essentially exclusive, condemning every other religion in the world, alone and unique in its majesty.
CUMONT, "Notes sur un temple Mithraique d'Ostie" (Ghent, 1891); IDEM, "Textes et Monuments figures relat. aux Mysteres de Mithra" (2 vols., Brussels, 1896-1899); IDEM, "Les Mysteres de Mithra" (2nd ed., Paris, 1902), tr. McCormack (London, 1903); IDEM, "Religions Orientales dans le Paganisme Romain" (Paris, 1906); MARTINDALE, "The Religion of Mithra" in "The Month" (1908, Oct., Nov., Dec.); IDEM, "The Religion of Mithra" in "Lectures on the Hist. of Religions", II (C.T.S., London, 1910); DILL, "Roman Society from Nero to M. Aurelius" (London, 1904); ST.-CLAIR-TISDALL, "Mythic Christs and the True"; DIETERICH, "Eine Mithrasliturgie" (Leipzig, 1903); RAMSAY, "The Greek of the early Church and the Pagan Ritual" (Edinburgh, 1898-9); BLOTZER, "Das heidn. Mysterienwesen und die Hellenisierung des Christenthums" in "Stimmen aus Maria-Laach" (1906-7); ALES, "Mithraicisme et Christianisme" in "Revue Pratique d'Apologétique" (Paris, 1906-7); WEILAND, "Anklänge der christl. Tauflehre an die Mithraischen Mystagogie" (Munich, 1907); GASQUET, "Essai sur le culte et les mysteres de Mithra" (Paris, 1890).
APA citation. (1911). Mithraism. In The Catholic Encyclopedia. New York: Robert Appleton Company. http://www.newadvent.org/cathen/10402a.htm
MLA citation. "Mithraism." The Catholic Encyclopedia. Vol. 10. New York: Robert Appleton Company, 1911. <http://www.newadvent.org/cathen/10402a.htm>.
Transcription. This article was transcribed for New Advent by John Looby.
Ecclesiastical approbation. Nihil Obstat. October 1, 1911. Remy Lafort, S.T.D., Censor. Imprimatur. +John Cardinal Farley, Archbishop of New York.
the assumption of responsibility for the welfare of the world
The stewardship is cosmopolitan, and must be. We cannot be concerned with the welfare of one people more than another, the good of one place at the expense of another. We cannot ignore poverty or pollution or oppression, regardless of location or the individuals affected. The parochialism and chauvinism associated with nationalist sentiment are both counter to the interests of the world as a whole. The nations of the world may always exist; they will certainly exist as long as there are individuals who consider themselves part of a nation, as that is the central element of nationality. But stewards must view the world in another light. Apart from any conscious decision or organized program, the world is coming together culturally. The benefits of modern science and technology are not available to all peoples, or even all segments of any given people. But there is no question of the aspiration of most of the world's population to the industrial/high-technology standard of living. Only resource disparities keep us all from driving the same cars, living in the same housing, using the same appliances. Scientific materialism is a belief system existing alongside the recognized religions throughout the world. Belief in a single omniscient, omnipotent, omnibenevolent god is held by a large majority, and for a growing minority is nonspecific, nonsectarian. Practitioners of many religions follow an unspoken segregation between the secularism of their daily lives and the ritualistic piety of special occasions. Modern western dress is increasingly ubiquitous. Popular entertainment is no longer confined to cultural zones, and requires only translation, and sometimes not even that. The dietary culture is becoming universal in its diversity; cuisines carried by migrants around the world have found popularity, and ethnic cuisines are now being carried by the common culture. Only language will remain as a sign of our former divergent evolution. Even language will tend towards unity eventually. There are, and for some time will continue to be, at least three linguistic tiers for the world. In addition to the local dialect and the global dialect, there is in most of the world a regional lingua franca, typically the dialect of the colonial power. The global dialect, as it stands and as it is likely to remain, is basically English. This is a historical accident, resulting from the advent of mass communication and cultural assimilation at a time when two successive states in the leading geopolitical position have been anglophone. But the English vocabulary is drawn from three major sources, and has a history of importing and coining words which suggests that the common dialect will be only distantly related to the Englisc-Seaxisc from which it evolved. The existence of a global dialect, and its use in global institutions and as a bearer of global culture, will lead to its adoption by more and more communities as a local dialect, until it is in fact a common dialect. The process of divergent linguistic evolution was a result of geographical isolation and lack of cultural interchange. With the spread of mass communication, that process has been exchanged for one of assimilation, around a broadcast standard. This is happening with all dialects. But it will eventually ensure that a common dialect remains so, evolving on a global basis.
None of the developing common culture is desirable per se. But its existence takes us away from nationalism and makes us more cosmopolitan. It is a prerequisite to the spread of a culture of stewardship. The formation of a global economy is also underway, and also, seemingly, inexorable. But there is every indication that it will be, as the regional and local economies it replaces, highly stratified. Goods and services and information will flow freely around the world. Businesses will operate globally and have little fundamental connection to their states of origin. The dollar already serves as a common medium of exchange, and the introduction of the euro, although a competitor, will ironically further economic unity, by replacing several other common currencies, allowing for a single exchange rate with the dollar, and eventually facilitating a consolidated currency with a single (if controversial) step. Free trade areas will expand and ally. Commerce will be the king who knows no borders. The tyranny of the so-called communist states, and their preposterous mismanagement, have suggested to the world that the future is capitalist. The near future certainly is. But capitalism inevitably brings poverty, the squandering of resources, and the destruction of the natural world. Private property is in fact a form of dominion. The stewardship must work for a world economy that addresses the basic needs of everyone. The development of a single political reality in the world is far more unpredictable, farther from inevitable, but far more important. The stewardship is inherently standardist. There must be a single standard of justice for all the world, applying to all individuals equally. Some states in the world will never be just. While they exist, they must be held accountable by those forces in the world which can influence them — primarily the other states. But they must eventually be replaced. And the international community, the collective of states, is ill-suited to the role of the world’s chief protector of right. Such a body has an inherent conflict of interest. The component states’ existence rests typically not on the will of the governed or the standard of justice, but on the idea of the inviolability of the state. This notion is sometimes merely an impediment to progress, though that is bad enough. At times it is outright dangerous. The idea has no use in the protection of stewardship. But it has been far too often that the idea has been used to shield a dominion. National sovereignty and territorial integrity are things that only tyrants must be concerned with. When the terms are employed, some hideous injustice will surely lie immediately underneath. They are fools and dupes who buy the system for their own protection, if they have not also something to hide. Any time the supporters of human rights and political freedoms hear these words, they must only ask how many have something to hide. Two seemingly opposite events in Eurasia recently take on great significance in this light. The integration of western Europe and the disintegration of the empire of Россия are not important in that the result of either will necessarily be positive in the long run. But that is as likely as that it will be negative. And one thing certainly is positive: each diminishes the illusion of the inviolability of the state. Москва cedes power to Минск. Bonn cedes power to Bruxelles. In the process, it becomes more difficult for Москва and Минск, for Bonn and Bruxelles, to claim an absolute right to rule. 
The illusion is weakened. The shield is damaged. If we could only eliminate the principle of national sovereignty altogether, things would progress much more rapidly. Unfortunately, it is against vested interests. Specifically, it is against the self-preservational interests of the most powerful forces in the world — the states. The cession of power itself is an anomaly, not a reliable trend. With that the case, the stewardship must tear down the traditional concept of sovereignty. Sovereignty is impunity de facto. The sovereign is that institution within society which acts with impunity — whether or not this is necessary. (An autocrat can be overthrown by a conspiracy, or a state put down through war by another state, but if these things fail to happen, there is impunity de facto.) The state is the executive instrument of the sovereign, the institution through which the sovereign exercises its sovereignty. Sovereignty in principle, if it lies anywhere, lies with the individual and ends at its body. When that determination is made, it ceases to be necessary to make innumerable fine, even sophistic, distinctions to justify one state but not another. No power should act with impunity in the world. No force, however great, however broadly based, should be allowed to contravene the standard of justice. Until every state in the world is under the standard, the system will have no legitimacy. And once every state is under the standard, their consolidation will simply be a question of efficiency. The separation of powers, so much an article of the patriotic faith in the United States, is an illusion. All power rests with the electorate, and it exercises that power as and when it wishes, including through failure to act. No constitutional device will ever save the world from injustice. Only a world where each individual is just will be safe.
The United Nations
The UN could be an object of hope in the world. The internationalist spirit it represents (and encourages), the ideals of peace it was founded on, and the work of the institution itself are all more enlightened than the base from which it draws its support — the world's states. It is not a true collective of those states, as historically it has recognized bodies which were not sovereign, and declined to recognize some which were sovereign, according to bizarre rationales put forward by some of the more powerful states. Though the states typically rest on force alone, they perpetuate the myth of legitimacy, and brand as illegitimate certain geopolitical realities, while sanctioning nonrealities, solely for the interests of the most powerful states. They hope in doing so to fool the world into accepting the legitimacy of their own régimes, and have met with considerable success. The human-rights guarantees associated with the UN have no force unless the states give them force, and thus are usually empty. But the UN's various organs have a degree of autonomy which allows them to carry on work which would not be approved by a majority (or even a large minority) of the general assembly. The upper rank of the secretariat contains politicians who are ever-cognizant of the demands of the member states. But even they can often be seen to be primarily globalist, and more enlightened than the security council and the assembly, which appoint them. Whether that means that the UN can evolve into a world stewardship is much more uncertain.
The power organs would be required to delegate even more authority to the central institution, and allow the center to act on the ideals of the movement. It is one thing to tolerate a globalist UNESCO, quite another to brook an autonomous secretary-general with an armed response force. There is a (justly-ridiculed) belief of many in the patriot militia movement in the United States that the UN will be the bearer of a new world order, sweeping into various countries in a fleet of black helicopters and putting an end to national sovereignty. If the patriot militias ever came to real power, the only hope would be a globalist intervention, and the black helicopters would be a welcome symbol of our liberation from a home-grown dominion.
The European Union
The European Union began as a mere free-trade association, a harmonization in the economies of coal and steel. It is still predominantly about free trade, and has pushed harder for economic integration than for anything else. And the power for the moment rests with the component states of the union. But each of those component states is a functioning democracy, and some of them are among the most liberal in the world. And they have taken steps, and promised more steps, towards a political union. Among the various alliances in the world, the European Union is unique (to my knowledge) in at least one respect. It has established an independently-elected central institution — the European Parliament. The powers of this body are limited for the moment. But it is, by most views, the shadow government of western Europe, waiting in the wings for a decision by the various member governments. Western Europe could become a single democratic state on any given day. As time passes and the member bureaucracies are harmonized, the transition could become virtually undetectable. And this is most extraordinary. The United States, to take a contrary example, went from a de facto federation to a de facto unified state only through violence. The Yankee-Dixie war may have been precipitated by the issue of slavery, but it was not fought over that issue, and both sides knew that. The emancipation proclamation was a worthless piece of propaganda, worthless in that it freed — by design — no slaves at all. The south seceded to retain the rights of the states (primarily the right of slavery), and the north attacked to prevent secession. When the north won, it had conquered several sovereign states, and eliminated the ability of the remaining to determine their own allegiance. In effect, it had assumed the sovereign power for all of the renewed United States. The entire country was Yankeeland. A united Europe will apparently come about through attrition. The parliament will be given gradually more power by the member states, until, like the Yankee provinces, they are states in name only. This change will represent, for the cosmopolitan, an enormous achievement. Unlike the two halves of the United States, the member states of the European Union have significantly different national and cultural histories. It will be a very long time before they can even communicate with any ease, much longer than it will probably be before they live under a single state. The states of western Europe are not pure nation-states; or if they are, they are national empires. The majority peoples of each state live in union with minority nations, who were almost all incorporated into the nation-state by conquest.
It would certainly be important to these subjugated nations to disrupt the myth of natural statehood. Some of them will resent the commitment of their peoples to a new state without their consent. But being full citizens of Europe may be preferable to being second-class citizens of their respective states. And the European Union will go many centuries before it can pretend, as the preceding states have done, that it is divinely entitled to the allegiance of its citizens.

The benefits of a true nation-state

The idea of nationality is crucial to most individuals. The aspirations of nations to states of their own are a great source of discontent in the world. As with the minority peoples of Europe, the disenfranchised nations of all the world will have to be answered before they will be willing to think about world citizenship. The Kurds are a good example. They have been divided among several states and have been fighting for their own Kurdistan. Until they have a seat at the table, they will not view as legitimate any global arrangement. No government can speak for all Kurds; but no government which is entirely without Kurds can speak for Kurdistan, and the Kurds will never accept such a claim.

Africa is the greatest mess. The states of Africa have virtually nothing to do with nationality, and few have anything to do with the will of the populace. European colonialism bears most of the blame. States were created arbitrarily. The natural resources and labor had been exploited for years. Social and economic classes had been encouraged. And then these population groups with nothing in common but a European lingua franca were expected to live in a harmonious society in perpetuity. We can now fault the greed for wealth and power of a relative few for the lack of freedom and prosperity in Africa, and these are mostly native dictators. But these dictators were kept in power (when they were not actually placed in power) by the great cold-war factions. The United States and Россия were more interested in strategic gain than justice, and Africa had another forty years of suffering. A world stewardship would certainly redress this; but a lasting global structure will have to be built on consent, and one nation cannot consent for another. A dictator, of course, cannot consent for a nation, though no tyrant would consent to a stewardship to begin with. The penultimate stage of the demise of the nation-state will almost certainly be the genuine, democratic nation-state. Then we can begin to build peace.

Global progressivism past and present

The Stewardship Union is only a part of the stewardship and its impulse to post-nationalism and world unity. Many older or larger movements exist, and their continued success is important to the Stewardship Union. Some movements are in discredit, and their failures are unfortunate. But all must be examined, so that the stewardship as a whole can be strengthened. The best-received group among these is the international Red Cross and Red Crescent movement. The religious association is unfortunate, for as the name indicates, it has already promoted division. The organization by states is also unfortunate, if practical. And the genèvan concept of neutrality, while practical in some situations, is also, in practice (if not in neutralist theory), a moral ambivalence, and suggests that all causes are equal by treating them so. But in general the movement is much to be admired; it has shown that stewardship, and globalist stewardship, can function with popular approval.
It has worked for decades to promote peace and deal with the consequences of war and natural disaster. It has linked humanitarian assistance and human rights. It has mobilized individuals worldwide to care for those in need. And it is growing.

The green movement is another growing part of the stewardship. Conservation is an important component of stewardship, and, thankfully, it is increasingly popular, under the name of environmentalism. Like many progressive ideas in history, it may become mainstream eventually. The greens themselves have made conservation the center of their political efforts. Green parties are typically also supportive of other aspects of stewardship, notably human rights and economic welfare. They are international as well. All that remains is for the greens to recognize the fundamental unity of their various progressive ideals, to articulate the common theme.

Communism is a movement, we are told, which is dead. I agree that it is hard to envision the name ever attracting widespread support after the events of the last century. But communists, as the name suggests, were originally those who did not believe in private property — rather that all property was held “en commun”. The perversion of their creed by the revolutionaries and dictators who built the totalitarian states was not something they would have recognized. They were not fascists, not totalitarians. And the economic failures of these states had little to do with communist theory. The states involved all abandoned their traditional agricultural base for a program of megalomaniacal industrialization and the construction of mechanized militaries. Eventually they fell into self-deception, doctoring figures and forging economic reports, to further please their dictators. Communism on a large scale has never failed, because it has never been tried. The original communists wanted economic fairness, an end to poverty, a sharing of resources. They were natural cosmopolitans and the very picture of charity. They deserve our admiration, not our scorn. And the socialists, Marxists, and social democrats who carry on their ideals are natural allies of the Stewardship Union.

Anarchists, like communists, are a much-maligned group. The use of ‘anarchy’ for chaos, for mob violence and destruction, is wholly unconnected with anarchist belief. Anarchists are anti-dominion, not pro-stewardship, as a rule. But they are still potential allies. They would end the rule of one person over another, the sort of rule which is necessarily violent. As such, anarchists are in fact working towards peace. And they are staunch opponents of dominion and claims of dominion throughout the world.

Finally, there are those cosmopolitans who work for world unity without an accompanying political view. It is true that in theory these individuals could be fascist or ultracapitalist or supporters of some other global dominion. But for the most part this is not the case. They are primarily progressive, secularist, and devoted to peace and harmony. Their efforts towards integration and understanding are therefore a form of stewardship.

A common era

The large majority of the world is still nationalist, and will remain so for some time. But not all nationalists are of the dominion, and some are certainly stewards. Their nationalism persists in part because they do not see the dangers of nationalism, and in part because they have not been offered anything better.
Cosmopolitans are idealists, citizens of the whole world whether that whole world is something they can believe in or not. If we as stewards would see individuals end their allegiance to local concerns and narrow interests, we must build an accessible ideal. We must make the idea of a single Earth, where all individuals are connected to the whole without the mediation of smaller, more parochial groups, a reality of their lives. In that unified Earth, we can protect the global ecosystem, care for the poor, sick, injured, and abandoned, guard against violence and tyranny, and spread the culture of responsibility and compassion, in the way most natural to the stewardship, for the benefit of the entire world.

© O.T. FORD
It is an interesting and much under-researched fact that all nations have ‘national myths’. In times of war these become particularly important: they become almost ‘rocks of unity’ around which all set aside previous disputes and become a single people determined to resist.

Of National Myths

By ‘myths’ in this sense I do not mean legends of knights slaying dragons, or princesses escaping evil relatives on flying carpets, but something more analogous to Carl Jung’s archetypal (neo-Platonic) forms. A national myth may be or include the martyrdom of a person, a battle, a religion, a war, or even a work of literature or a type of cuisine. They become ‘national archetypes’ in a pseudo-Jungian sense: “heroes who we must live up to” in some cases, or the words of poetry or music with which we associate ourselves with our ‘motherland’ – words recognised by all the nation, particularly in time of war, that may have lain dormant for years but now identify us with our neighbours, and even with the lowest stranger in need, as one nation.

To better illustrate my meaning, let me take some examples. From England and the UK: the Magna Carta, the defeat of the Armada, Shakespeare, the Civil War that started over ship tax (Cromwell’s Commonwealth could still be said to define the British political system), and there is much more obviously – Nelson and Waterloo, Blenheim, Crecy and Agincourt to El Alamein and the Falklands, Churchill’s famous speeches and defiance when the Island Nation stood alone. In Poland the myth is different: Catholicism plays a large part, but so do the Piasts, the Jagiellonian Commonwealth, Sobieski’s famous victory over the Ottomans at Vienna, the great works of literature, the fine universities that gave rise to Copernicus, the Black Madonna, the tombs in Wawel, the many revolts, the ‘Christ of Europe’, Piłsudski’s famous victory against the Soviets in 1920, and so on. Each nation has its own myths, from Joan of Arc to De Gaulle (who used the Cross of Lorraine as a symbol) in France, to Paul Revere’s ride in the US; and all have great literary works, national ‘heroes’, famous battles and the like that add to the myth.

It could be argued that in many ways the role of language in keeping a subjected nation alive is as a means of keeping these myths alive. The language conveys the myth in some cases: if the word ‘holodomor’ were ever banned in Ukraine and the history books burned, those who understood the word would remember and perhaps aspire again to a free Ukraine. Similarly, one could imagine that had Nazi Germany succeeded in invading and subjugating Great Britain in 1940, Shakespeare’s Henry V must have been proscribed, for to read it is incitement to English sovereignty in itself.

But banning books is small fry for the criminal regime that holds power in Moscow: internet memes are banned; so is news of the deaths of those on ‘special operations’ during peacetime – although ‘special operations’ are undefined, and why they would be needed in ‘peacetime’ is not explained; Bill Browder’s book is banned; Sasha Litvinenko’s book (he of the polonium poisoning) is banned. Remember that Sasha Litvinenko was an officer senior to Putin when Putin was, illegally by FSB rules, made Director in 1998. Litvinenko was a Colonel, while Putin had wangled his way into the Director’s post as a Major and was busy doing dubious politically oriented operations such as Ryazan.
Distortions of Myth

So what are Ukrainians – the inheritors of the original Kyivan Rus lands – to make of it when a so-called ‘Russian’ President from St Petersburg, a city founded on land that was formerly Swedish, says that Crimea is for Muscovy ‘our Temple Mount’, likening it to the Jewish reverence for the remains of the Temple in Jerusalem? Well, a complex historical and genealogical case could be made concerning the heirs of Rurik, the legendary Varangian who founded the first Viking settlements in the east, of whom Vladimir, Prince of Kyivan Rus – the first of his family baptised into the Greek faith – was a descendant. A later descendant of Rurik became Prince of Moscow in 1283, but neither by old Salic nor Greek law would such a claim apply, since the Rurikids of Muscovy were replaced by the Romanovs, and only in 1783 did Ukraine first become subject to Muscovy. If the case is based on conversion, then the Ukrainian Patriarch is second only to the Greek, and the Muscovite Church a latecomer.

Vladimir, Prince of Kyivan Rus, never visited Moscow, and almost certainly never heard of it, for it was a backwater at a time when princes of Anglo-Saxon England, such as Edward the Exile, son of the Edmund Ironside who fought Cnut, visited Kyiv and were given hospitality. Kyiv and Kyivan Rus were indeed deep in European engagement: Yaroslav ‘the Wise’ had a granddaughter, Eupraxia, married to the Holy Roman Emperor Henry IV, and married other relatives to the kings of Poland, France, Hungary and Norway. Kyivan Rus was a cosmopolitan and connected entity until the Mongols came. The first Prince of Moscow to call himself ‘Grand Prince of all the Rus’ was Ivan III, and it was not until 1783, after the first partition of Poland in 1772 and the destruction of the Zaporozhian Cossacks in 1775, that the Imperial Muscovite tide crept to the Crimean peninsula. Having ‘ethnically cleansed’ many hundreds of thousands of Tartars – purposely starved during deportation – Russian-speaking Muscovite ‘colonists’ took over the houses and farms of the deportees, many of whom would never need them again, of course. Crimea was never Muscovite before then, but now has a ruler who was born in Moldova and was called “Goblin” in his criminal gang, ‘Salem’.

It has been supposed that Putin sees himself as some new ‘Vladimir’ – God forgive them, there is even a sect who almost worship him – but this is stealing history. If a descendant of Rurik remains, even he could not claim Ukraine or Crimea by right of his ancestor being the ‘first convert’. Putin’s claim is therefore akin to a French President claiming Syria as ‘holy for France’ because St Paul had a vision on the road to Damascus and he is an inheritor of the Roman Emperors, who learned Christianity from St Paul… It remains that Vladimir was a Prince of Kyivan Rus – long before Muscovy was heard of – and that his history and conversion are part of the history of the land now known as Ukraine; you cannot steal the history from the land where it happened. Others may share that history, but having accepted baptism in Chersonesos (from which today’s city named Kherson derives), Prince Valdamarr Sveinaldsson returned to Kyiv, not Moscow. By making this claim – that the first Kyivan prince to convert to Christianity is a Muscovite icon – he is stealing someone else’s history. Muscovy and the other successor states were ever at war until, by intrigue or internecine feuding, they fell under Muscovite power.
The Tartars in Crimea were the last of the free peoples to fall, and so have been treated worst, though the Holodomor brought much the same to Ukraine. He entirely neglects the real and still-existing Tartar presence – it is ‘Holy land’, forget the real history of the real people whose land it was. This theft of Tartar history most of course neglect, but we new Ukrainians cannot forget our nation’s parts and roots – for its survival depends on its myth.

When Putin says that “the Russian and Ukrainian peoples are practically one people” he is saying the same thing: “You don’t have a history!” But this of course is untrue – from Kyivan Rus of old to Debaltseve and Maryinka today, Ukrainians prove they are not subjects of the Muscovite Mafia. Worse still, it encourages the misguided thinking of today’s Lord Lothians, who, when Hitler reoccupied the Rhineland in 1936, said “after all they [the Germans] are only going into their own back garden”; for if Ukrainians and Muscovites are the ‘same people’, then of course Ukraine is Putin’s ‘back garden’. It is of course a manipulation intended to fool, but the very fact that Ukrainians are resisting his ‘hybrid army’ daily should lead him to suspect his historical analysis is mistaken – were it truly a historical claim rather than the cynical aggression it is, the real motivation being to cow others into obedience, as well as to cover up his deeply misguided gamble of last year from his own oppressed populace.

Of course, ‘labeling’ is another way to steal a person’s real truth. “Ukrainians are all fascists” completely ignores the real reasons why Sasha, Andrey, Anika or the many thousands of others were ever on Maidan, right? Because they were all ‘fascists’, is the short answer. Well, of course, life is not like that – all went for different reasons, and many just to watch. This labeling also denies the fact that the Ukrainian people have a strong and deep desire for truth within them, and thus more than anything else want reform. It forgets entirely the January 16th ‘Dictatorship Laws’ and the corruption of the Yanukovych ‘family’: Yanukovych himself had been a small-time crook in Donbass, and his son just “happened to be very good at business”, as if his father being President were mere coincidence. This was unbelievable, much as Putin’s lies are today. But by perpetrating these lies they are stealing the truth – the true motives of the countless Sashas, Andreys and innumerable others whose stories must be heard.

In a similar manner, “It was a CIA/Jewish banker/alien etc. plot to overthrow the democratically elected President of Ukraine (Yanukovych)” is a myth in itself, and seeks to steal the truth of the people who played a part in his departure – ironically from Putin himself, who gave Yanukovych orders to leave for Donbass on Feb 21st. But it also steals the story of the ‘Heavenly Hundred’: the countless people who provided food or bandages, the old ladies who dug up cobblestones and formed chains to pass them on, and the countless other acts. What was the motivation of the people standing in front of trains in Ivano-Frankivsk on Feb 18? Why did people seize the administration buildings almost spontaneously after January 16 through most of west and central Ukraine? Nor is every day the same – people come and go, moods change, and motivations which might once have been mere aspirations may become determined objectives once the Berkut has beaten you.
You cannot ascribe a single source or motivation to all these people, but by describing them as ‘fascists’ or saying “it was the CIA” you are attempting to steal the real stories of the people who took part, who alone know why they took part or why many are now serving in the Ukrainian army. Of course, had the Putin regime not decided to invade Ukraine on Feb 20th last year, and continued since, I am sure many parents, wives and children would have their loved ones home by now. Some will never return, and have paid the greatest cost that any can pay. Yet these too had a true story that we can never permit to be stolen, and in some ways they serve on. They become part of the myth. Who can ever forget the ‘Cyborgs’ singing the National Anthem, once it has been seen? This staunchness, in defiance and in adversity, becomes part of the real and living myth from which a nation derives its inspiration in future times of need. Nadiya Savchenko is another case – who can doubt her loyalty or courage?

But these thousands, if not hundreds of thousands, all have slightly different versions for each person: Sasha A is married to Nina in Rivne and misses her and their three sons, while Captain Andrey has a girlfriend called Anna in Dnipro and is happy with the football, and so on. You cannot ‘steal’ a person’s reality any more than you can steal the history from the land on which it happened, but the attempt is still continuing. It belongs intrinsically to the people who today are making their own history in defiance of all odds. These are the real histories – all different and complicated, and doubtless not all saintly or guiltless – of those who defend not just Ukraine but Europe. Any who still believe this is just about Ukraine need to visit the front and see for themselves that nearly all Ukrainian soldiers are Russian speakers, some being Russian citizens. But it is their history, their motivation for a better Ukraine – and they may disagree about what ‘better’ means, but it does not include domination from Moscow – and it is they, Ukraine’s ‘New Heroes’, who are creating the new Ukrainian national myth. The politicians will be forgotten, but not the myths and legends of the ‘Cyborgs’, of Nadiya on hunger strike, of the comrade who saved Sasha A or the artillery commander who arrived in time; nor the countless volunteers who collect food, clothing and everything else and take it to the forces, those who collect money outside Ukraine or send money, and those who write as well and try to publicise these topics. There is a truth for each of us, from Sasha A now on the front, to those collecting money in Canada, down to those who said a prayer for Ukraine. More, though, we share a truth: we recognise Ukrainian sovereignty, whole and free, and every day, however we serve Ukraine, we are creating the new Ukraine; it is our work, not some Putin label or distortion of history. But this history and reality cannot be allowed to be stolen by a label or a lie. That is why the stories of all matter.

The New Ukrainian Myth

Our new heroes are no doubt not without faults, and we can recognise them as people like us – from the Heavenly Hundred to Nadiya Savchenko and the Cyborgs and the men still on the front now. One does not need to be a saint to be a hero; standing up for what is right against the odds, and knowing the righteousness of your cause, ennobles. These are the odds that make myths, and nations from them. The Ukrainian people every day embellish and add further glory to their long history.
The new myth is being made at the front and in Government reform – although slower than some might wish – but each must live up to the new ‘heroes’ in their own way, and consensus is needed also. It is necessary that these true stories of today’s Ukrainian heroes – reforming the Government, arresting corrupt officials and, most importantly, standing in the front line for all of Europe – together with the thousands of stories of volunteers and the pictures the children draw, be preserved, so that one day, when victory is achieved and Ukraine is whole and free, others may tell the story and mention their names to their grandchildren. The Tartars who have been martyred, the Chechens who serve with the Ukrainian forces – all stories must be remembered and form a part of the new Ukrainian myth and identity. Every day the heroic Ukrainians create this new history. May it never end!

It is said that “history is written by the winner”, and often this is true – the history of the people who lived in Carthage and spawned Hannibal we know only from Roman sources. Yet Ukrainian history has lived on, from 1775 to its brief re-emergences in the 20th century, to stand tall today. The roots are remembered by some, but most just want a better future and cannot equate that with the Muscovite system, which has oppressed our nation with such malice and suffering that it compares only to the Mongols – whose real successor the Muscovite Empire, or what today they call the ‘Russian Federation’, is. But Putin is no Temüjin, still less a Subutai. He already has a throne, and what concerns him most is staying on it.

Now he leads the people under his misguided rule back into the past; every week or month there are new announcements of repression – as if the media were not 99% controlled already. ‘Foreign-funded’ non-governmental organisations (NGOs) are targeted, because people outside his rule might care about those within and donate money to organisations for orphans or the homeless… Accept the donation and you are a ‘fifth-column organisation’ accepting foreign gifts to betray the State. Then there is any “undesirable organisation” – which, being a loose term, could cover anything from a football club to a prayer club – but the meaning is clear: if we do not like you, or more often if you have something I want, I will take you and it out. Recently we had the decree passed forbidding news of deaths of Putin’s troops when on ‘special operations in peacetime’. Well, if it is ‘peacetime’, why are ‘special operations’ underway? But now it is a crime for the mothers of those whom he sends to die in Ukraine to ask about the deaths of their sons, who have been doing their service in the military in ‘peacetime’. Really, a comedian could get a crowd laughing at this, and it would be funny if people did not suffer because of this evil mafia regime. But backwards he hurtles his now captive populace – they will not be permitted to leave soon – to an oblivion of despair; and any that oppose him… the list is too long, and Vladimir Kara-Murza is almost certainly the latest, though it seems he at least may survive for now.

Should we oppose this when it comes to Ukraine? It is precisely the opposite of what the ‘Revolution of Dignity’ was about: reformed, corruption-free, transparent government and more opportunity for all. But this is why the new Ukrainian myth and identity will live longer also. As Putin hurtles back to the Chekist state, Ukraine reforms – albeit slowly. Ukraine is making progress toward prosperity and opportunity for all, whereas Putin is closing his subjugated people down and threatening them.
You do not have to be a genius to see the directions diverging and work out the ends. This war was won long ago in truth; only Putin refuses to see it, as he was mistaken last year and cannot be seen to have been wrong. He doesn’t need Ukraine for security or any of the rest of it – it is hardly as if Ukraine is about to attack Moscow – and getting the reforms done and EU trade going would be a profit to the Muscovite economy: Ukraine gets richer, its people can buy more, and make more also. In the long term, time is only on Ukraine’s side, and victory is assured. But once victory over the Muscovite aggression is achieved, it is important that the truth be collected and written – that Maidan was NOT a CIA/Jewish or even alien-lizard-people ‘coup’, and that citizens of the Russian Federation fight today for Ukraine, because this is not a war between the Ukrainian and Muscovite peoples, but a war waged against the Ukrainian people by a deeply criminal Muscovite regime. I long for the day when the statue of Prometheus is erected in Kyiv and the names of all the heroes – from the Heavenly Hundred, to the heroic troops who have paid the ultimate price, to the old babushka hit by misguided shellfire – are inscribed beneath; indeed it could be said that Prometheus himself has spoken to Ukraine: “I gave them hope, and so turned away their eyes from death” (Aeschylus, Prometheus Bound). Then the new Ukrainian myth and the truth will coincide, and Ukraine be at peace.
Episode 36: Condescension

We're going back to our inbox this week to answer some of your most pressing concerns. Such as: what did 'condescension' mean in the work of Jane Austen? Why does 'brilliant' mean "smart"? And what is it about the letter 'S' that strikes fear into a lexicographer's heart?

Emily Brewster: Coming up on Word Matters: it's audience participation, with your questions. I'm Emily Brewster, and Word Matters is produced by Merriam-Webster in collaboration with New England Public Media. On each episode, Merriam-Webster editors Neil Serven, Ammon Shea, Peter Sokolowski, and I explore some aspect of the English language from the dictionary's vantage point. Each week, we put out a call to our listeners. And thanks to you, our inbox fills accordingly. Let's get to some of your letters. Another question from our mailbag. Adrian writes, "I have always wondered about Jane Austen's use of the word condescension in her novel Pride and Prejudice. The character Mr. Collins uses the word frequently when speaking of his beloved patroness, the eminent Lady Catherine de Bourgh. He holds her in such exaggeratedly high regard that he speaks of her condescension with reverence, as in 'she is all affability and condescension.'" And Adrian goes on to question this use of condescension. It is a use that is very different from our familiar use. But the word condescension first meant exactly what it means in this use in Pride and Prejudice, a sense that apparently goes back to about the middle of the 17th century: "voluntary descent from one's rank or dignity in relations with an inferior." So this is the willing and generous stooping of some eminent person to interact with an inferior. But what Adrian is asking is whether Austen had knowledge of the more modern use of condescension, which is the one now familiar to all of us: this patronizing attitude or behavior, or, as our Unabridged Dictionary puts it, "disdain veiled by obvious indulgence or patience." Now, to answer this question, I am no Austen scholar, and we don't track when a word's new meaning develops aside from when we're actually defining it. But there is an Austen specialist, a lexicographer named Peter Chipman. Peter Chipman has written an entire lexicon of Jane Austen's language. It is yet to be published, but I really hope it is, because I'm very excited about the idea of this. Here's what Peter Chipman writes. He says that, "Of the 24 instances of condescend and its derivative words in Austen's novels, fully half refer to this Lady Catherine de Bourgh." And he says, "We have to suspect authorial irony here, given that Lady Catherine always maintains her hauteur, even when she is being most annoyingly officious. And it's the buffoonish, obsequious Mr. Collins who uses the word most. So it certainly looks like Austen herself sees Lady Catherine's condescension as offensive rather than praiseworthy." And he notes that, even though the modern use of condescension is the one we all know, it did not enter Merriam-Webster's dictionaries until, I believe, the 1961 Webster's Third. But Chipman's evidence shows that the use was actually developing before this. And again, this goes back to the idea of dialogue being a key to earlier usage than the prose of a news article, for example. He says that, "The pejorative use of condescension was clearly driving out the laudatory use in Austen's work." And so Adrian, shrewd reading on your part. And according to Peter Chipman, you're absolutely right.
Peter Sokolowski: And there's something else about this that made me think of the etymology, condescend. Con- is "together" or "with," and descend is "to go down." So it's to go down together. And there's an idea of, again, this class, this gentility. There's so much in English that's probably hidden because we have fewer obvious class distinctions, class markers today. One of the reasons the word hello didn't exist in English before the invention of the telephone is that one always knew the rank of the person one addressed. So you knew if it was Sir or Your Grace or Madam. And later, you didn't know who you were talking to. We needed a hail. We needed a salutation to begin a conversation with someone whose social status was unknown to us.

Emily Brewster: Hello did exist.

Peter Sokolowski: Right.

Emily Brewster: But it was used more to express surprise.

Peter Sokolowski: Right.

Ammon Shea: Yeah. It was more like, "Hello, what's that on the bottom of my shoe?"

Neil Serven: Peter, you mentioned the "with." There's this idea of balance, like, condescend, we're doing this together.

Peter Sokolowski: Exactly.

Neil Serven: So it equates the two classes in this weird way. I think that also speaks to this idea of who had the eye when it comes to some of these narratives and who got to use the words. The idea of condescension being a negative thing had to be pointed out. If you're from a certain class, the idea of condescension is a noble thing you think you're doing. And so it speaks to this idea that the same word, the same concept can have two different meanings depending on which end of the bargain you're on.

Peter Sokolowski: And of course, one of those two sides is vastly more represented in published literature than the other.

Emily Brewster: The semantic progression of this term is heartening, because the noble who was descending to be with the person they decided to be affable toward, they could use that word condescend to describe what they were doing. And then the people who were condescended to recognized it for what it was. And that's how we got the current meaning of condescend. Neil Serven has our next listener question.

Neil Serven: A listener named Daniel writes about the word brilliant. He asks, "Is the word brilliant used to mean 'smarts' because of our alternate meaning of bright, or vice versa? Did all definitions of brightness coevolve to include meanings of intelligence?" So brilliant and bright: we've got two adjectives that both refer to shiny things, but also refer to intelligence. You can have a brilliant idea. You can have a bright pupil. These two words have separate etymologies. Brilliant, when it entered English, was originally used for diamonds. It was later used for a particular cut of diamond, not that much later. It was pretty immediate when it jumped on the scene. But then gradually, it was used for things that shined and things that stood out just by being shiny. So to be a brilliant student, performer, you were standing out from the pack. And if you think about it, we use this metaphor a lot, this idea of standing out with something that twinkles or glitters. We use star for the same reason. A star glitters in the night sky. It carries the metaphor of sticking out to the eye from amid this field of darkness. And yet we talk about the star of a movie, the star of a class, the star athlete. So brilliant comes from French, from the past participle of a verb briller, meaning "to shine," and that comes from Italian.
Bright comes from Old English, and back then, it meant the same thing as brilliant. And it was used for things like fire, for sunshine, also for stars, things that glittered. Separate etymology, but from there, bright took on referring to the vividness of color. So you've got a very bright red. It was still in Old English, then it took the same route as brilliant in describing things that stood out. And from there, it was describing personalities. People who were bright were people who were cheerful or optimistic, people who had sunny dispositions, people who you wanted-

Emily Brewster: Right.

Neil Serven: ... were happy to be around. Then it came to be used for things like wit in writing. And from there, it was used for intelligence. The original question, does bright influence brilliant or does brilliant influence bright: they seem to have come by their own separate routes to mean someone of wicked intelligence or someone who's just standing out well in a field because of being very talented or just being very adept, whether it has to do with learning or something else.

Emily Brewster: It seems like the metaphor exists and the language has adopted these different words to fit the metaphor. There's also scintillating, right? A scintillating, shining, shimmering wit. The idea of brightness, the idea of light amidst darkness, as you said, it is reminiscent of the way intelligence can shine light in the darkness.

Neil Serven: We've used the term enlightenment for the period of the 17th century when great thinkers rose in prominence with John Locke and Rousseau and Isaac Newton. We talk about enlightenment in Buddhism as well, when you have this superior knowledge, when you reach nirvana.

Emily Brewster: And then there's the metaphorical lightbulb over the head in cartoons, right? Of course, we do it in imagery also, not just in words. This idea of light and intelligence or knowledge or understanding is pervasive.

Neil Serven: The image of the lightbulb, meaning a great idea, that occurred only shortly after the real light bulb was invented. Apparently used in Felix the Cat comic strips: whenever someone would have an idea, the bulb would just appear right above the character, and then [crosstalk 00:08:37].

Emily Brewster: I love that. That must have seemed so modern.

Neil Serven: It must have been a joke that maybe not even everybody had access to, in that they might not have all had light bulbs yet.

Emily Brewster: Right. At a time when people didn't have them in their houses, but they were aware of them as these phenomena. I want to see the oil lamp above the head of somebody, the wick-

Neil Serven: Yes.

Emily Brewster: ... and the ... I want to see the predecessor.

Peter Sokolowski: We don't say a shiny student, but we say the student shined in math, for example, so that this image is broad. And it strikes me that bright and brilliant is another one of these pairs that English is so rich in, pairs of words that are essentially synonymous, but have different roots, that typically one has Old English roots and the other has Latin or French roots. There's so many. The words new and novel, for example, or same and equal or doable and feasible or buy and purchase. There's a million of these things that English has as pairs or doublets. This is one of them. They share semantic fields, but they have completely different etymologies. And that's one of the reasons English is so rich and nuanced.
Emily Brewster: This is an interesting case because it didn't begin that way with many of those cases. The fact that brilliant began with this technical use referring to diamonds distinguishes it. And I think also brilliant is quite a bit newer than the typical Latinate borrowing that we got in Anglo-French.

Peter Sokolowski: That's a really good point. It's a really late borrowing, in fact, and it seems weird because it's such a big, important word in English. And it was borrowed, what, in the early 1600s? Is that right?

Ammon Shea: I think it was slightly later. I think it was definitely the 17th century.

Peter Sokolowski: Amazing.

Ammon Shea: But I think it was the late 17th century.

Peter Sokolowski: In the history of the language, that's recent.

Ammon Shea: It is recent, especially for a Norman term, many of which came in the 14th and 15th centuries.

Peter Sokolowski: Yeah. So you're right. Brilliant is clearly a special case. And also, of course, it's pretty much spelled the same way in English as it was in French.

Emily Brewster: You're listening to Word Matters. We'll be back after the break with another of your questions. Word Matters is produced by Merriam-Webster in collaboration with New England Public Media.

Ammon Shea: I'm Ammon Shea. Do you have a question about the origin, history, or meaning of a word? Email us at WordMatters@m-w.com.

Peter Sokolowski: I'm Peter Sokolowski. Join me every day for The Word of the Day, a brief look at the history and definition of one word, available at Merriam-Webster.com or wherever you get your podcasts. And for more podcasts from New England Public Media, visit the NEPM podcast hub at NEPM.org.

Emily Brewster: Here's Peter with our next question.

Peter Sokolowski: So we got a note from John. He says he was curious to know what we meant when we said that the letter S was a difficult letter. And that's a really good question. We might've let that go quickly. And there's sometimes inside baseball in dictionary world. And there is a method to this madness. There is a reason that S is difficult. And part of it is just because it's a really big letter. There's a lot of words and a lot of hard words that have to be defined that begin with the letter S. This is a familiar thing for working lexicographers.

Ammon Shea: We even have a very technical term for this, which is alphabet fatigue.

Peter Sokolowski: I love it.

Ammon Shea: It's the reason why, if you look at many, many, especially older dictionaries or reference works, and you look at what is thought of as the traditional midway point of the alphabet, M and N, the section from A to M is significantly larger than the section from N to Zed or Z. This has long been the case with a lot of reference works. In our own Dictionary of English Usage, the midway point comes around H, and I think it was that we were very enthusiastic in putting it together. And at some point, we realized we probably can't make this thousands and thousands of pages long. So for the rest of this, we have to cut it short. I think that's actually the most common thing: a lot of reference works tend to run longer than they thought they would. And at some point, the powers that be, the people funding the projects, say, "We're not going to publish a 50,000 page work. You have to make this shorter." And so the second half gets short shrift.

Peter Sokolowski: There are examples of this. A famous one is the Encyclopedia Britannica, that first edition, which I think is 1768, in three volumes that are roughly the same size.
Volume one is A and B, volume two is C through L, and volume three is M to Z. So it's almost like they started walking and then started jogging, and then started running, and then started sprinting.

Emily Brewster: No, the alphabet is clearly to blame here. It is all the alphabet's fault. And it is the fault of having to work alphabetically, which we no longer do for the most part. We do still have some alphabetically informed, or dictated actually, editorial projects. For example, when we do a new edition of the Collegiate Dictionary, we go through the entire alphabet. But it used to be that every project was done alphabetically. You might not start with A. We never start with A. You would be working alphabetically through sections of the alphabet. And so by the time you get to S, everybody is exhausted, there are pressures to wrap the project up, and yet S, there are just so many words that begin with S.

Neil Serven: There are so many words because there are so many blends. There's S-H, there's S-L, there's S-M, there's S-N, there's S-P, and then you've got words that just start with S on their own.

Emily Brewster: That's right. You've also got some prefixes like semi-, that generates a lot of...

Neil Serven: Right. I think statistically it's...

Emily Brewster: ... any productive prefix.

Neil Serven: I think statistically, it's got the largest number of words in the dictionary, I think, right.

Ammon Shea: It also has a number of huge individual words like set.

Neil Serven: Right.

Peter Sokolowski: Set is, I think, the biggest one in Merriam-Webster's Unabridged Dictionary, I think.

Ammon Shea: Right. It's usually neck and neck between set and put and things like that. But set is traditionally the largest.

Peter Sokolowski: Page after page after page.

Ammon Shea: Yeah. Just the sheer number. And Emily, didn't you define set?

Emily Brewster: I don't think I did do set. I may have worked on it for the Learner's Dictionary, but no. And I have to say that in my personal experience, I have experienced a letter that I would say is far more difficult than S.

Neil Serven: Which is?

Emily Brewster: And this was years ago, we were working on our Learner's Dictionary. And the project was done alphabetically, and I think we started ... We went D through F and then went back to A, and did A through C, and then continued on through the alphabet. But by the time the project got to E and F, it was clear to the director of defining, Steve Perrault, that the project was going to be very different than the concept was when D was first defined. And so the letter D had to be redone entirely. And I was one of two editors assigned to redo D. And D is difficult because of words like do. That's another beastly word. Great fun, but beastly. But what I learned in spending, I don't even remember how long it was, because this was a long time ago, but really just two editors doing all of the content in the letter D, that's a big project. What I learned is that D is just full of really dark, depressing, deathly, dismal words.

Neil Serven: Dolorous.

Emily Brewster: Dolorous. Yes. Like dilapidated, die, death. All of it is in there. There's so many desperately depressing D words.

Ammon Shea: Yeah, and you even end with the prefix of dys-, which for dysphemism or ...

Emily Brewster: Yeah. Dystopia.

Ammon Shea: Dysphonious and things like that.

Emily Brewster: Oh, wow. And something, I think, people don't think about that really ... As a definer, when you are defining a term, you are steeped in evidence of the word in use.
That is how you do the act of defining: by really just sinking into all these examples of these words in use. So if you are defining the word dystopia, you are going to be reading hundreds of examples of the word dystopia in use.

Ammon Shea: Do you find that when you're defining utopia, that you leave the office with a spring in your step, you feel atomous, more infused, uplifting?

Emily Brewster: I admit to being affected by the citations I encounter, yes.

Neil Serven: See, by then, we're in U, anyway. So we see the end of the horizon coming. So we feel great anyway. U through Z are just taking weight off your shoulders. You're just swimming through. And it doesn't feel like any effort. It feels like you know the end is coming at that point.

Ammon Shea: Right, because X doesn't even really count. It's just...

Emily Brewster: Oh, if you're lucky enough to be the editor assigned to X, you're golden.

Ammon Shea: Yeah.

Emily Brewster: You just plow through that.

Peter Sokolowski: And Emily, your anecdote just brings up something that I think people don't think about, which is that that particular project was a dictionary written from scratch that was so new that we changed the way that we were writing it as we were writing it. And then we had to go back and redo some of the early work. And that just shows you also this honest labor that goes into dictionaries, a lot of thought. And then we think, "Oh, we could do this better or differently or have a different set of rules," because we do need rules, because it's a team project. You need a bunch of people exerting effort in the same direction. And that's interesting. And also, the other thing that this makes me think of is how axiomatically long it takes to write dictionaries. It's always longer than you think.

Emily Brewster: It was John Morse, former president of Merriam-Webster, who used to talk about how defining the Collegiate Dictionary was like painting the Golden Gate Bridge. You get to the end and you really just have to start over.

Peter Sokolowski: Exactly.

Emily Brewster: And that's always true in lexicography. When you get to the end of the alphabet, it actually is time to start reviewing the vocabulary in A.

Ammon Shea: Do you think that we'll ever come up with our own in-house idiom, "As easy as X"?

Emily Brewster: I like it. I like it. Thank you to all who have written to us. If you have a question or a comment, email us at WordMatters@m-w.com. You can also visit us at NEPM.org. And for The Word of the Day and all your general dictionary needs, visit Merriam-Webster.com. Our theme music is by Tobias Voigt. Artwork by Annie Jacobson. Word Matters is produced by Adam Maid and John Voci. For Neil Serven, Ammon Shea, and Peter Sokolowski, I'm Emily Brewster. Word Matters is produced by Merriam-Webster in collaboration with New England Public Media.
Chapter 6 contains three major portions, each with its own significance and relevance. The first portion presents additional conflicts between Jesus and the Pharisees, but this time the subject focuses on laws about the Sabbath. In the eyes of the Pharisees, Jesus routinely violates Sabbath law. One of those conflicts about the Sabbath involves the healing of a man with a withered hand. This conflict will reveal the truly evil hearts of the scribes and Pharisees. The second portion of the chapter describes Jesus’ calling “the twelve” as apostles. The last portion of the chapter is Luke’s version of the Sermon on the Mount, which the author is calling the Sermon on the Plain. It is important to understand the relationship between Luke’s record of Jesus’ sermon and the teaching of John the Baptist.

What to look for in Luke 6

As you read each paragraph ask, “How is God speaking to me personally through His word?”

Look for the conflicts between Jesus and the Pharisees and determine the intent of the hearts of the Pharisees.

Throughout the chapter, underline each instance of the word “good.”

Discern the hearts of the Pharisees when Jesus heals the man with the withered hand.

Look for the naming of the twelve apostles, and what activity Jesus was engaged in before He appointed them.

Look for the key elements of the Sermon on the Plain, and determine what are the central themes.

Look for Jesus’ warning concerning the words He taught during the Sermon on the Plain.

6:1-5 Luke continues his theme of conflict with the Pharisees, only this time it is regarding the Sabbath. Luke’s purpose in recording this incident is to demonstrate that the Pharisees were not knowledgeable about God’s law, the Torah, the law of Moses. This first incident involves plucking and eating corn. According to the law, plucking and eating corn as the disciples were doing was permissible (Deut. 23:25). It was using a tool to cut the grain, or harvesting for the purpose of selling the grain, that was considered work on the Sabbath, and therefore not permissible. Jesus asks the Pharisees, “Have you not even read…?” implying they had spent more time memorizing the 2,000 manmade laws about the Sabbath than studying what God’s word actually says. Essentially, they were holding their own laws above the Law of the Lord. Therefore Jesus directs the Pharisees to the incident when David and his men ate the showbread; that is, the sanctified bread of the Tabernacle that only the priests were to eat. David did what was unlawful, but the Pharisees didn’t seem to have a problem with that. The irony in this story is that the Pharisees accuse Jesus of violating the law—which He didn’t—while ignoring David’s infraction in eating the showbread, which he did. This clearly demonstrates that the Pharisees had an agenda against Jesus which defied logic, reason, biblical facts, and truth. This is the second time in Luke that Jesus refers to Himself as “the Son of Man,” the first being when He forgave the sins of the paralytic. “Son of Man” is a title used once in Psalms, once in Daniel, and eighty times in Ezekiel. Interestingly enough, it is used 82 times in the gospels. The title was not necessarily messianic, but did come to have the meaning of God’s acceptable representative of mankind. There is also an eschatological significance in that the Son of Man is the one to whom God’s plan and will for mankind has been revealed.
“The Son of Man is Lord of the Sabbath” is another way of saying, “The Sabbath was made for man, and not man for the Sabbath” (Mark 2:27). Lost in the conflict over Sabbath law is a love issue. God created the Sabbath out of love for mankind. Of all the Ten Commandments, the fourth commandment regarding Sabbath rest is the only one that encourages man to act in love toward himself; that is, to take care of himself. The Sabbath is to be a time of rest, refreshing, and relaxation, not fear and anxiety over potentially breaking some law. Jesus Himself is showing love for His disciples by letting them take time to eat. There is an underlying theme of caring here. Unfortunately, it is a theme the Pharisees cannot grasp, for they are more concerned about maintaining power over people’s lives than doing what is best for God’s people. Although the Pharisees make this incident an issue of law, the real issue is one of love.

6:6-11 In this next encounter with the Pharisees, Luke will once again highlight the incompetence of the Pharisees, how far they had strayed from God’s word, and their total inability to understand the intent of God’s word. The key word in this incident is “good.” The second conflict over the Sabbath is the most disturbing of all, for it clearly demonstrates that the Pharisees had so idolized their own laws that they had lost the primary purpose of God’s law—to do good. The man in question has a withered right hand. In Middle Eastern culture, each hand has its purpose. The right hand is considered the “clean” hand; that is, it is used to shake hands, eat, and transfer something to another. The left hand is considered unclean and is used for personal hygiene, including wiping oneself. The fact that this man’s right hand was unusable meant that he could only use his left, or unclean, hand for everything. That would essentially make the man unclean, and that is why Luke includes the fact that it was the man’s right hand that was withered. Verse 7 demonstrates that the relationship between Jesus and the Pharisees has become hostile; they are “watching…to accuse Him.” It is a very sad scene that the spiritual leaders of the community are more interested in Jesus keeping their laws than in healing someone. Jesus is not intimidated by the religious leaders and commands the man to “come forward.” Before the healing, Jesus asks what should have been an unqualified rhetorical question: “Is it lawful to do good…on the Sabbath?” Without answering—for the answer was obvious—Jesus proceeds to heal the man’s hand. The reaction of the Pharisees and scribes is beyond comprehension. They were “filled with rage.” Why? Because Jesus violated one of the 2,000 rules the rabbis had concocted concerning the Sabbath. Clearly, they had supplanted God’s word with their own. Let’s see how it came to this. In the giving of the Ten Commandments, the fourth commandment prohibited “work” on the Sabbath. However, there are very few passages in the Torah (the Law) that define what actually constitutes work. Therefore, over the years, an oral tradition developed called the Midrash, meaning “interpretation.” The Midrash first started as oral traditions for the purpose of “filling in the gaps”; that is, interpretation of that which was found in scripture, and supplying information on things not found in scripture.
The rabbis therefore concluded that someone needed to interpret what God meant by “work,” and they subsequently began to elaborate on what constituted “work.” These laws began to be codified during the period between Ezra and Jesus, to the extent that by the time of Jesus, there were approximately 2,000 laws defining what a Jew could or could not do on the Sabbath. Jesus, in confronting the Pharisees about this, warns, “You weigh men down with burdens hard to bear….” One of these Sabbath laws specified that non-emergency healing was not allowed on the Sabbath because it constituted “work.” Jesus challenges the Pharisees about this law, which is not found in the Old Testament, but in the rabbinical laws, which are not from God, but from man. Therefore, the Pharisees are more concerned about Jesus keeping their rabbinical laws than they are concerned about the man himself. And herein lies the tragedy of the situation. They love the law more than they love the man. What should have been the reaction of the Pharisees and scribes? They should have rejoiced for the man! They should have praised God for caring enough about the man to allow him to become “clean” and a normal part of society. They should have crowded around him, congratulated him, hugged him, and been glad for him. And they should have fallen at the feet of Jesus and confessed their unbelief, as Peter did. But instead they were “filled with rage,” huddled together, and discussed how they might undo Jesus. It is a terrible, tragic, and disheartening scene, and one of the most pathetic scenes in the gospels. This is the hideous outcome of valuing law over love. The Pharisees had substituted their own law for God’s, and in so doing, lost the whole point of the law of the Lord, which is to love and care for one another.

6:12-16 The spiritual leaders of the Jews have disqualified themselves, as demonstrated in the two incidents above concerning the Sabbath. Not only do they not know God’s word, but they have replaced God’s word with their own—the “tradition of the elders,” as it will be called. Therefore, God must begin anew. New spiritual leadership is going to be established, and this will come in the form of the apostles. This new leadership will proclaim the true law, which James will call “the royal law”; that is, “You shall love your neighbor as yourself” (James 2:8). Once again, Jesus departs from the crowd and spends the night on a mountain in prayer. Jesus demonstrates His complete dependence on the Father, for the two are One. The result of this night in prayer is the appointing of twelve of the disciples (for there were many) who would also take on the role of “apostles.” Apostle literally means “one who is sent out,” or “messenger.” They will also be referred to as “the twelve,” as other apostles will be identified later in the Early Church. These twelve men have symbolic significance, echoing the twelve tribes of Israel, although there is no indication that each man was from a different tribe. When the church first begins, as described in the Book of Acts, false apostles will arise. Therefore, true apostles needed to be identified. There were three qualifications for becoming a true apostle. First, he had to have been with Jesus from the very beginning. (Paul was the only known exception to this.) Second, he had to be a witness of the resurrection; that is, he had to have seen Jesus after the resurrection.
And third, he had to be able to perform the “signs and wonders of a true apostle.” Today, there are no more apostles, no matter what a person, preacher, evangelist or healer chooses to call himself (or herself). Anyone today who calls himself an apostle is a false apostle.

There are four lists of the apostles in the New Testament: one in each of the three synoptic gospels and one in Acts. Peter is always the first one named. Note that there are two men named Simon (though Simon Peter is always identified as Peter or Simon Peter), two named James, and two named Judas. Judas Iscariot is always identified as the “traitor.” To help the student keep track of who is who: neither James the brother of John nor James the son of Alphaeus is the author of the Epistle of James. That James was not one of the twelve but was a half-brother of Jesus and became the leader of the church in Jerusalem. John, the apostle James’ brother, is the author of the Gospel of John, the letters of John, and the Book of Revelation. Matthew, or Levi, is the author of the Gospel of Matthew. Peter himself wrote two letters in the New Testament.

Few details are known about the twelve after the resurrection, and especially after the end of Acts, with the exception of James the brother of John, who was executed by Herod Agrippa I (Acts 12:2). Most of what is known about each of the twelve is gleaned from the writings of the early church fathers; that is, theologians who anchored the church after the twelve had passed away. Tradition states that all of the twelve were martyred except for John, who for a while was exiled on Patmos, where he wrote the Book of Revelation and, after his release, eventually died of natural causes.

6:17-19 Luke is about to share with us the basic precepts of Jesus’ teaching. This teaching is the essence of “the gospel” before the resurrection, gospel meaning “good news.” Notice that those who have come to hear Him and be healed are coming from as far away as “Tyre and Sidon.” They are probably Jews living in these Gentile towns, but it is possible there were Gentiles in the crowd. Sidon is about 50 miles northwest of Capernaum. Note that Jesus stood “on a level place,” whereas in Matthew’s account of the Sermon on the Mount, Jesus was on the side of a mountain or hillside. Yet the texts of the two sermons are almost identical. This means that Jesus probably preached essentially the same message everywhere He went, perhaps hundreds of times. That is why the authors of the gospels can record Jesus’ words so accurately. The primary means of memorizing in that day was aural (rather than visual, in writing), and no doubt they had every teaching of Jesus thoroughly memorized. Therefore we can safely say that the words the gospel writers have preserved for us are the actual words of Jesus, word for word.

6:20-26 Oh! The volume of commentary that could be written on this section! The writer will do his best to summarize the main points. Jesus now explains what it means to be “good,” something the Pharisees were incapable of comprehending or expressing. Those who are good love others and care about them in a manner that is consistent with God’s word. Those who are evil do not.

6:20 The poor are blessed because they know they have a need and are willing to learn about the kingdom of God. Those who are self-sufficient don’t need God. God loves those who are legitimately poor. When Jesus uses the term “blessed,” He is actually invoking a blessing; that is, He is blessing the poor.
He is speaking for God because He is God, and He is promising better things to come. Better things will come because, by following Him and obeying Him, they will receive the benefits of the kingdom of God. This will eventually result in joy, peace, happiness and contentment. The rich will not be interested because they are being “blessed” (made happy and content) by the things of the world. Therefore, they do not perceive that they need anything more. The poor, on the other hand, know what it is to live in need, and therefore are open to the good things promised.

6:21 God loves those who are oppressed, and all the more those who weep out of grief from great loss. All their losses, their sufferings, their afflictions, and their deprivations will be replaced by the fullness of all things lacking. Those who weep will learn to laugh because all that is lost will seem trivial compared to all that is gained. As the martyred missionary Jim Elliot wrote, “He is no fool who gives what he cannot keep to gain what he cannot lose.” The kingdom of God will turn their hunger into satiation and their grief into joy.

6:22-23 Christians who are persecuted, ostracized, rejected, mocked, scorned, belittled and even sued for taking a moral stand will receive a great reward for their faithfulness and their unwillingness to yield to the intimidation of those who do not know God. Those who persecute do not love Christians because they do not love God, and they do not love God because they are blinded to the fact that God loves them and still has a plan for their life. They either cannot see it or they refuse to see it. Those who identify with Jesus Christ will be scorned and rejected with the same ignorance and hatred with which Christ was scorned and rejected. God loves those who, in spite of all threats, maintain their loyalty to God and their faith in Jesus. Persecuted Christians will receive a greater reward in heaven, for they have paid a greater price for their faith. The great prophets also suffered; therefore, Christians who suffer can take refuge in the truth that they are not alone.

6:24 The rich are rich only because God has allowed them to be rich. Self-made men are self-made men only because God gave them the wherewithal to become self-made men. Therefore, they should take no credit for their own success, and give all the glory to God. The rich are also rich because they keep what they gain. They (we) fail to share with those who are legitimately poor. Their (our) richness is a failure to love others as God loves others. The teaching of Scripture is clear—keep only what you need, and give the rest to those who have legitimate needs. For those who hoard their riches, there is an eternal price to pay. “Woe” is a negative interjection referring to horror, denunciation, and judgment. The sentence could be paraphrased, “Judgment is coming upon you who are rich…. Your world may be luxurious and pleasurable now, but in eternity, it will be the very opposite.” (cf. Luke 16:19-31)

6:25 Merriment must never be at the expense of those who suffer. There is nothing wrong with eating or laughing, but it is a failure of love if the legitimate needs of others are being ignored. The reference to “those who laugh now” does not contradict what Jesus has said in verse 21. The context is the same: those who are rich, dwelling in luxury and comfort at the expense of others.
Once again, it is a failure to love others; the love of oneself has overruled the command to “love one another.”

6:26 Those who seek the approval of men rather than God, and those who desire to please men rather than God, are deceived and deceivers. The gospel of Christ is an unpopular message, and the world hates the truth because biblical truth always exposes personal sin.

6:27-36 This next section of Jesus’ teaching goes into somewhat more detail than what is recorded by Matthew in his gospel. However, the content is essentially the same when it comes to the subject of one’s enemies.

6:27 No other religion teaches that one should love his enemy. The message of the Bible is unique in this. Truth always trumps love because true love requires biblical definition. This is one of the definitions of biblical love—it is a love so great and so inclusive that it reaches out even to one’s enemies. Note that Jesus says, “Love your enemies.” He does not say, “Like your enemies.” There is a huge difference, and it is often a stumbling block to those who first hear these words. The nature of biblical love (agape) is that it is a love that supersedes emotions and feelings. Agape is not void of emotions and feelings, but emotions and feelings are not to get in the way of doing the right thing; in this case, treating your enemy as you would want him to treat you. Why is this an imperative for the Christian? Because all who have sinned are essentially enemies of God: enemies of all that is good, enemies of all that is holy, and enemies of all that is righteous. Yet “God so loved the world that He gave His only begotten Son….” The world could be considered an enemy to God, yet He loved the world anyway. “Do good to those who hate you” is the only way to respond. The world’s method of treating those who hate you is “An eye for an eye and a tooth for a tooth.” One cannot watch the news these days without seeing that same endless cycle of vengeance played out in every conceivable arena. But it is not the way of Christ. The law of Christ is to “love one another,” and that includes one’s enemies.

6:28 Doing good to those who hate you, blessing those who curse you and praying for those who mistreat you is the only chance of breaking the cycle of worldly violence. The church is called upon to reflect God’s love in the world and to be the forerunners of those who break the cycle. This is why it is so tragic when churches split, sue one another, and treat each other in the same manner the world treats its own.

6:29 Love requires that possessions should never interfere with relationships, even adversarial ones. Verse 29 could be taken literally, but not necessarily so. The issue is caring more about oneself and one’s material possessions than demonstrating the kind of love that only comes from God. It does not mean that one should be careless or allow himself to become a target of scammers and con men. But it does mean that in the arena of personal relationships—neighbor to neighbor, worker to worker, church member to church member—it is agape that should define the outcome, not selfishness.

6:30 Love insists on letting go of the things of this world. People, especially those held captive by the devil to do his will, hang on to the things of the world. They have yielded to the second temptation of Jesus. Once again, the same principle applies concerning those who would take advantage of someone. The lesson is more about what a person loves more—people or possessions.
6:31 If you want others to treat you with love and respect, treat all others with love and respect, even your enemies.

6:32 Loving those who love you is far easier than loving the unlovable. God loves everyone, especially the unlovable and those who are unloved. The church of Jesus Christ is called to do the same—love the unlovable. Unfortunately for evangelical Christianity, the Catholic Church does a better job at this. Equally unfortunately, it does not have the message to go along with the good works.

6:33 Loving others is not necessarily having great feelings of love; rather, it is a willingness to do good and show love in spite of feelings. True love is not void of feelings, but true love is never ruled by feelings.

6:34 Lending to those in legitimate need who may not be able to pay it back is another definition of agape. For those who have, failure to give to those in need is a failure to love others as God has loved them. It is also a failure to trust that God will provide for the follower of Jesus Christ. Here again, the issue is one of loving others more than loving one’s possessions.

6:35 This is the second time there is a command to love your enemies. That also means doing good to them. Remember that God is not expecting His people to feel loving toward their enemies; that may be impossible to do. But one can still do good in spite of feelings. Indeed, doing good to enemies in spite of feelings is an ultimate act of love. Why does Christ ask His followers to do these things? Because He will demonstrate this very love at the cross. The last statement in verse 35 is absolutely contrary to Jewish thinking and theology. Old Testament theology teaches that God blesses righteous people and withholds blessings from the unrighteous, or removes blessings from those who sin. This principle is well spelled out in the Book of Deuteronomy, and that is why the Book of Job was difficult for Jewish rabbis to reconcile. Jesus’ statement here would be astonishing to His listeners: “For He Himself is kind to ungrateful and evil men.” In their thinking, God is to do away with and judge ungrateful and evil men. But what Jesus has introduced in this entire sermon is the expansiveness and inclusiveness of God’s incredible love—“God so loved the world….” What Jesus is introducing is that anyone is capable of receiving God’s love. This new theology will be instrumental when Gentile Christians begin entering the church. God wants everyone to experience His love and learn to express His love to others. How else will even evil and ungrateful men experience God’s love if Jesus’ followers are not there to demonstrate it?

6:36 Thus the final command in this section—Jesus’ followers are to reflect God Himself.

6:37 This statement by Jesus is one of the most frequently misapplied quotations, especially by non-Christians. Unfortunately, many Christians are intimidated and do not know how to respond to the statement when it is used in an adversarial context. “Do not judge” does not mean that Jesus’ followers are not to judge in terms of discernment. It means that Christians are not to judge out of prejudice or a spirit of criticism and condemnation. Christians must discern truth and love. Instead of clamming up with intimidation, the Christian should respond to the sarcastic skeptic with something like, “Oh really? Where do you find that in the Bible? What is the context? Is that what Jesus meant?” Or, “Are you actually quoting Scripture to correct me?
Then you must believe in the Bible.” And then, “speaking the truth in love,” gently inform the skeptic of the real meaning of the passage, and get him to agree that not judging others in a condescending way is the right thing to do.

6:38 The illustration is that of wheat being measured out in the marketplace. The buyer wants the full portion being paid for. Therefore, portion out goodness to others in the same way you want it measured out to you. Give a good measure of love and a good measure of love will be given back.

6:39 Showing biblical love and doing good to others is supported by a foundation of logic. To love or do good any other way than God’s way is destined to fail.

6:40 This principle includes love. Jesus was the incarnation of God’s love, expressed in the flesh. Jesus’ life was the epitome of the love that every Christian should seek to imitate. Such love can only be obtained by studying Him in God’s word, by seeking Him in fervent prayer, and by ministering to others in genuine love. The apostle Paul summarized it in his first letter to Timothy: “But the goal of our instruction is love from a pure heart, a good conscience and a sincere faith” (1 Tim. 1:5).

6:41 It is a lot easier to find fault with others than it is to find fault with oneself. The lesson here is to examine oneself first before taking up the task of examining others. If one Christian observes another failing to express God’s love, the question should first be asked, “Am I always expressing God’s love?”

6:42 One can easily become confused here, along with verse 37, when it comes to confronting wrong doctrine or errant Christian behavior. Jesus is not saying that those who teach error or behave badly should not be confronted. All of the letters in the New Testament confront false teaching and those who teach it. What Jesus is referring to is criticizing another without good reason, and without considering one’s own faults. The simple answer here is that active believers should be constantly critiquing their own behavior and beliefs in accordance with biblical standards.

6:43-45 The key word in these verses is “fruit.” The term fruit is used metaphorically here to refer to a person’s character. There are many popular preachers, evangelists and so-called prophets whose lifestyle does not reflect New Testament values and standards, particularly when measured by the greatest fruit of all, love. As with all ripening fruit, it may take some time to determine whether a person’s character is actually in keeping with the biblical standards of love and grace. Christian leaders who divorce, commit adultery, embezzle, and are caught in acts of homosexuality or pornography are the more obvious rotten fruit. However, less obvious are those who build financial empires, whose lifestyle is opulent, whose behavior does not reflect love, who manage by intimidation or anger, who create fear in others, or even those who build great Christian ministries on the premise that the end justifies the means. Good fruit is seen in the one who loves and is the servant of all.

6:46-49 The word “Lord” here is not “God” but “master.” That is, if a person calls Jesus his or her master, then as a servant he or she should actually carry out the Master’s will and commands. This statement applies to all that has been recorded above in verses 21 through 45; that is, the Sermon on the Plain. This poses a problem for the one who reads these words as well.
Once a person has decided to call Jesus Christ “Lord,” he or she is obligated to live in a manner consistent with that relationship. This most likely means a lot will change in the person’s life. Habits, behaviors and relationships that are not biblical must be done away with, and new habits, behaviors and relationships must replace them. Only then will a Christian’s life be built on solid ground, and only then will that life be able to truly experience God’s love. Once God’s love has been experienced, that person is now in a position to express God’s love to others.

6:21-49 In summary, reread these verses and see how many are referring to relationships. Then make an assessment of how many verses have love as the basis for one’s behavior. Finally, note how many verses mention the Sabbath, rituals, sacrifices, or laws concerning what is clean or unclean. Here we see an expansion of John the Baptist’s instructions to those who came for baptism and asked, “What shall we do?”

Questions for Your Personal or Group Reflection

In this chapter, there are three easily remembered sections. Can you name them? Can you determine the relationship between each section and why Luke constructed his gospel in this way? What do all three sections—particularly the first and the last—have in common, and why is that important?

Review the first conflict between Jesus and the Pharisees. What are the Pharisees most concerned about: keeping the law or meeting legitimate human needs? Were they correct in saying that Jesus and His disciples were breaking a Sabbath law? An important question to ask is, “Why did they even care how Jesus and His disciples were getting food?”

Review the second conflict between Jesus and the Pharisees; that is, the healing of the man with the withered hand. Why was it important for Luke to specify that it was the man’s right hand that was withered? How did the Pharisees and scribes come to the place where they were teaching that it was wrong to heal on the Sabbath? What is the relationship between healing on the Sabbath and doing good?

If you haven’t already, go back and underline the number of times you find the word “good” in chapter 6. What is the context for each time the word is used? How is Jesus using the word “good” throughout His teaching, and what does it mean?

Throughout the chapter, there is a contrast between good and evil. What portion of the chapter illustrates evil and what portions illustrate good? Identify three key themes throughout the Sermon on the Plain. How do they relate to the definition of good and evil? How do Christians identify those who are good and those who are evil? What does Jesus liken them to?

How is a follower of Jesus supposed to respond when he or she reads Jesus’ words? How do we know that Jesus’ teachings are accurate and word for word?
Time Series Analysis and Forecasting

"Predicting the future is hard, especially if it hasn't happened yet." -- Yogi Berra

A time series is a chronological sequence of observations on a particular variable. Usually the observations are taken at regular intervals (days, months, years), but the sampling could be irregular. Common examples of time series are the Dow Jones Industrial Average, Gross Domestic Product, unemployment rate, and airline passenger loads.

A time series analysis consists of two steps: (1) building a model that represents a time series, and (2) using the model to predict (forecast) future values. If a time series has a regular pattern, then a value of the series should be a function of previous values. If Y is the target value that we are trying to model and predict, and Yt is the value of Y at time t, then the goal is to create a model of the form:

Yt = f(Yt-1, Yt-2, Yt-3, …, Yt-n) + et

where Yt-1 is the value of Y for the previous observation, Yt-2 is the value two observations ago, etc., and et represents noise that does not follow a predictable pattern (this is called a random shock). Values of variables occurring prior to the current observation are called lag values. If a time series follows a repeating pattern, then the value of Yt is usually highly correlated with Yt-cycle, where cycle is the number of observations in the regular cycle. For example, monthly observations with an annual cycle often can be modeled by

Yt = f(Yt-12)

The goal of building a time series model is the same as the goal for other types of predictive models: to create a model such that the error between the predicted value of the target variable and the actual value is as small as possible. The primary difference between time series models and other types of models is that lag values of the target variable are used as predictor variables, whereas traditional models use other variables as predictors, and the concept of a lag value doesn't apply because the observations don't represent a chronological sequence.

DTREG Time Series Analysis and Forecasting

The Enterprise Version of DTREG includes a full time series modeling and forecasting facility. Some of the features are:

- Choice of many types of base models including neural networks.
- Automatic generation of lag, moving average, slope and trend variables.
- Intervention variables
- Automatic trend removal and variance stabilization
- Autocorrelation calculation
- Validation using hold-out rows at the end of the series
- Several charts showing actual, validation, predicted, trend and residual values.

ARMA and modern types of models

Traditional time series analysis uses Box-Jenkins ARMA (Auto-Regressive Moving Average) models. An ARMA model predicts the value of the target variable as a linear function of lag values (this is the auto-regressive part) plus an effect from recent random shock values (this is the moving average part). While ARMA models are widely used, they are limited by the linear basis function.

In contrast to ARMA models, DTREG can create models for time series using neural networks, gene expression programs, support vector machines and other types of functions that can model nonlinear relationships. So, with a DTREG model, the function f(.) in Yt = f(Yt-1, Yt-2, Yt-3, …, Yt-n) + et can be a neural network, gene expression program or other type of general model. This makes it possible for DTREG to model time series that cannot be handled well by ARMA models.
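DTREG is a commercial, GUI-driven product, so no code accompanies the original text. As a rough illustration of the same idea, here is a minimal sketch in Python (assuming scikit-learn is available, with a synthetic series standing in for real data) of a nonlinear autoregressive model of the form Yt = f(Yt-1, …, Yt-n):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lag_matrix(y, n_lags):
    """Build predictor rows [y[t-n_lags], ..., y[t-1]] with target y[t]."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

# Synthetic monthly series with a 12-month cycle and an upward trend.
rng = np.random.default_rng(0)
t = np.arange(144)
y = 100.0 + 0.5 * t + 10.0 * np.sin(2.0 * np.pi * t / 12) + rng.normal(0, 1, t.size)

# Fit a small neural network as the nonlinear function f(.).
X, target = make_lag_matrix(y, n_lags=12)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, target)
```

Any regressor with a fit/predict interface could stand in for the neural network here, which mirrors DTREG's choice among several base model types.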
Setting up a time series analysis

When building a normal (not time series) model, the input must consist of values for one target variable and one or more predictor variables. When building a time series model, the input can consist of values for only a single variable – the target variable whose values are to be modeled and forecast. Here is an example of an input data set:

Passengers
112.
118.
132.
129.
121.

The time between observations must be constant (a day, month, year, etc.). If there are missing values, you must provide a row with a missing value indicator for the target variable, like this:

Passengers
112.
118.
?
129.
121.

For financial data like the DJIA, where there are never any values for weekend days, it is not necessary to provide missing values for weekend days. However, if there are odd missing days such as holidays, then those days must be specified as missing values. It is also desirable to put in missing values for February 29 on non-leap years so that all years have 366 observations.

A lag variable has the value of some other variable as it occurred some number of periods earlier. For example, here is a set of values for a variable Y, its first lag and its second lag:

Y  Y_Lag_1  Y_Lag_2
3  ?        ?
5  3        ?
8  5        3
6  8        5

Notice that lag values for observations before the beginning of the series are unknown. DTREG provides automatic generation of lag variables. On the Time Series Property page you can select which variables are to have lag variables generated and how far back the lag values are to run. You can also create variables for moving averages, linear trends and slopes of previous observations. (In the original document, a screenshot of the Variables Property Page shows lag, moving average and other variables generated for Passengers.)

On the Variables property page, you can select which generated variables you want to use as predictors for the model. While it is tempting to generate lots of variables and use all of them in the model, sometimes better models can be generated using only lag values that are multiples of the series' cycle period. The autocorrelation table (see below) provides information that helps to determine how many lag values are needed. Moving average, trend and slope variables may detract from the model, so you should always try building a model using only lag variables.

An exceptional event occurring during a time series is known as an intervention. Examples of interventions are a change in interest rates, a terrorist act or a labor strike. Such events perturb the time series in ways that cannot be explained by previous (lag) observations. DTREG allows you to specify additional predictor variables other than the target variable. You could have a variable for the interest rate, the gross domestic product, inflation rate, etc. You also could provide a variable with values of 0 for all rows up to the start of a labor strike, then 1 for rows during a strike, then decreasing values following the end of a strike. These variables are called intervention variables; they are specified and used as ordinary predictor variables. DTREG can generate lag values for intervention variables just as for the target variable.

Trend removal and stationary time series

A time series is said to be stationary if both its mean (the value about which it is oscillating) and its variance (amplitude) remain constant through time.
Classical Box-Jenkins ARMA models only work satisfactorily with stationary time series, so for those types of models it is essential to perform transformations on the series to make it stationary. The models developed by DTREG are less sensitive to non-stationary time series than ARMA models, but they usually benefit from making the series stationary before building the model. DTREG includes facilities for removing trends from time series and adjusting the amplitude.

Consider a time series which has both increasing mean and variance (the original document illustrates this with a chart). If the trend removal option is enabled on the Time Series property page (see above), then DTREG uses regression to fit either a linear or exponential function to the data. In this example, an exponential function worked best; in the chart it is shown as the blue line running through the middle of the data points. Once the function has been fitted, DTREG subtracts it from the data values, producing a new set of values in which the trend has been removed but the variance (amplitude) is still increasing with time. If the option is enabled to stabilize variance, then the variance is adjusted to produce a transformed series that is much closer to being stable. The transformed values are then used to build the model. A reverse transformation is applied by DTREG when making forecasts using the model.

Selecting the type of model for a time series

DTREG allows you to use the following types of models for time series:

- Decision tree
- TreeBoost (boosted series of decision trees)
- Multilayer perceptron neural network
- General regression neural network (GRNN)
- RBF neural network
- Cascade correlation network
- Support vector machine (SVM)
- Gene expression programming (GEP)

Experiments have shown that decision trees usually do not work well because they do a poor job of predicting continuous values. Gene expression programming (GEP) is an excellent method for time series because the functions generated are very general, and they can account for trends and variance changes. General regression neural networks (GRNN) also perform very well in tests. Multilayer perceptrons sometimes work very well, but they are more temperamental to train. So the best recommendation is to always try GEP and GRNN models, and then try other types of models if you have time. If you use a GEP model, it is best to enable the feature that allows it to evolve numeric constants.

Evaluating the forecasting accuracy of a model

Before you bet your life savings on the forecasts of a model, it is wise to do some tests to evaluate its predictive accuracy. DTREG includes a built-in validation system that builds a model using the first observations in the series and then evaluates (validates) the model by comparing its forecast to the remaining observations at the end of the series. Time series validation is enabled on the Time Series property page (see above). Specify the number of observations at the end of the series that you want to use for the validation. DTREG will build a model using only the observations prior to these held-out observations. It will then use that model to forecast values for the observations that were held out, and it will produce a report and chart showing the quality of the forecast.
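Continuing the earlier sketch (reusing the lag helper and synthetic series from above), a hold-out validation in the same spirit might look like this; note that each step feeds the model's own prediction back in as a lag value:

```python
# Hold out the last 12 observations; train only on what precedes them.
n_hold = 12
train, hold = y[:-n_hold], y[-n_hold:]
X_tr, t_tr = make_lag_matrix(train, n_lags=12)
model.fit(X_tr, t_tr)

# Forecast the held-out span recursively: predictions become lag values.
history = list(train)
preds = []
for _ in range(n_hold):
    x = np.asarray(history[-12:]).reshape(1, -1)
    y_hat = float(model.predict(x)[0])
    preds.append(y_hat)
    history.append(y_hat)   # do NOT append the actual value here

mape = 100.0 * np.mean(np.abs((hold - np.asarray(preds)) / hold))
print(f"Validation MAPE: {mape:.2f}%")
```

Appending the actual values instead of the predictions at the marked line is exactly the shortcut the text warns about below: it makes the validation look unrealistically accurate compared with real forecasting.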
(In the original document, a chart shows the actual values as black squares and the validation forecast values as open circles.) Validation also generates a table of actual and predicted values:

--- Validation Time Series Values ---
Row   Actual      Predicted   Error        Error %
----- ---------   ---------   ----------   --------
133   417.00000   396.65452    20.345480   4.879
134   391.00000   377.05068    13.949323   3.568
135   419.00000   446.66871   -27.668706   6.604
136   461.00000   435.56485    25.435146   5.517
137   472.00000   462.14325     9.856747   2.088
138   535.00000   517.45376    17.546240   3.280
139   622.00000   599.82994    22.170064   3.564
140   606.00000   611.68442    -5.684423   0.938
141   508.00000   507.37890     0.621100   0.122
142   461.00000   447.01704    13.982962   3.033
143   390.00000   398.09507    -8.095074   2.076
144   432.00000   444.67910   -12.679105   2.935

If you compare validation results from DTREG with other programs, you need to check how the other programs compute the predicted values. Some programs use actual (known) lag values when generating the predictions; this gives an unrealistically accurate prediction. DTREG uses predicted values as the lag values when forecasting: this makes validation operate like real forecasting, where lag values must be based on predicted values rather than known values.

Time series model statistics report

After a model is created, DTREG produces a section in the analysis report with statistics about the model.

Autocorrelation and partial autocorrelation

The autocorrelation and partial autocorrelation tables provide important information about the significance of the lag variables.

----------------------------- Autocorrelations ------------------------------
Lag Correlation Std.Err. t      -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
1    0.70865407 0.117980 8.504  | . |************** |
2    0.23608974 0.117980 2.001  | . |***** |
3   -0.16207088 0.121217 1.337  | . **| . |
4   -0.41181655 0.122712 3.356  | *******| . |
5   -0.46768898 0.131961 3.544  | ********| . |
6   -0.46501203 0.143009 3.252  | ********| . |
7   -0.43595197 0.153150 2.847  | ********| . |
8   -0.36759217 0.161538 2.276  | ******| . |
9   -0.13341625 0.167246 0.798  | . **| . |
10   0.20091610 0.167984 1.196  | . |**** . |
11   0.58898400 0.169644 3.472  | . |************ |
12   0.82252315 0.183296 4.487  | . |**************** |
13   0.58265202 0.207349 2.810  | . |************ |
14   0.17178261 0.218423 0.786  | . |*** . |
15  -0.16852975 0.219360 0.768  | . **| . |
16  -0.36938903 0.220257 1.677  | . ******| . |

An autocorrelation is the correlation between the target variable and lag values for the same variable. Correlation values range from -1 to +1. A value of +1 indicates that the two variables move together perfectly; a value of -1 indicates that they move in opposite directions. When building a time series model, it is important to include lag values that have large, positive autocorrelation values. Sometimes it is also useful to include those that have large negative autocorrelations. Examining the autocorrelation table shown above, we see that the highest autocorrelation is +0.82252315, which occurs at a lag of 12. Hence we want to be sure to include lag values up to 12 when building the model. It is best to experiment with including all lags from 1 to 12 and also ranges such as just 11 through 13. DTREG computes autocorrelations for the maximum lag range specified on the Time Series property page, so you may want to set it to a large value initially to get the full autocorrelation table and then reduce it once you figure out the largest lag needed by the model.
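The sample autocorrelations in such a table are straightforward to compute directly. A minimal sketch, again reusing the synthetic series from above, for choosing the lag range:

```python
def autocorr(series, max_lag):
    """Sample autocorrelation of the series at lags 1..max_lag."""
    s = np.asarray(series, dtype=float)
    centered = s - s.mean()
    denom = np.sum(centered * centered)
    return [np.sum(centered[k:] * centered[:-k]) / denom
            for k in range(1, max_lag + 1)]

# Large positive values near the cycle length (here, lag 12) indicate
# which lag variables are worth including in the model.
for k, r in enumerate(autocorr(y, 16), start=1):
    print(f"lag {k:2d}: {r:+.3f}")
```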
The second column of the autocorrelation table shows the standard error of the autocorrelation; this is followed by the t-statistic in the third column. The right side of the autocorrelation table is a bar chart with asterisks used to indicate positive or negative correlations right or left of the centerline. The dots shown in the chart mark the points two standard deviations from zero. If the autocorrelation bar is longer than the dot marker (that is, it covers it), then the autocorrelation should be considered significant. In this example, significant autocorrelations occurred for lags 1, 2, 11, 12 and 13.

Partial autocorrelation table

------------------------- Partial Autocorrelations --------------------------
Lag Correlation Std.Err. t      -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
1    0.70865407 0.083333 8.504  | . |************** |
2   -0.53454362 0.083333 6.415  | **********| . |
3   -0.11250388 0.083333 1.350  | . *| . |
4   -0.19447876 0.083333 2.334  | ***| . |
5   -0.04801434 0.083333 0.576  | . | . |
6   -0.36000273 0.083333 4.320  | ******| . |
7   -0.23338000 0.083333 2.801  | ****| . |
8   -0.31680727 0.083333 3.802  | *****| . |
9    0.14973536 0.083333 1.797  | . |*** |
10  -0.03381760 0.083333 0.406  | . | . |
11   0.54592233 0.083333 6.551  | . |*********** |
12   0.18345454 0.083333 2.201  | . |**** |
13  -0.45227494 0.083333 5.427  | ********| . |
14   0.16036757 0.083333 1.924  | . |*** |

The partial autocorrelation is the autocorrelation of time series observations separated by a lag of k time units, with the effects of the intervening observations eliminated. Autocorrelation and partial autocorrelation tables are also provided for the residuals (errors) between the actual and predicted values of the time series.

Measures of fitting accuracy

DTREG generates a report with several measures of the accuracy of the predicted values. The first section compares the predicted values with the actual values for the rows used to train the model. If validation is enabled, a second table is generated showing how well the predictions for the validation rows match the actual rows.

============ Time Series Statistics ============
Exponential trend: Passengers = -239.952648 + 351.737895*exp(0.005155*row)
Variance explained by trend = 86.166%

--- Training Data ---
Mean target value for input data = 262.49242
Mean target value for predicted values = 261.24983
Variance in input data = 11282.932
Residual (unexplained) variance after model fit = 254.51416
Proportion of variance explained by model = 0.97744 (97.744%)
Coefficient of variation (CV) = 0.060777
Normalized mean square error (NMSE) = 0.022557
Correlation between actual and predicted = 0.988944
Maximum error = 41.131548
MSE (Mean Squared Error) = 254.51416
MAE (Mean Absolute Error) = 12.726286
MAPE (Mean Absolute Percentage Error) = 5.5055268

If DTREG removes a trend from the time series, the table shows the trend equation, and it shows how much of the total variance of the time series is explained by the trend. There are many useful numbers in this table, but two of them are especially important for evaluating time series predictions:

Proportion of variance explained by model – this is the best single measure of how well the predicted values match the actual values. If the predicted values exactly match the actual values, then the model would explain 100% of the variance.
Correlation between actual and predicted – this is the Pearson correlation coefficient between the actual values and the predicted values; it measures whether the actual and predicted values move in the same direction. The possible range of values is -1 to +1. A positive correlation means that the actual and predicted values generally move in the same direction. A correlation of +1 means that the actual and predicted values are synchronized; this is the ideal case. A negative correlation means that the actual and predicted values move in opposite directions. A correlation near zero means that the predicted values are no better than random guesses.

Forecasting future values

Once a model has been created for a time series, DTREG can use it to forecast future values beyond the end of the series. You enable forecasting on the Time Series property page (see above). The Time Series chart displays the actual values, validation values (if validation is requested) and the forecast values. The analysis report also displays a table of forecast values:

--- Forecast Time Series Values ---
Row   Predicted
----- ---------
145   457.63942
146   429.32697
147   459.64579
148   506.19975
149   514.89035
150   584.91959
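In the same illustrative spirit, forecasting beyond the end of the series is the validation loop without the comparison step: refit on the full series and keep feeding predictions back in. A minimal sketch, continuing the code above:

```python
# Refit on the complete series, then forecast six future periods.
X_all, t_all = make_lag_matrix(y, n_lags=12)
model.fit(X_all, t_all)

history = list(y)
for _ in range(6):
    x = np.asarray(history[-12:]).reshape(1, -1)
    history.append(float(model.predict(x)[0]))

forecast = history[-6:]
print([f"{v:.1f}" for v in forecast])
# If a trend had been removed before modeling, the fitted trend values for
# the forecast rows would be added back here (the reverse transformation).
```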
From a geological and physiographic point of view the Sierra Nevada is, as its Spanish name implies, a single range built on very simple structural lines. It belongs to the class of mountains of the Basin Range type, so called because it is best exemplified in the Great Basin, that great region of no drainage to the sea which lies between the Sierra Nevada and the Wasatch. In this region the earth’s crust has been broken into blocks, elongated in a general north-south direction. Some of these have been depressed and lie beneath the broad valleys of the desert, while others have been uplifted and constitute its linear mountain ranges. The uplift has, however, not been uniform in most cases, but has been effected by a rotation of the block on an axis parallel to the range. It results from this that the block as a whole has been tilted so as to present a steep front, or scarp, on one side and a gentle slope on the other. The simplicity of the profile thus produced has of course been greatly modified by the erosion of the block which has taken place in the long period of time that has elapsed since its uplift.

The Sierra Nevada is one of these uplifted and tilted blocks, presenting a very steep, bold front to the east and a slope of only about 2° to the west. The edge of the tilted block is the crest of the range; its eastern front is the surface of the break whereby it was dislocated from the relatively depressed region of the desert; and its western slope represents the old, low surface of the region before it was elevated. Both the eastern front and the western slope have suffered greatly from erosion since the range came into being by this process of uplift and tilting. On the east the fault scarp has been battered to a slope much less steep than it was originally, and the crest of the range has thereby migrated westward. On the west the tilting of the surface determined a drainage by streams running transverse to the axis of the range; and these, by reason of their velocity, cut sharp trenches, which in the course of time have been deepened and widened into the great cañons of the Sierra Nevada.

On the divides between the cañons, in cases where their slopes have not yet intersected, there are still remnants of the old surface much in the same condition as it was before the uplift. From these flat-topped divides, an observer may get such extended views in all directions that he forgets he is in the mountains, and, overlooking the deep cañons, gets the sensation of being on a vast sloping plain with occasional low hills rising above the general surface. It is this plain which has a slope of 2° from the edge of the Great Valley to the region of the summit peaks and crests. Its remarkable evenness on many of the divides, particularly in the northern part of the range, is due to the fact that the region before its uplift was extensively covered by deposits of meandering streams, and these in turn covered by layers of volcanic ash, agglomerate, and lava, thus smoothing out and obscuring the inequalities of an erosional surface of comparatively low relief.

Considered in the light of these observations and viewed in its entirety, the Sierra Nevada is recognized not only as one of the Basin Ranges, but as the most perfect and most magnificent example of the type. The uplift of the Sierra Nevada was not, however, a simple sudden event, nor even a continuous process of earth deformation.
The uplift proceeded by stages, of which two are strongly pronounced, particularly in the southern part of the range. Between these stages the process of tilting stopped for a long time, and in the interval the cañons, which had been deepened to the limit for the first stage, were widened into broad flat-floored valleys. As a consequence of the second stage of uplift, the streams flowing through these wide valleys were rejuvenated and resumed the work of cañon cutting, leaving large remnants of the old valley floors as benches or terraces above the brink of the cañon walls. The high valleys that border the cañon of the Kern are perhaps the best records of this period of stability in the Sierra Nevada between two stages of uplift. The wide, level, rock floor of Tuolumne Meadows, by its contrast with the gorges of Tenaya Creek and Tuolumne River below the Meadows, suggests that it, also, may have been developed in its preglacial outlines during this same period of stability.

The uplifting process of the second stage is not yet completed. In 1872, at the time of the heavy earthquake of that year, a movement occurred on the great fault which bounds the Sierra Nevada on the east. By this movement the elevation of the southern part of the range, in relation to Owens Valley, was increased by about twenty feet; and a fresh scarp was formed, causing a sharp step in the profile of the lower flank of the range.

Having now acquired an understanding of the general configuration of the Sierra Nevada as a single mountain of the Basin Range type, formed by the tilting of an elongated block of the earth’s crust, we may proceed to consider the cañons which have been cut into the rising mass, and particularly the cañon of the Merced with its famous Yosemite Valley. With one notable exception the great cañons of the Sierra Nevada are transverse to the range. The dominant drainage still follows the lines imposed upon it consequent upon the tilting of the crustal block. In this respect the drainage pattern differs in a marked degree from that of many other mountain ranges. The main valleys and stream courses of the Appalachian Range, for example, are parallel to the length of the elevated mass, with only short transverse outlets for the extended longitudinal system. Similarly the main drainage of the Coast Ranges is parallel to their elongation. The reason for this is that in these and similar mountain ranges the belts of rock conform in direction to the elongation of the range, and belts of soft rock generally alternate with belts of hard rock; so that, although the drainage may have been originally transverse, consequent upon the uplift, the tributaries of these consequent streams that happened to be on belts of soft rock eroded cañons very much faster than those which drained belts of hard rock, and faster than the consequent streams, which traversed both hard and soft belts. The result of this has been that, in the course of time, the great valleys of old mountains become located in the soft belts, and so become the dominant features of the drainage systems. Such longitudinal drainage is technically designated subsequent, to distinguish it from the earlier transverse or consequent drainage.

The reason that no notable subsequent drainage has been developed in the Sierra Nevada is twofold:
1. The time that has elapsed since the uplift of the range has been so short that, even in the northern part of the range where the contrast of hard and soft belts is pronounced, the tributary streams have as yet made but little headway in establishing their domination. Geologists and geographers regard a well developed subsequent drainage as characteristic of a relatively old mountain range. So we may classify the Sierra Nevada, on the basis of the meagerness of subsequent streams, as a young mountain range.

2. In the southern part of the Sierra Nevada there are but few contrasts in the hardness of the rocks, the mass of the mountain being almost wholly granite, so that the condition favorable for the development of subsequent streams, on a well-marked longitudinal pattern, is lacking.

The notable exception to this transverse disposition of the Sierran streams is the Kern River. This stream, however, had its position determined for it by a remarkable rift in the earth’s crust, parallel to the great fault which marks the eastern boundary of the range.

Long after the uplift of the Sierra Nevada to practically its present altitude, after the cañons had been eroded down to very nearly their present depth below the flat-topped divides, the climate changed for the worse. In the summit region the ablation of summer failed to remove the snows of winter. The snows of many years accumulated and became packed down under their own weight into ice, so that glaciers were formed. At first these were small and situated on the north side of the great peaks where ablation was feeble; but later they expanded into great névés from which tongues of ice extended down into the cañons. These tongues were veritable streams of ice many hundreds, and even thousands, of feet deep, which flowed slowly through the cañons for ages. They extended far below the line of perennial snows, and in each case reached a limit, at the time of maximum severity of climate, at a point in the cañon where the ablation of summer just balanced the forward flow of the ice. At this point the glacier dumped the load of rock débris which had been shed upon it from the cañon walls above. Thus a great lunate ridge of fragments, ranging in size from grains of sand to spalls the size of a house, was formed, spanning the cañon from wall to wall. This ridge is called a terminal moraine; and there are many such moraines in the great cañons.

Besides the terminal moraine, there are others farther up the cañons, called moraines of retreat, which mark stages in the waning of the glaciers when the climate again became more genial. To these moraines were delivered not only the rock fragments which had ridden on the back of the glacier to its end, but also similar fragments carried in the body of ice, which had either fallen into crevasses or had been plucked out of the floor of the cañon in places where the rock was so intersected by joint cracks as to be divided into blocks, and so easily dislodged by the ice stream. Many rock fragments were also carried along the margins of the ice.

Besides the terminal moraines and moraines of various stages of retreat, the upper slopes of the cañons formerly occupied by glaciers are in many cases, particularly near their headwaters, modified by great ridges of glacial débris, known as lateral moraines. These accumulations were formed in the same way as the terminal moraines, due to a sideways movement of the ice to balance the ablation along its margin.
The effect of the long continued flow of these ice streams upon the configuration and aspect of the cañons is very notable, and may be observed in the upper reaches of all the great Sierran cañons. In their lower stretches, the cañons are all V-shaped in transverse profile; the slopes are uneven and generally encumbered with a mantle of rock débris and soil arising from the disintegration and decay of the rock under the weather. Where rock surfaces are exposed these present the appearance of fractures, or are bounded by joint planes, and the rock is usually somewhat decomposed for a short distance below the surface.

As we pass up the cañons, within the limits of former glaciation, the whole aspect of the landscape changes. The cañons are no longer V-shaped in profile, but more nearly U-shaped. All of the rock débris and soil has been swept out of the cañon, and bare rock surfaces are seen on every hand. These surfaces are, moreover, not commonly fractures or joint planes, but are clearly surfaces due to abrasion, since they show abundant evidence of scoring, striation, fluting, and polishing. Many surfaces are so highly polished that they reflect the sun’s rays like a mirror. Where the polish is lacking it is generally evident that this is due to exfoliation, the scaling off of the rock in thin slabs, and that the polish once extended over the areas thus denuded.

The bare rock surfaces have acquired rounded or hummocky forms which, from a distance and in the aggregate, look like the backs of sheep in a flock. They are therefore known technically as roches moutonnées. The hummocks are characteristically elongated in the direction of the cañon and have a symmetrical transverse profile but an asymmetrical longitudinal profile, with a steep front facing downstream. This asymmetry is due to the fact that the upstream side of the hummock received the full force of the abrasive impact of the ice current, whereas the tendency on the downstream side was to pull out or pluck fragments from the rock mass and so leave a steep front.

The abrasion so apparent on the roches moutonnées affected all surfaces over which the ice passed. It was not done by the ice itself, however, but by the rock fragments imbedded in it. The passage of the ice stream through the cañons not only swept away all loose rock and soil on the slopes, but by this process of abrasion removed all the decomposed material, so that the rocks so generally exposed in the glaciated portions of the cañons are in a wholly sound and fresh condition. It is evident also that the abrasive process was competent to reduce even the fresh hard rock after the superficially decomposed material had been removed. The long continued abrasion and plucking together have deepened the cañons, at the same time giving them their characteristic U-shaped profile. One of the finest examples of the effect of ice upon the configuration of a cañon, once merely a stream gorge, is afforded by Tenaya Cañon, a good general view of which may be had from Mirror Lake.

The power of glaciers to deepen the cañons in which they flow is perhaps best exemplified in their upper reaches where the general grade is steeper. Here the characteristic longitudinal profile of the cañons is a series of steps, the present streams cascading from one level to another. On each step is a rock-rimmed basin, or tarn. In some cases there may be several such steps and tarns in the course of a mile, while in others the steps may be much broader and more widely spaced.
No other agency is known whereby these tarn basins could have been formed except the abrasive power of rock-laden ice. Seizing upon the inequalities of the original stream profile, the ice has accentuated these into a series of giant steps, the treads of which suffered heavy local abrasion by the impact of the ice descending from above, while the risers were developed into cliffs by exceptionally active plucking. One of the lowest of the tarns in the vicinity of Yosemite is Tenaya Lake, which lies in a rock-rimmed basin 140 feet deep.

Another extremely interesting feature of most glaciated cañons, to be found at their very head, where the ice stream has its origin, is the cirque. This is a vast amphitheatre of bare rock enclosed by nearly vertical cliffs, generally hundreds of feet high, in the floor of which is a tarn. In some of the larger cirques there are several tarns at slightly different levels. These cirques appear as great bites in the mass of the mountain and are clustered around the high peaks of the summit that divide the drainage. They have been formed by that peculiarly vigorous process of ice erosion which, on a smaller scale, has given us the steep faces of the roches moutonnées hummocks, and the risers on the steps of the glaciated cañons. The glaciers at their heads ate their way into the mountain mass by nibbling at the base of the cliffs and so undermining them. The blocks of rock plucked out from the cliffs were incorporated into the ice and carried away by the glacier to be delivered chiefly on its sides, to make the great lateral moraines which are now found below the cirques.

As the process proceeded, and the cirques were enlarged at the expense of the peaks and ridges, the divides between opposing cirques were in many cases reduced to thin partitions with sharp knife-like crests. As still further enlargement proceeded, these divides were rapidly lowered. We have thus presented to us in this encroachment of cirques, one on the other, a process whereby lofty mountain crests and summits are first gradually narrowed and then rapidly reduced. This glacial destruction of mountain crests may eventually so lower the elevation that the conditions favoring the accumulation of ice may be done away with. Alpine glaciers may therefore be said to be self-destructive.

The glaciation of the high Sierra, however, occupied but a brief time from a geological point of view, and before the destruction of the high peaks and summits had proceeded far the climate changed and the glaciers almost wholly melted away, leaving only remnants in a few of the higher cirques. Of these lingering glaciers within the limits of the National Park the most notable, as well as the most accessible, are those on the east side of Mt. Dana, on the north side of Mt. Lyell, and on the northeast side of Mt. McClure. These glaciers are very small compared with the great ice stream that once filled the whole of Tuolumne Meadows, and sent one tongue down the cañon of the Tuolumne River far below Hetch Hetchy and another down Tenaya Cañon into Yosemite. They are, however, interesting features of the high Sierra and well worth a visit. Although they are very small, they have all the essential features and functions of their great ancestors, except that they are in some cases broader than long. Here at the lower edge of the ice one may see a moraine in actual process of accumulation; and on searching among the bowlders one may find some that have been abraded and scratched.
The ice is traversed by crevasses just as in the case of the great glaciers, and riding on the ice may be seen the débris shed from the cliffs above. If we cross the glacier to its upper edge, where it appears to adhere to the base of the cirque walls, we find that the appearance is deceptive; for the ice, instead of hugging closely the base of the cliffs, is separated from them by a space of several feet. The space extends down for a long distance between the wall of rock and the wall of ice as a great chasm. This detachment of the glacier from the cliffs is known as the bergschrund. At the bottom of the chasm goes on the plucking and sapping action which gives the cirque walls their verticality, as seen when the ice eventually vanishes.

Among the glaciated cañons of the Yosemite National Park those of the Merced and the Tuolumne are the most impressive and the most interesting. Just as at present these two streams gather up and carry forward to the San Joaquin Valley practically all the drainage of the park, so, in glacial time, the great bodies of ice which covered the summit region within the limits of the park, excepting the highest peaks, converged on the same two cañons, and flowed down them to the limit where ablation balanced the forward movement. Some of the ice, however, flowed through the passes on the crest of the range toward the east and gave rise to glacier tongues on that side which were much shorter than those on the west; because then, just as now, the climate was much drier and the summers hotter than on the west side of the summit. Most of these short glaciers on the eastern flank of the range have left splendidly developed lateral moraines. Some of these, particularly those of Bloody Cañon, Leevining Cañon, and Parker Creek, in the vicinity of Mono Lake, are easily accessible to visitors in the park.

It is interesting to note that, at the maximum extension of these east flank glaciers, the level of Mono Lake was about 675 feet higher than at present. At this high level the glaciers reached the lake. But even under these conditions the greatly increased influx of water from the melting glaciers was balanced by evaporation, for at its highest stage Mono Lake had no outlet. Thus we have the apparent anomaly of glaciation combined with aridity. The explanation of course is that the glacial streams flowed from a humid region west of the crest of the Sierra Nevada into an arid region to the east of the crest. The line between the two strongly contrasted climatic provinces was, however, very sharp.

In the drainage system of the Merced River, Tenaya Cañon is perhaps the most typical illustration of a thoroughly glaciated stream gorge. It is at the same time the most easily accessible to visitors in the Yosemite Valley. Everybody who goes to Yosemite gets a glimpse of Tenaya Cañon from Mirror Lake. Yosemite is also a glaciated cañon. There is a large moraine spanning the Valley just below El Capitan, and the ice must have extended that far at least. Yet the contrast between Tenaya Cañon and Yosemite Valley is very great. If Tenaya be the type of a glaciated cañon, Yosemite must be abnormal. In what does the departure from the type consist? Evidently in the width of the Valley floor, its level character, and the entire absence of bare rock surfaces. The floor of Yosemite is everywhere sandy, and there is reason to believe that the deposits of sand are several hundred feet thick.
If we imagine this sand removed and the talus at the base of the great cliffs nonexistent, we would see the Valley as it actually was immediately on the retreat of the glacier. The picture before the mind’s eye would then differ in no essential respect from the view we get of Tenaya Cañon. The Valley would then be true to type. It would be larger and deeper, but there are good reasons for this. The glacier entering Yosemite from Tenaya was not the only one that filled the Valley with ice. An equally important one flowed in from the Upper Merced and Little Yosemite; and another moved down the Illilouette. These three great glaciers converged on Yosemite, and the cross section of the confluent glacier in the Valley was probably not less than the sum of the cross sections of the three tributaries. This great increase in the volume of the ice, particularly as expressed in its depth, together with the steepness of approach of the tributaries to the Valley, would greatly increase the abrasive action of the glacier on its floor. Just below the confluence, that is in Yosemite, the cañon would be over-deepened and we would have a rock-rimmed basin formed, like that of Tenaya Lake but larger and deeper.

Thus, in our mental picture of the restoration of Yosemite as it was at the immediate retreat of the glacier, we must introduce a beautiful lake, in which were mirrored the majestic walls encircling it. At the lower end of this lake, just below El Capitan, had been left a moraine which helped to accentuate the depression caused by the scour of the ice. Into this lake poured the sandy and milky waters of the three glaciers, now separate during the long period of their retreat. These streams built out a delta into the lake which eventually filled it, giving us the present floor of the Valley, seven miles long and one mile wide. On this floor has accumulated the talus of rock spauls shed from the cliffs; and across the floor in a shallow sandy trench flows the Merced River, cascading over the moraine below El Capitan, giving us the Valley as we know it to-day.

Hetch Hetchy Valley is generally recognized as being analogous to Yosemite, though on a smaller scale. The profound gorge of the Tuolumne, with its stepped profile of bare, glaciated rock, emerges suddenly on a wide, flat-floored, sandy valley, just as Tenaya Cañon opens on Yosemite. Both valleys have had the same history. Glacial abrasion and plucking over-deepened the cañon, so that, when the ice finally retreated, a lake remained which was in time filled with sand. Little Yosemite Valley owes its flat, sandy floor and its breadth between walls to the same process. The valley is but a tread on the great stepped profile of the Merced; and on this tread there had been scoured out a rock-rimmed basin which, on the final retreat of the ice, contained a tarn about three miles long. Several meadows in Tenaya Cañon above Tenaya Lake are similarly filled tarns, as are also many meadows of the higher altitudes.

The contrast between the typical glaciated cañon of Tenaya Creek and the aberrant Yosemite and Hetch Hetchy valleys is not, however, due wholly to the fact that they once held lake basins now filled with sand. The contours and the profiles of the walls of both Yosemite and Hetch Hetchy differ from those of Tenaya Cañon. The contours of Yosemite are in general zig-zag, expressive of salients and reëntrants which are full of surprises and suggest some mysteriously intentional process of sculpture.
In the profiles the vertical element dominates and gives the Valley its atmosphere of solemnity and majesty, the same atmosphere which the great architects of the Middle Ages gave to their splendid Gothic cathedrals. These features are in striking contrast with the smoothly flowing, though undulatory, contours and profiles of Tenaya Cañon below Clouds Rest as seen from Mirror Lake.

When we try by close observation to ascertain the cause of this contrast we discover, as Matthes has so well told us, that in the sculptural modification of the Valley by glacial erosion there has been a large element of control inherent in the structure of the granite, which is the prevailing rock. The granite originally solidified from a molten condition under a cover of immense thickness. This cover was removed by erosion ages before the uplift which gave the Sierra Nevada its present configuration. The relief of load, as erosion proceeded, and the lowering of the temperature of the mass as the granite was brought nearer the surface and eventually into the zone of erosion, greatly changed the condition of compressive strain which the force of gravity imposes upon the rock. This redistribution of strain caused certain portions of the mass to be overstrained, and relief was obtained by the development of systems of cracks or fissures which we call joints. These joints are in some cases straight and parallel, so as to divide the rock into thick slabs, as on the face of Sentinel Rock. In other cases there are two or three intersecting systems which divide the rock into prisms, or cuboidal, or rhomboidal blocks. In still other cases the joints are curved and roughly concentric, as at the Royal Arches. Many portions of the granite are, however, almost free from joints, or, if they be present, they are so widely spaced that they only slightly affect the integrity of the rock, as at El Capitan.

Now, nearly all of the vagaries of erosion, and particularly of ice sculpture, in Yosemite National Park are referable to the erratic distribution of these systems of joints and to the disposition of the joint planes in each system. The ice, in passing over or past jointed granite, plucked out the blocks one by one and incorporated them within its body, carrying them forward with the glacial flow. In the course of time a vast quantity of rock was thus removed. In the unjointed portions of the granite in contact with the ice, on the other hand, erosion was limited to abrasion, and comparatively little rock was removed by this process. In this way there were large differences in the rate of glacial erosion in near-by localities; and the same influence had also affected ordinary atmospheric and stream erosion in the ages that preceded the glacial period.

The great salients like El Capitan are composed of granite in which the joint structure is but feebly developed and were, therefore, resistant to erosion by the dislodgment of blocks. The trough in which Bridalveil Creek flows above the falls is clearly conditioned by the intersection of inclined joints. The great steps over which Vernal Falls and Nevada Falls tumble are equally clearly determined by the disposition of the joint planes there. In the exfoliation of the curved slabs, so well exemplified in the Royal Arches, we have an excellent illustration of the control exercised by curved joints in the development of the great domes of the park, such as North Dome, Half Dome, and many others.
The curious spires which are so common about Tuolumne Meadows, and which characterize the landscape in the wonderful view from the summit of Mt. Conness, owe their configuration to the same control of erosion by internal structure. It is well to note, however, that the curved joints which determined the configuration of the domes and spires presented a structure which was not so favorable to the dislodgment of spauls, whether by atmospheric agencies or by glacial plucking, as was the structure formed by intersecting straight joints. The domes and spires represent portions of the granite which were, like the unjointed rock, relatively resistant to mechanical disintegration, so that, when the rest of the region was reduced in level, they remained as eminences.

The higher domes and spires probably rose well above the surface of the great névé and the ice streams flowing from it, so that the exfoliation which gave them their present configuration is referable not to ice plucking, but to the heaving action of freezing water in the joint cracks, and to the slow, recurrent movements due to dilatation and contraction under varying temperature. Half Dome, the asymmetry of which, no less than its isolation and height, makes it so conspicuous a feature of Yosemite Valley, owes its peculiar configuration to the intersection of two systems of joints: a system of straight, vertical joints parallel to the flat west face, probably disposed in a narrow zone, and a system of curved joints concentric with the rounded east side.

Many of the lower domes have, however, been overridden by the ice and so have had a glacial modelling, by abrasion, imposed upon surfaces originally wholly determined by curved joints. In this process of glacial modelling, great thick slabs of granite which had become loosened from the parent mass were plucked out by the ice, leaving vertical walls from a few inches to twenty feet or more high facing down stream. The ice immediately flowed into the reëntrants thus formed and abraded the new surface exposed to its attack by the removal of the slab. In cases where this happened during the retreat of the ice front we find abundant manifestations of glacial scouring at sharply different levels on the bare rounded surfaces.

Some extreme cases of this sort have suggested to Mr. Matthes that there may have been two glaciations of the region, and he has adduced other evidence in favor of this view. In this he is supported by the earlier interpretation of the moraines near Mono Lake by Russell; by the observations of Turner and Ransome in the Big Trees quadrangle; and by the later observations of Knopf on the eastern flank of the Southern Sierra Nevada. But the doctrine of two distinct glaciations of the Sierra Nevada is one which must be subjected to much more critical study before it can be accepted by geologists as an established fact. There is, of course, in this doubt as to the reality of two distinct glacial periods, no objection to the recognition of frequent retreats and advances of the glacier front in one and the same glacial period; for we are familiar with such oscillations in the existing glaciers of Alaska, Norway, and the Alps.

Yosemite Valley differs from Tenaya Cañon in still other important features which excite not only geological interest, but also the wonder and admiration of all who come to the Valley. These are the waterfalls, the grace and beauty of which, no less than their great height, have made them famous the world over.
Streams which flow with gentle gradients in comparatively shallow channels on the uplands, corresponding to the old surface before it was uplifted, reach the brink of the Valley and plunge headlong into the abyss, clearing all contact with the cliffs for hundreds of feet. These upland channels which thus appear at the brink of Yosemite Valley as small notches in its walls belong to the class of so-called "hanging valleys," which are rather characteristic of glaciated regions in general. They may have one of two different origins:

1. The relatively shallow upland channels may have been nonexistent prior to the glaciation of the region, the drainage having then taken some other path. In this case the present position of the streams was determined by the configuration of the surface vacated by the ice, and their channels, in so far as these have been cut by water erosion and not by glacial scouring, are wholly post-glacial effects. If this be the explanation of Yosemite and Bridalveil creeks, then there is nothing surprising in the fact that, in the short time since the ice vanished, they have eroded but shallow trenches in the glaciated upland, and so appear as hanging valleys on the brink of the Valley.

2. The upland creeks that now cascade into the Valley at Yosemite Falls and Bridalveil Falls may have been preglacial drainage lines which were temporarily occupied by the ice with the rest of the country, and which again became functional when their channels were vacated by the ice. In this case these two tributaries of the Merced must have been engaged in the work of stream erosion as long as the main stream, and there should have been, just prior to glaciation, no glaring discordance in the depth of their trenches. If this be so, then the main cañon of the Merced at Yosemite Valley must have been very shallow just prior to glaciation, and nearly the whole depth of the Valley would have to be ascribed to glacial erosion. But we cannot accept this latter explanation, because the cañon of the Merced below the limit of glaciation affords us the measure of preglacial erosion and tells us that Yosemite has been only modified and over-deepened by ice work, but not, in its larger features, created by the glacier.

It would seem, therefore, that both Yosemite Creek and Bridalveil Creek are post-glacial drainage features; although the argument applies with greater force to the former than to the latter. The same interpretation can scarcely, however, be placed on Illilouette Falls, and much less can it be applied to Vernal and Nevada falls on the main flow of the Merced. These three magnificent cascades clearly are on lines of pre-glacial drainage, and their relation to Yosemite Valley is not the same as that of Yosemite and Bridalveil falls. The drop from Little Yosemite and that from the upland valley of the Illilouette to the floor of Yosemite are nearly the same, and the gorge into which the waters tumble in both cases is a glacially modified inheritance of a pre-glacial condition.

Attention has been called to the fact that the uplift of the Sierra Nevada took place by two main elevatory movements, with a long period of rest between, during which the high valleys of the Kern region were evolved to their present notable width. It may well be that on the drainage system of the Merced there were also similar high valleys carved out of the mountain mass, and that Little Yosemite and upper Illilouette are remnants of this old topography.
Such valleys after the second uplift would, of course, be subject to vigorous dissection by reason of the accentuation of the stream grades. This dissection, however, probably proceeded as it does in plateaus underlain by flat-lying strata; that is, by the recession of falls, so well exemplified at Niagara and by the falls of the Yellowstone. At Niagara the rocks are hard limestones resting on soft shales, while in the Yellowstone the strata are sheets of volcanic rock. But in both cases the gorge has been formed by the slow upstream recession of the falls. Horizontal jointing in the granite, such as is so well displayed near the top of Lower Yosemite Falls, and one third of the way up the walls of Hetch Hetchy, would have the same effect as planes of stratification in promoting this process of gorge cutting, particularly if combined with transverse vertical jointage, which would determine the verticality of the head of the gorge. Both horizontal and vertical jointage are well displayed in the gorge between Nevada Falls and the floor of the Valley.

We may thus picture to ourselves a pre-glacial Yosemite Valley, not as deep, nor as wide, nor as sheer-walled as the present Valley, but nevertheless a profound erosional gorge ending in spray-filled culs-de-sac below both Little Yosemite and the high valley of the Illilouette, with great cascades in them not essentially different from those we see to-day with so much pleasure and interest. Nevada, Vernal, and Illilouette are, therefore, from this point of view, falls which handed over their work of extending the cañon of the Merced into the High Sierra to the Merced Glacier for a geologically brief time, and have since resumed operations at nearly the old stand. The amount of recession effected by the glacier was probably not great, since the work must have been done chiefly, though not wholly, by the process of plucking, and the paucity of the moraines below Yosemite indicates but a small product.

Pre-glacial Tenaya Cañon, in contrast to that of the Merced, was not extended upstream by a sapping process, but by stream corrasion through granite traversed by a zone of vertical joints parallel to its length, and deficient in horizontal and in transverse vertical joints. The gorge was narrow and steep, and although it doubtless had its cascades, these did not have the sheer drop displayed by the Nevada and Vernal Falls. The deepening of the cañon by stream corrasion was more uniformly distributed throughout the length of the cañon.
Vietnamese agricultural policy has changed radically during the past 5 decades. Decollectivization in the 1980s and 1990s followed 2 decades of collective agriculture. This article examines the effects of agricultural policy on land use. It reports the results of remote image interpretation and socioeconomic field study in a Black Thai commune in Vietnam's northern mountains. It suggests that the landscape in the commune has been highly dynamic and that this dynamism was partly the result of the agricultural policy. Collectivization and decollectivization affected land use, but their influence was mediated by other factors, primarily changing technology and markets. In addition, the relationship between national policy and local land use is complicated by 2 factors: (1) changes in local institutions may predate national reforms, and (2) implementation of national policy and the resulting local institutions may differ from place to place.

Black Thai villages have experienced radical changes in agricultural policy during the past 5 decades. The Vietnamese government mandated the villages to work on the land in agricultural collectives and subjected exchange to administrative controls in the 1960s and 1970s. Decollectivization shifted control over production and exchange back to households in the 1980s and early 1990s. In examining the effects of agricultural policy on land use in a Black Thai commune of northern Vietnam, we ask whether radical changes in policy caused similarly drastic transformations in land use.

This article aims to contribute to a growing number of studies on land use changes in the mountains of mainland Southeast Asia (Fox et al 1995; Long et al 1999; Xu et al 1999; Trebuil et al 2000). Focusing on 1 commune in northern Vietnam, it examines changes in forests, vegetation cover, and land under cultivation during the past 50 years. Our analysis of remote imagery and statistical data highlights the dynamic nature of land use: forests and agricultural fields increase and decrease over time. We also seek to enhance understanding of the socioeconomic forces shaping land use in the mountains of mainland Southeast Asia. In particular, we examine the effects of collectivization and decollectivization on land use. One may hypothesize that collectivization and decollectivization led to significant changes in land use because they implied comprehensive and radical changes in agricultural institutions. Our findings not only suggest linkages between policy and land use changes, but they also indicate that interactions between policy and practice go both ways. In addition, we find that other factors, especially technological change and marketization, also exert a significant influence on land use. Brief introductions of background and methods are presented, followed by a description of changes in agricultural policy and land use, and an examination of the effects of agricultural policy on land use in the Black Thai commune. We conclude by discussing the linkages between agricultural policy and land use in the postcollective countries of Southeast Asia as well as in a broader context.

Black Thai people moved into the mountains of what is today northwestern Vietnam in the first centuries AD (Wyatt 1982). The valleys and lower mountain ranges provided good conditions for wet rice agriculture and upland cultivation (Figure 1). Black Thai villages remained fairly autonomous over the centuries. The rugged topography and lack of infrastructure protected them against outside influences.
After 1954, however, Black Thai villages were integrated into the Democratic Republic of Vietnam. Today, there are approximately 400,000 Black Thai living in northwestern Vietnam. Virtually all Black Thai continue to be engaged in agriculture, which has remained the major source of livelihood (Nguyen and van der Poel 1993).

Chieng Dong commune, our study site, includes 10 Black Thai villages (Figure 2). The villages are located in the valley of a small river that flows into the Da River, one of the major rivers in northern Vietnam. Villagers work in paddy fields in the valley and in upland fields far up the surrounding slopes (Figure 3). The population in the villages has grown steadily at around 2.6% annually during the past 5 decades, from less than 2000 in 1950 to more than 6000 in 1997. These villages can be considered fairly representative of Black Thai villages, with one exception: road improvements have put them at a distance of only 7 hours from the lowlands.

Our research used data from 3 primary sources. First, we acquired SPOT satellite imagery for 1989, 1993, and 1997, and aerial photographs for 1952 and 1968. We interpreted the aerial photographs and satellite images manually and transferred the results to a 1:25,000 base map. The land cover maps were digitized and entered into a geographic information system (GIS) database (an illustrative sketch of the kind of change tabulation this database supports is given below, following the account of land use changes). We checked the accuracy of the land cover classifications on the basis of knowledge gained during numerous walks through the terrain. Second, we collected government statistics on agricultural production to complement the remotely sensed data. Local authorities had collected statistical data on population and agricultural output since 1958. Third, data on land use practices, implementation of state policy, and other factors with the potential to influence land use stem from 1 year of in-depth research in 3 villages of Chieng Dong. Research included semistructured interviews with a randomly chosen set of 65 households, direct observation, key informant interviews with elders, village leaders, merchants, and local government officials, and review of government documents.

Changes in agricultural policy and local implementation

The central government expanded the collectivization drive into the mountains in 1959 (Ban 1994). By 1961, almost all households in the valleys of the northwest, including those in Chieng Dong, had joined agricultural producer cooperatives. Control over wet rice and buffalo production and distribution shifted toward collectives. Corn and cassava cultivation as well as pig and poultry raising remained with individual households.

Collectivization came in combination with an ambitious program for mountain development (Chu 1962; Ban 1994). Local authorities constructed irrigation projects, distributed new seed varieties and chemical fertilizer, and provided technical advice to promote the intensification of wet rice production. They also designated large upland areas as “forestry land,” that is, land for forestry. The villages had to seek official approval annually for their upland fields.

Collective agriculture remained an unstable project in Chieng Dong, as in many other Black Thai villages. Collective control over production eroded after 1975, when the war against the South Vietnamese regime came to an end, removing a major motivation for collective production. People increasingly preferred working in fields and raising animals outside the collective.
The labor they contributed to the collective declined significantly, as did the share of land worked in common. Decree 100, promulgated in January 1981, responded to the widespread erosion of collective control across northern Vietnam through a partial devolution of management authority to households (Kerkvliet 1995). The decree legalized the “end-product contract,” under which cooperative leaders concluded annual contracts with members concerning the management of collective fields. Henceforth, members were to assume all basic production tasks and to be allowed to keep output in excess of a predetermined quota.

Implementation of the end-product contract in the cooperatives of Chieng Dong halted the erosion of collective control. Cooperative leaders concluded contracts with households in which the latter were requested to work in specific wet rice fields and in a certain area of upland rice fields. Households were required to meet output quotas for each plot. If they harvested more than the quota, they were allowed to keep the surplus. If production fell short of the quota, they had to make up the deficit from production outside the collective.

But the success of cooperative reform was short-lived. Households rapidly gained full control over labor allocation after a few years. The collectives gave up control over land preparation and sold most of their water buffaloes to households, which increasingly raised their own buffaloes. Similarly, collective control over output weakened. Much of the crop production in the uplands took place outside the collective distribution system. Only paddy output from wet rice cultivation remained under collective control to a significant extent.

Resolution 10, passed by the Communist Party in April 1988, called for virtually full-fledged decollectivization. Problems with the end-product contract had become widespread throughout the country and not just in Chieng Dong (Ban 1987). But the implementation of Resolution 10 had little effect on institutions in Chieng Dong concerned with agricultural production. Households had already gained extensive control over production in previous years. In addition, the villages failed to implement a key element of Resolution 10: they did not allocate the collective wet rice fields to households under the long-term lease arrangements mandated by the new policy. Instead, they continued to reallocate collective wet rice fields among households every few years.

Lowland traders began to pass through Chieng Dong in greater numbers in 1989, when central policy mandated the lifting of barriers on interprovincial trade. The private traders brought consumer goods, which had been notoriously scarce in previous years. They also purchased cassava and corn to meet the rapidly growing demand from feed mills in the lowlands. Market expansion also gave villagers access to new seed varieties of rice and corn. Chemical fertilizer became available in greater amounts at decreasing prices.

The nationwide program of land allocation reached Chieng Dong in 1994. The National Assembly had passed a new Land Law in 1993 that mandated the state to allocate land to households under long-term lease arrangements. Despite its importance at the national level, the new Land Law had virtually no effect on land tenure relations in Chieng Dong (Sikor 2001). Villagers openly protested the long-term allocation of collective wet rice fields, which motivated the local state authorities to exclude collective wet rice land from allocation.
Villagers continued to expand upland fields far up the slopes, ignoring formal demarcations of forestry land. They also maintained the practice of flexible adjustment of upland boundaries between neighbors from year to year, although these boundaries had been fixed on paper.

In sum, national policy on rural areas and people has changed radically during the past 5 decades. Yet as radical as the changes looked in policy texts, they turned out to be much more moderate in practice. People reacted directly to policy changes and adapted them to their own conditions and interests. In addition, decollectivization policy was in large part a reaction to changes in local practices that predated national-level reforms.

Changes in land use

Analysis of aerial photographs and satellite images demonstrates that land use in Chieng Dong has been highly dynamic during the last 5 decades (Figure 4; Table 1). Forest cover shrank and then increased. The area covered with scrubland expanded, remained stable, and finally decreased. The only constant trend was the increase in area under cultivation: the later the year, the larger the area under cultivation.

The statistical data support the dynamic picture portrayed in remote imagery. Wet rice outputs grew over the whole period, yet annual growth rates fluctuated widely (Figure 5). Upland rice output fluctuated, typically increasing when wet rice output declined, and vice versa. Cassava output was initially insignificant, then experienced strong growth, and finally gave way to skyrocketing corn output (Figure 6). The water buffalo and cattle populations exhibited different trends (Figure 7). The water buffalo population dropped in the 1970s and never reached its initial level again. On the other hand, farmers began to raise cattle in significant numbers only in the 1970s. Cattle husbandry boomed quickly and stabilized at a high level thereafter.

Remote imagery and statistical data suggest 3 periods of land use in Chieng Dong:

1. Agricultural production shifted from extensive upland cultivation to valley-based wet rice fields in the 1960s and the first half of the 1970s. Production had been very extensive in the 1950s, as indicated by the predominance of scrubland and open canopy forest in 1952 (Figure 4; Table 1).

2. Agricultural fields and cattle husbandry rapidly expanded up the slopes in the second half of the 1970s and 1980s, whereas wet rice cultivation stagnated. The land under cultivation almost doubled. By 1989, scrubland covered about three-quarters of the land.

3. Agricultural intensification set in around 1990. Intensive use of land grew rapidly, especially for cultivation of wet rice and corn, whereas extensive use in the form of upland rice and cassava farming declined. Agricultural intensification allowed forests to regenerate, although agricultural fields continued to grow (Figure 4; Table 1).

In sum, land use has been very dynamic during the past 5 decades. Villagers intensified production in the 1960s and early 1970s, drastically expanded the land under cultivation in the late 1970s and 1980s, and then shifted to more intensive uses again in the 1990s. The forests of Chieng Dong reflected changing trends in land use. They regenerated in the 1960s and early 1970s and then disappeared rapidly in the late 1970s and 1980s, regenerating once again in the 1990s.

Our results indicate that agricultural policy and land use have undergone radical changes during the last 5 decades and that these major changes roughly coincided.
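To make concrete the kind of change tabulation referred to in the methods (and summarized in Table 1), the following is a minimal sketch of a land-cover transition matrix computed from two co-registered class maps. It is not the authors' actual workflow (they interpreted the imagery manually and worked in a GIS); the class codes, array contents, and function name are illustrative assumptions only.

```python
import numpy as np

# Hypothetical class codes standing in for the digitized land cover
# classes named in the article; the numeric codes are assumptions.
CLASSES = {0: "closed forest", 1: "open canopy forest",
           2: "scrubland", 3: "cultivated land"}

def transition_matrix(earlier, later, n_classes):
    """Cross-tabulate two co-registered class rasters.

    earlier, later: integer arrays of identical shape, one class code
    per cell. Returns an n_classes x n_classes matrix whose (i, j)
    entry counts cells that changed from class i to class j between
    the two dates. Multiply counts by the cell area to get hectares.
    """
    assert earlier.shape == later.shape
    pairs = earlier.ravel() * n_classes + later.ravel()
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Toy stand-ins for two classified map layers (e.g., 1952 and 1989).
map_a = np.array([[1, 1, 2], [2, 2, 3], [0, 2, 3]])
map_b = np.array([[2, 2, 2], [2, 3, 3], [0, 2, 3]])

matrix = transition_matrix(map_a, map_b, len(CLASSES))
for i, row in enumerate(matrix):
    for j, n in enumerate(row):
        if n and i != j:
            print(f"{CLASSES[i]} -> {CLASSES[j]}: {n} cells")
```

The diagonal of such a matrix holds the area stable in each class; row and column sums give total area per class at each date. This is what allows gross expansion and contraction of forest, scrubland, and cultivated land to be distinguished from mere net change across the 3 periods.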
Can we thus conclude that collectivization and decollectivization policies caused the changes in land use? This conclusion would be premature. Associations between changes in policy and land use do not necessarily imply that policy changes transformed land use. Causation may take the opposite direction, with policy reforms responding to land use changes. Or, changes in land use may be due to other factors such as markets, technology, population, or climate. Analysis of the relationship between state policy and land use requires further discussion.

How did collectivization affect land use in Chieng Dong? The lack of hard data—on weather and taxation, for example—prohibits conclusive explanations. Our findings allow us to infer, however, that collectivization contributed to intensification. Collective organization of production facilitated the cooperation required for investments in water control and changes in paddy management practices. Besides collectivization, direct state intervention appeared to have a strong influence on land use. The demarcation of large upland areas as forestland generated disincentives for upland rice farming because villagers were confined to small areas and risked fines if they expanded beyond these areas. State support for new seed varieties, chemical fertilizer, and technical extension increased labor productivity in wet rice.

How did decollectivization influence land use? Here we need to differentiate between national reforms and the local-level erosion of collective control. Our material suggests that local-level erosion of collective control over production drove the expansion of land under cultivation in the late 1970s and 1980s. The loss of collective control “pulled” household production into the uplands because new opportunities opened up there. Continuing collective control over wet rice also “pushed” villagers into the uplands. Upland rice fields provided twice the yield on household labor that wet rice cultivation provided (6 versus 3 kg paddy rice per day of labor), and households also retained a larger share of output.

What factors explain the shift toward agricultural intensification around 1990? National decollectivization policy around 1990 had virtually no effect on land use in Chieng Dong. Resolution 10 did not cause any changes in land use because the shift toward household-based production in Chieng Dong anticipated the policy reform. Land allocation did not influence land use because it did not modify land tenure institutions at the level of the villages (Sikor 2001). Agricultural intensification in the 1990s was driven by market expansion and newly available technologies. New seed varieties and increasingly available chemical fertilizer at decreasing prices facilitated significant yield increases in wet rice cultivation. In connection with the rapidly declining fertility of upland soils, changing markets and technology boosted the returns on labor for wet rice above those for upland rice (5 versus 3 kg paddy rice per day of labor). Increasingly secure food supply, improved seed, development of a stable outlet, and increasingly favorable relative product price also motivated villagers to cultivate more corn.

One frequently cited factor is suspiciously absent from our discussion: population growth. As noted at the beginning, Chieng Dong's population grew rapidly during the past 5 decades. Population growth clearly influenced land use in the long term because it increased local food requirements.
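As a check on the growth figure cited earlier (population rising from a little under 2000 in 1950 to a little over 6000 in 1997, stated as around 2.6% per year), the annualized compound growth rate at the round endpoints is

$$ r = \left(\frac{P_{1997}}{P_{1950}}\right)^{1/47} - 1 \approx \left(\frac{6000}{2000}\right)^{1/47} - 1 \approx 0.024, $$

about 2.4% per year; the reported 2.6% is consistent once the "less than" and "more than" qualifiers on the endpoint populations are allowed for. A roughly threefold increase in local food requirements is thus the long-term backdrop against which the shorter, policy-driven swings in land use played out.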
Indeed, the villagers of Chieng Dong worked a much larger area of wet rice and upland fields in 1997 than in 1952. Forests receded to upper slopes and limestone rocks. Landscape transformations over the long term thus reflected the effect of population growth. Yet our findings call attention to other factors that have modified the effect of population growth on land use, in particular state policy, markets, and available technology. It is the latter factors that account for the highly dynamic nature of land use in Chieng Dong.

Our account of a highly dynamic landscape in Chieng Dong matches the literature on land use change in the mountains of mainland Southeast Asia (see Fox et al 1995; Long et al 1999; Xu et al 1999; Trebuil et al 2000). Agricultural land expands and contracts over time. Forests shrink and regenerate, facilitated by favorable climatic conditions. The dynamic nature of land use implies that short-term changes may differ from long-term changes in land use. Long-term trends can be hidden by short-term changes, just as one cannot assume that short-term changes follow long-term trends.

We surmise that collectivization and decollectivization shaped mountain landscapes in Vietnam and China. Although this is largely speculative, we hypothesize that collectivization provided means and opportunities for agricultural intensification. By comparison, Fox et al (1995) observed in 3 small watersheds in Thailand that land use became more extensive during the same period. Yet collectivization only led to agricultural intensification if it was accompanied by investment in wet rice cultivation. In the absence of such investment, collectivization drove expansion of upland fields through its emphasis on grain production (Xu et al 1999).

We further speculate that decollectivization caused an initial boom in production driven by the expansion of agriculture up the slopes, a reaction also observed by Xu et al (1999) in China. Subsistence needs initially remained at the core of production and growth. Thereafter, in the face of rapidly declining soil fertility, expansion was followed by more intensive forms of agricultural production. Ecological decline, new market and technological opportunities, and the lack of off-farm employment opportunities accelerated the intensification process, including the greater role of market crops (Donovan et al 1997; Long et al 1999). Decollectivization thus accelerated the transition toward more intensive agricultural practices in comparison with other parts of mountainous Southeast Asia such as Thailand (Fox et al 2000).

Our findings support the increasing attention paid to the influence of macro policy on land use (Mertens et al 2000; Sunderlin et al 2000). At the same time we suggest that the relationship between national policy and local land use is complicated by 3 factors. First, changes in local institutions may predate national policy reforms. Policy reforms may be a response to, not a cause of, changes in local practice. Second, changes in land use may be due to other socioeconomic factors. Changes in state policy often come together with changes in other factors such as technology and markets. Third, implementation of national policy and the resulting local institutions may differ from place to place. Local authorities and people may enjoy significant leeway in interpreting national policy. This last complication, local mediation of national policy, may be particularly relevant in mountain regions.
Mountains are typically characterized by physical remoteness and geographical conditions different from those found in other regions. The integration of mountain people into nation-states has mostly been a recent phenomenon. Mountain people enjoy more extensive autonomy than do their compatriots in the lowlands and have different types of social relations. In addition, the interests of local governments in the mountains frequently differ from those in other regions. If mountains, their people, and government interests are different, we may expect a relatively high degree of local mediation. National policy may thus affect land use in the mountains, yet its effects may be mediated in ways particular to mountain conditions.

We are grateful to the Ford Foundation office in Hanoi and the National Science Foundation (USA) for financial support of our field research.